Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hello and welcome to a Weekend News episode of the Leveraging
(00:03):
AI Podcast, the podcast that shares practical, ethical ways to
improve efficiency, grow your business, and advance your
career with AI.
This is Isar Meitis, your host, and this week we are going to
talk a lot about OpenAI and, more importantly, about Sam Altman and
his predictions of AI's impact on the near future.
We're going to continue with that into the topic of the impact of
(00:26):
AI on jobs from several different sources.
We're gonna talk about the emotional bonds forming between
people and AI, and we're gonna talk about the AI news, or the
lack of AI news, at Apple's Worldwide Developers Conference.
And then there's a lot of exciting rapid fire items,
many of them coming from OpenAI this week, who have made some
very serious announcements and releases.
(00:48):
And there's some major news from Meta as well.
So we have a lot to talk about.
So let's get started.
AS I mentioned, we are going tostart from Sam Altman's blog
post.
Sam has been releasing theseblogs for a while now, every few
months, and he's been revealingwhere he thinks the world is
(01:09):
going.
His latest blog post is calledThe Gentle Singularity, and he
basically outlines where hethinks we are right now and
where he thinks we're going.
And the way I wanna do this,which I think will be the most
effective, is to give youseveral different quotes that I
chose and I'll explain first whyI chose them.
And then we're gonna talk aboutwhy they're important.
So the reason I chose these quotes is because I think they
(01:30):
establish the following baselines.
One, they establish urgency; two, they explain the current
AI potential.
Then they set very concrete and clear timelines for what's coming
in the near future.
They highlight the exponential acceleration of AI.
They anchor the future of humanity in values like
empathy, and AI's alignment to that.
(01:52):
And it illustrates tangibleconnection to how AI currently
impacts resources in the world,such as energy and water.
So let's go through thesequotes.
I'll mention the importance ofeach and every one of them, and
then I'll do a quick summary inthe end.
Quote number one: "We are past the event horizon; the takeoff has started."
What he basically means is that we have passed the point of no
return in this trajectory to singularity,
(02:15):
the point of no return in the progress toward AI that is
continuously improving on a continuously accelerating scale,
and there's no coming back.
And if that comes from Sam thatis seeing the most advanced
research done in the worldtoday, you can assume that
(02:35):
that's reality and not somethingthat he thinks as wishful
thinking.
Quote number two:
"In some big sense, ChatGPT is already more powerful than any
human who has ever lived," and we need to agree with that.
And I'm going to connect this to something else that he said in
an interview at the keynote of Snowflake,
and we're gonna go back to that interview afterwards.
(02:56):
But over there he was asked about AGI, and what he basically
said is that if anybody on Earth had seen today's AI back
on the day they launched ChatGPT, everybody would've called it
AGI.
And I agree with him, right?
We had zero expectations when ChatGPT came out in November of 2022;
it was very, very basic.
If anybody had said that in two years it would do all the
(03:18):
incredible things that it does right now,
we would've labeled it AGI.
We just don't call it that right now, but what he's saying is
that AGI doesn't matter.
We are on this exponential curve, and it's gonna keep on getting
better and better.
And the definitions are very fluid anyway.
But what this statement immediately means is that you
don't have to wait for AGI for it to have a very significant
(03:40):
impact on humanity and on jobs and on society and so on,
because it's already better than most humans at many, many
different things.
Think about just its ability to go through huge amounts of data
and make sense of it, and connect the dots and give a
detailed summary, and now take actions as well.
(04:00):
In my company and companies thatI train and companies that I
help with consulting, we are nowdoing things in hours that used
to take teams weeks.
That shows you, again, that AIon those particular cases is
better than any human who everlived, which is what exactly Sam
is saying.
The third quote is maybe the most profound in this blog post,
and it's the following: "2026 will likely see the arrival of
(04:24):
systems that can figure out novel insights.
2027 may see the arrival of robots that can do tasks in the
real world."
This is next year; this is six months from now.
He's saying that there's gonnabe an AI that can make new
discoveries that humans did notdiscover before.
That goes way beyond what we'reseeing right now.
(04:45):
He is also saying that in 2026,companies will be able to solve
their biggest problems just bythrowing more compute at it.
So the AI will be.
Smart enough and capable enoughto solve problems that companies
cannot solve today or that areextremely difficult to solve
today on its own, just by givingit access to more compute.
(05:06):
That's next year.
And when Sam is saying something is coming next year, it means
he's seeing it in the lab right now.
The stuff that we are getting access to is usually 7, 8, 9
months old.
So what they're gonna release in 2026, they're already testing at
OpenAI, and when he's saying that, he's not making wild
predictions; he's basically sharing what he knows and not
(05:27):
what he predicts.
Now, to make it more extreme, the next quote is: "In the 2030s,
intelligence and energy, ideas and the ability to make ideas
happen, are going to become wildly abundant."
So what he is basically saying is that within five years, which puts us
in the 2030s (or four and a half years, if you really want to be
more specific),
(05:48):
we'll be able to basically use AI to do anything we can
imagine, and he's saying that will drift to other areas of our
lives because AI will be able tocreate stuff including more
clean energy that we'll be ableto use for wherever we want,
such as creating more AI thatwill be able to create more
things.
So you see how this curve is notjust impacting AI intelligence,
(06:11):
it's gonna impact everything.
What this basically means isthat we're gonna see an age of
innovation like we've never hadthe ability to see before.
Now, this could obviously be used for good or bad, which he is
not stating, but it still means that's where we're going in the
very near future.
He's also talking about theacceleration of the process.
(06:31):
He's talking about that thefirst thing that compute can be
used for is better and faster AIresearch.
He also related to that in theother interview that I mentioned
at the Snowflake keynote, wherehe was asked if he could have 10
x compute, what would he applyfor?
And he said, more AI research sowe can do more stuff, which is
obviously a very interestinganswer.
(06:51):
The quote from the blog is, ifwe can do a decade worth of
research in a year or a month,then the rate of progress will
obviously be quite different.
So if you think the acceleration right now is scary, insane, and
beyond anything I think we've ever seen before, think about how he thinks.
He thinks that every new capability and every new unit of compute
and every new breakthrough should be applied first and
(07:12):
foremost to more AI research,
meaning doing things even faster. He is aiming to take these
capabilities and be able to do in one month the AI research
that used to take a decade.
Just think how unreasonable thatis, but that's what he's working
(07:33):
towards and he thinks it'sachievable.
He also connects that to theactual infrastructure.
So the next quote is, as datacenter production becomes
automated, the cost ofintelligence should eventually
converge to near the cost ofelectricity.
Basically with the advancementin data center creation with
robots and production lines,that will be significantly more
(07:54):
efficient than they are today.
The actual cost of intelligenceis gonna be, well, very close to
zero.
So the cost of using theintelligence will be the cost of
using the electricity to run thedata center.
So this means that beyond thealgorithms and the capabilities
of the AI itself, the wholeinfrastructure around it is
going to become significantlymore available, which will
(08:15):
enable this to run even fasterand do even more things.
Now, speaking about data centers and their impact on the world,
Sam also shared how much energy and how much water a single
ChatGPT query uses.
So the next quote is: "The average query uses about 0.34
watt-hours, about what an oven would use in a little over one
(08:37):
second.
A high-efficiency light bulb would use that in a couple of
minutes."
Now, he also shared that this one query would use 0.000085
gallons of water, which sounds very, very little.
So Sam puts it in the perspective of saying, oh, look
how few resources we're actually using to return a query.
(08:59):
But what he's completelyignoring is how much of these
actually happen every day.
So I did some math, and I also did some online research, and got
to the same conclusion in my math and in the research: that as
of right now, if they stop growing, they're doing about a
billion queries per day.
So the amount of watt-hours and the amount of gallons of water
(09:20):
that they're consuming is 85,000 gallons every single day and
340 megawatt-hours.
Now you do that over a month.
That's 2,550,000 gallons and 10.2 gigawatt-hours.
Now, these are really large numbers, but they're very hard
to comprehend.
(09:40):
I did some additional research.
A typical shower takes about 17 gallons.
So 2.5 million gallons equals 150,000 showers' worth of water that ChatGPT
is now consuming in a single month.
And 10.2 gigawatt-hours can power about 11,500 average homes in the US for a month.
(10:03):
This was not on the grid two years ago.
These are incredible numbers, and that is while OpenAI is
growing at 50% every four to six months.
Now, if you want an even geekier statistic about the 10.2
gigawatt-hours: the DeLorean in Back to the Future took 1.21
(10:24):
gigawatts to travel through time.
That means that the amount of power that ChatGPT consumes in
a month could send the DeLorean back and forth in time 8.4
times.
If that doesn't convince you that is a lot of power that
they're consuming, I don't know what will.
But putting the DeLorean and the geeky comparisons aside for a
second, it means that these tools are consuming a huge amount of power
(10:46):
and a stupid amount of water.
And we were just talking about ChatGPT.
There's obviously a lot of other tools out there that, combined,
are actually using more power and more water than ChatGPT
alone.
They're obviously the biggest, but the combination of all of
them is ridiculous,
and we need to take that into account.
And yes, if Sam is right and AIwill allow us to figure out a
(11:06):
way to create clean energy andas much as we want with fusion
or any other solution, thenthere's no problem.
But in the transition period,we're going to generate a
ridiculous amount of carbonfootprint on the planet and use
a lot of other resources likewater and other stuff.
And that's definitely not a goodthing.
I have a special and exciting announcement to make, especially
(11:28):
for those of you who love this show and have been following it
for a while: on Tuesday, June 17th, we're going to do a live
session of this show celebrating episode 200.
Yes, I know.
Crazy, insane.
Episode 200 of this podcast.
The episode is going to be the ultimate AI showdown.
I've gathered four of the top AI experts in the world today.
(11:48):
Each is an expert in a specific field, and each of them is gonna
compare the top current tools and show you the pros and cons
of each tool and which tools to use for which use cases.
This is one of the biggestproblems that people have today,
and we thought that episode 200,would be a great opportunity to
show you how to solve that.
So episode 200, live noonEastern, on Tuesday the 17th,
(12:13):
there's gonna be a link on theshow notes to come and join us.
Don't miss this.
It's gonna be epic.
You're gonna learn a lot.
It's gonna be a lot of fun.
We're gonna announce a bunch ofnew things.
So if you're not driving, stopright now.
Click on the link, sign up forthe event, come join us on Zoom
or on LinkedIn Live.
I'm looking forward to seeingyou there.
And now back to the news.
Quote number eight: "People have a long-term important and curious
(12:36):
advantage over AI.
We are hardwired to care about other people and what they think
and do, and we don't care very much about machines."
What he's claiming, I think, isthat we will endure whatever
happens with the AI because wedon't really care about the ai.
We care about each other, and Ireally hope that he's right.
(12:57):
But we will see in the next segment about OpenAI somebody
else from OpenAI sharing about the emotional connections that
humans are now forming with AI,
and you will see that this statement may not be that
accurate.
the next quote is morespecifically about job creation
and Sam said, if history is anyguide, we will figure out new
things to do and new things towant and assimilate new tools
(13:20):
quickly.
He's basically referring toprevious revolutions and how
humans did not becomeirrelevant, but we evolved and
found new things to do.
Again, I really hope he's right,but I think there is some broken
assumption in this process.
In every previous revolutionthat we had, what humans did is
moved further and further awayfrom manual labor and more and
(13:43):
more into using our brains inorder to achieve more.
So our intelligence was theresource that allowed us to
evolve beyond what therevolution was trying to solve.
If you think about theindustrial revolution, it's
taking machines to do the thingthat humans did, and we just
operated the machine so we cando more stuff.
If you think about electricity,well, electricity allows us to
(14:03):
do a lot more things and again,reduce the amount of labor we
needed to do.
If you think about the computerera, then computers allowed us
to design and do things thattook us manual work before that.
Now we could do quicker, butagain, it was all enabling our
intelligence to be used in abetter way.
What we are replacing right nowis our intelligence.
(14:26):
So the only thing we have leftabove the machines, presumably,
is emotions, and he referred tothat, as I mentioned in the
previous component.
How is that going to play out?
I really, really don't know, butI think comparing this
revolution to previousrevolutions has a very false
assumption as the very core partof what it assumes because
(14:48):
intelligence was this thing thatgot us through and that's not
gonna be our benefit anymore.
Quote number nine is something I'm gonna talk about a lot, with
other statements, moving forward in this segment.
And the quote is: "The sooner the world can start a conversation
about what these broad bounds are and how to define collective
alignment, the better."
He specifically talks aboutalignment of AI and alignment of
(15:10):
AI specifically to human sharedvalues.
And I have several issues with all of that.
One, he and a lot of other leaders are saying someone
should finally do this.
People should pay attention.
The world should come together.
But what are they doing in order to make this happen?
Did Sam initiate an international body?
Did he start a collaboration with the other leading labs?
(15:32):
Did he do any of that?
No.
He's just basically saying somebody needs to finally do
this, or it's going to be too late.
Now, even the stuff that they'redoing internally, let's say
they're doing the perfect work,that doesn't mean anything about
all the other labs.
That doesn't mean anything aboutthe open source world.
That doesn't mean anything about how this is overall going to turn out
when it comes to alignment or the negative impacts that AI can
(15:53):
have on humanity, society,workforce, et cetera.
And it's just amazing to me thatall these leaders, and again,
we're gonna mention a fewothers, later on in this segment
are saying that somebody shouldfinally do this, but they're not
doing much to do the broaderthing other than the things
they're doing within theircompany.
The last quote, which I really like because it puts it
all together,
(16:14):
is this: "Intelligence too cheap to meter is well within grasp."
He is basically saying that ASI, artificial
superintelligence that is gonna be practically free, is within
reach.
It's something that he sees as a practical situation in the very
near future.
And if we're looking again at the specific timeframes that he gave
(16:35):
us, it's sometime between 2026 and the 2030s.
And the 2030s, yes, it's a whole decade,
but with the rate things are moving
right now and the way he's projecting the trajectory, it
might come sooner than he even believes.
So now I want to connect this toa few additional things he said
in the keynote session at theSnowflake event, the moderator
(16:57):
asked, what is your advice forenterprise leaders navigating
the AI landscape in 2025?
So Sam said, just do it.
There is still a lot ofhesitancy.
And then he continued and saidThe companies that have the
quickest iteration speed to makethe cost of making mistakes the
lowest win.
What he means is that applyingAI right now with all your might
(17:20):
and knowing that it's still notgoing to be perfect, is gonna be
the way to come up on topbecause you will learn and
iterate significantly fasterthan other companies.
In my courses and when I workwith companies, I have the five
laws of success In the AI era,one of those laws is called the
law of continuously improvingcompromise.
What I mean by that law is, yes,AI is not perfect, but it's
(17:41):
getting better and better very,very fast.
And if you learn how to apply itand if you learn how to iterate
through the improvements, youcan gain significant benefits,
both in the short and long run.
And so I agree with Sam on thata hundred percent.
By the way, the same question was asked of the CEO of
Snowflake: what do enterprises and companies need
to do in the next 12 months?
(18:02):
And he said curiosity, not caution, is the more valuable
trait right now.
And I agree with him as well onthat.
We have the ability toexperiment.
We need to do it safely.
We need to find the rightinfrastructure and the right
security measures to do that.
But being able to experiment andbase your innovation on
curiosity and put that in thehands of every employee in the
(18:24):
business is explosive in itsability to drive growth and
efficiencies.
And if you haven't experiencedthat, you need to find ways to
do that and give that to yourcompany more about that later.
Now going back to Sam's quotes,one of the following questions
was, what will be different nextyear?
And Sam said the following, andI think we'll be at the point
(18:44):
next year where you can not only use systems to automate some
business processes or build these new products or services,
but you can really say, I havethis hugely important problem in
my business.
I will throw a ton of compute atit, and the models will be able
to go figure out things thatteams of people on their own
can't do.
(19:04):
And the companies that haveexperience with these models are
well positioned for the worldwhere they can say, okay, AI
system, go redo my most criticalproject, and here's a ton of
computing.
People who are ready for that I think will have another big step
change next year.
So what is Sam basically saying?
(19:25):
He's saying that again, buildingthose muscles of using AI
systems will go far beyondautomation and efficiency.
They will be able to dramatically change businesses and
solve the biggest problems, the biggest bottlenecks that
companies have,
with AI solving the problem for them if they can afford to pay the
bill.
(19:46):
And obviously that will be relative to the size of your
company.
If you have a $10 billion company, you may invest $50
million in compute,
and if you have a $10 million company, you may invest $200,000
in compute, but one way or the other, it will allow you to
solve problems way bigger than you can solve right now with
your current team, especially with the amount of other stuff
that the team has to do.
(20:07):
So before I summarize thissection, I want to go to some
additional points from othersources.
A recently shared article looked at the
impact of AI on the consulting world.
McKinsey reported that AI can boost the productivity of a
consultant 10x by automating tasks like data analysis, report
generation, and so on.
(20:27):
That's from a study that they'vedone in 2025.
So it's a very recent study andthis obviously threatens the
whole concept of how consultingcompanies work.
Now, in addition to the fact that it enables consultants to do
the work at 10x the efficiency, it also allows
non-consultants, meaning the companies themselves, to build
workflow strategies and analysis capabilities, reducing the
(20:48):
dependency on consulting firms.
Now, this report, which is based on interviews with Fortune
1000 companies,
basically says that CEOs want insights yesterday, not in
six months.
If any of you have had the opportunity to work with large
consulting firms, you know it's a very long and very expensive process that can
stretch over months and sometimes years and will cost you millions of
(21:11):
dollars.
And AI can now do some of thosethings in minutes or days.
This explains why per a Deloittereport, again, from this year,
80% of top consulting firms areinvesting heavily in AI training
for their employees, because they understand that they will become
obsolete if they don't do that, and that might happen even if
(21:31):
they do. Now, while this report was specifically about
consulting companies, I have said multiple times on this podcast
that it goes way beyond consulting.
It goes to anything that isbased on billable hours.
The concept of billable hours isgoing to crumble in the AI
future.
The biggest risk, I think, is law firms: in a year or two,
especially if you're looking at what Sam said, AI will be able
(21:52):
to do most or maybe all of the paralegal work.
In an average law firm, 30 to 40% of billable hours come from
paralegals.
What if in two to three years nobody will be willing to pay
you for those hours?
Can a large law firm survive and maintain fancy offices in a
downtown high-rise if they lose 30 to 40% of their revenue? The answer is probably
(22:15):
not.
What does that mean?
It means that companies that are basing their income on billable
hours have to find different ways to run their
businesses and shift to them.
That's a complete mindset shift from a standard operating
procedure for many companies and many industries around the world
for decades. And from that to another point: Palantir CEO
Alex Karp warns that AI is already reducing entry-level
(22:37):
jobs and opportunities,potentially creating significant
social disruption, if notaddressed quickly.
Karp also warns that we have towork very, very hard for this to
happen.
He used the terms we have towill it to be, because he's
saying if we don't do that,there's going to be some
significant negative impact tosociety.
(22:57):
And he's saying that the leadersof the tech world are ignoring
this right now.
His specific words were: those of us in tech cannot have a tin ear
to what this is going to mean for the average person. Just two
weeks ago we heard the same exact thing from Dario Amodei,
right?
That said that AI is gonna takeaway a lot of entry jobs.
(23:17):
The problem that I have with allof these, and I mentioned that
earlier in this episode, is thatall of these people are saying
someone should do somethingabout it but they're not doing
anything themselves.
They're raising the flag, which I think is very, very important,
because they did not do that before. I did;
I was shouting it from the balconies of this podcast for
over two years now.
(23:38):
But the reality is the people atthe helm are going faster and
faster, and they're not stoppingfor a minute to get all of them
together and to do somethingtogether other than saying that
somebody should do somethingabout it.
And that is very scary to me.
So final point before Isummarize this whole initial
long section of the podcasttoday.
(23:59):
In the recent report by PwC called the PwC 2025 Global AI
Jobs Barometer, they found some shocking statistics: employees
who have higher AI skills make 56% higher wages
compared to their peers with no AI skills in the same roles.
(24:21):
That premium doubled from last year's 25% for people with high AI
skills.
The other thing that they found is that industries that are most
exposed to AI saw 3x higher revenue growth per
employee compared to those that are less exposed,
Meaning AI is driving actualreal benefits, financial
benefits to companies, and themore exposed, meaning the
(24:44):
industries in which AI can beused more effectively are seeing
three x the improvement versusindustries where AI can be
applied less.
And the last piece of information from this that I
think is fascinating is that skills sought by employers are
changing 66% faster in the industries most exposed to AI,
(25:05):
meaning the things that you're gonna get hired for
are changing dramatically in industries that are exposed to
AI.
Yes, there are industries thatare gonna move slower, that are
gonna be more protected, but inmore and more industries, and if
you connect that to what Samsaid about where AI is going in
the next year, you'll understandthat it's gonna be very few
industries that are gonna getprotected from ai.
(25:27):
You need AI skills in order tosurvive.
So what does all of this mean?
It means that the pace thatwe're seeing right now that is
staggering and scary is notslowing down.
It's actually accelerating.
Sam is saying that very clearly.
It also shows that AI trainingand AI knowledge is the key to
success, both on the personallevel and on the company level.
(25:49):
As an individual, if you wanna keep your job, having
significant AI skills doesn't guarantee anything, because
things are going to happen that are beyond our control, but it
dramatically increases your chances of keeping your job.
It also dramatically increasesyour chances of finding another
job, because many companies willhire based on AI skills and not
(26:10):
necessarily based on a degree ina specific area.
And in the meanwhile, you canmake more money, actually a lot
more money, 56% more money thanpeople in your position that do
not have AI skills.
Now, the same thing is true forcompanies.
Companies need to make drasticchanges in the way they train
their employees to haveimmediate benefits from ai.
(26:33):
That's both on the strategicaspect as well as on the
tactical day-to-day aspect.
And again, connecting this towhat Sam Altman said about
what's coming in the nearfuture.
Knowing now how to use AI withthe current levels of AI will
also prepare you to use AI nextyear.
Again, not in five years, nextyear, to do significantly bigger
and more strategic things.
And I'm gonna use this for ashameless plug.
(26:55):
We have been delivering AI training to individuals and to
companies for over two years.
My background as a CEO ofcompanies and startups, one of
them that has grown to a hundredmillion dollars in sales in just
a few years, gives me a greatinsight to what is working and
not working in companies and forindividuals.
Right now, we have publiccourses that you can sign up to,
(27:17):
the AI Business Transformationcourse that we have been
teaching for over two years now.
Thousands of people have takenthe course and have transformed
their careers and theirbusinesses because of that
information.
The next public course starts on August 11th, and if you follow
the link in the show notes or go to the Multiplai website,
you'll be able to sign up for the course.
(27:38):
If you haven't done anythinglike this for yourself, do it
because it can dramaticallyimpact your career and your
financial livelihood.
But in addition, we've beenteaching regular workshops,
specifically tailored tocompanies.
These workshops vary in size and intensity, and as I mentioned,
they are custom to the specific needs of specific companies.
(27:58):
I've been teaching these workshops to companies as small
as five to $10 million in revenue, and my largest client
is a $5 billion international corporation.
And everything in between. These workshops become accelerators
for AI adoption in the companies, and they support the
leadership on the strategy sideand how to create the right
environment for AI to thrivesafely as part of the
(28:20):
organization's way forward.
And it also allows employees tobenefit from immediate skills on
the tactical level to help themdo their day-to-day job faster,
better, and cheaper.
If that's something you'reinterested in, please reach out
to me on LinkedIn.
I will gladly talk to you, sharewith you exactly what we do and
the successes that we had with ahuge range of companies.
(28:41):
And on to the next topic, still from OpenAI, but in this case
a post on the OpenAI developer community by Joanne Jang, who is
the head of model behavior at OpenAI.
The post is titled Some Thoughts About Human-AI
Relationships.
So Jang said: lately, more and more people have been telling us
that talking to ChatGPT feels like talking to someone.
(29:04):
They thank it, confide in it, and some even describe it as
alive.
Now, in combined research that was done by OpenAI together with
the MIT Media Lab, they found that more and more users are
forming emotional bonds with ChatGPT, and most likely with other
similar AI platforms.
Now OpenAI, per their claim, is trying to make ChatGPT be,
(29:28):
and I'm quoting, warm, thoughtful, and helpful, which
sounds awesome, but it is exactly the kind of behavior
that will cause people to develop emotional dependency on and
connection to these models.
But let's say even this is the best case of dependency.
What happens with the other labs?
Who promises us anything, and who the hell decides what kind of emotional
(29:50):
connection the AI should be able to develop with humans?
It literally impacts the lives of people and how they feel,
not just their jobs and their financial livelihood, and
somebody in a leadership position in a lab is going to
decide that for you and for your kids.
And we're going to talk moreabout that later on when we talk
about some cases with meta thathappened recently and how
(30:11):
terribly wrong this can evolve.
Now, one of the latest updatesthat OpenAI did in the past week
and a half is that they updated voice mode.
Those of you who are not using voice mode, first of all, you
should try it.
It's an amazing way to engage with ChatGPT and other tools
as well.
I do it every single day, and it's a huge upgrade from the
previous version.
It feels a lot more human-like, and the studies are showing that
(30:34):
some users, particularly the ones that are using voice mode,
and even more so the ones that are using voice mode with a voice of the
opposite gender, report higher loneliness and dependency after
only four weeks of using these tools on a regular basis.
Now, I must admit all theconversations that I have with
chat GPT are all professionaland work related.
So I'm probably not in the samebucket as some of these people,
(30:55):
but I can definitely see how somebody who's lonely and
looking for somebody to talk to, or somebody to open their heart
to, and they don't have such a person, will do that with ChatGPT
and will develop a connection with it, because it's always very
understanding and warm and comforting. And that is very,
very risky, because it's gonna drive people to even more
loneliness, because they will have fewer and fewer connections with
actual humans.
(31:16):
Now the good news is that OpenAIis recognizing that this is an
issue.
Again, I'm very happy aboutthat, that they're openly
speaking about this issue.
On the other hand, going back towhat I said before, what are
they actually doing on thebigger picture?
Not just the OpenAI level, butwhat are they doing on the
bigger picture together with theother labs, together with
governments, together with aninternational body to address
this?
But right now the answer iszero, at least as far as I know.
(31:38):
So let's switch to the next deep dive topic, which is Apple.
But before we dive into what I would call the embarrassment
of AI announcements at their WWDC conference this past week, they
released a very interesting research paper just before the
conference.
So Apple research has revealed that leading AI models struggle
with reasoning, suggesting artificial general intelligence (AGI) is
(32:00):
still a long way off.
Now, the way they did the research is they gave multiple
advanced AI models
complex puzzles that humans can actually solve relatively
easily, and these models failed miserably at solving these
puzzles.
So basically their conclusion was that today's models don't
really have intelligence; they just mimic intelligence, and hence they can do the things
(32:23):
that they're doing, but they're not on a path to achieving AGI,
meaning they're not on a path to being as intelligent as humans
across the board on multiple types of tasks, or, as they said,
and I'm quoting, there are fundamental barriers to generalizable
reasoning.
Now, two things about this.
First of all, they're not the first people who are saying that:
(32:43):
Yann LeCun, the Chief AI Scientist at Meta, has said multiple times
that he believes that large language models are not the path
to achieving AGI.
So that's one.
Two, there's a very serious irony in the fact that the
company that is currently lagging by a big spread
behind everybody else on AI, Apple, is the one that's saying
(33:04):
that everybody else, who has a huge lead on them, is on the
wrong path, the very path Apple is trying to follow to catch up.
So some would argue that thispaper is just there to maybe
explain or justify or maybereduce the level of impact that
the fact that they're so farbehind actually has on
perception.
Either way, as I shared with you, other people do not feel this
(33:25):
way.
Both Sam Altman, Dario Amodei, and many others definitely see the path
to AGI and beyond in the next few years.
But that leads us into the WWDC conference that was held this
past week.
And the one clear winner of the WWDC conference is OpenAI,
(33:46):
because Apple is still far from delivering the personal
AI-driven Siri that they promised exactly a year ago at the
previous conference,
where they made a huge hype around marketing these
capabilities that they do not actually have and are not
even close to having. And all they shared, all the big news
that they shared about AI, is powered by ChatGPT.
(34:08):
Features like image generationand screenshot analysis that are
now going to be available insideof the Apple ecosystem are
actually powered by OpenAI andnot by anything that Apple has
developed themselves.
They have stated that the next Siri has been pushed to 2026,
which is a full year away, and we don't know when in 2026.
So this could be early 2026,which would be seven to eight
(34:31):
months from now, but this couldbe late 2026, which will be
maybe a year and a half fromnow.
Either way, they're very, veryfar behind and if you think
about what is happening rightnow, Google Gemini already
powers most or all of Android'sphone stuff.
They're forcing people to switchfrom the old Google assistant to
Gemini powering the phones.
(34:51):
So they already have millions ofphones operating with this new
capability.
I think most people are notaware of how much they can do
with Gemini on their phones, butthey will find out definitely
the next few months, definitelybefore a year to a year and a
half from now.
Combine that with the fact that Meta Ray-Bans have already sold
over 2 million units.
Combine that with the fact that OpenAI has just acquired Jony
(35:13):
Ive and his startup, and they're going to develop devices that
are gonna run ChatGPT and are gonna be probably some kind of
wearable device.
And you understand that Apple isin a very serious situation
right now.
Apple's stock has dropped 20%from the beginning of the year.
Now, while the decline in the first four months was more or
(35:34):
less aligned with the S&P 500 overall and had a lot to do
with the tariff war between Trump and the world, the S&P 500
has recovered and is now at a higher level than at the beginning
of the year, and Apple stock is 20% down.
Now, while Apple is saying that they are trying to deliver
something that will be aligned with their values and the
(35:55):
level that they're expecting their products to have, and
they have all these reasons why they're delaying the deployment,
it's nothing short of embarrassing that they're so far
behind. And the fact that they will only release it in 2026 raises
the question of what exactly they will release in 2026.
Will it be something like the current Gemini or ChatGPT?
Because by then, ChatGPT and Gemini will be in a completely
(36:18):
different place.
Think about what I shared with you about what Sam said at the
beginning of this episode.
They will be able to solve very serious company-wide problems and
make novel discoveries with AI, and Apple will have a more
advanced Siri.
Still not a very good situation for Apple to be in.
(36:38):
Now, yes, they've launched a lot of other stuff: they have the
Liquid Glass redesign that looks really, really cool,
some new features, an overall new Phone interface that
puts the screen and the contacts and the calls and the
voicemail into one single scrolling environment,
improvements to Messages, some new gaming capabilities,
Vision Pro and Apple TV personalization.
(36:58):
A lot of other stuff that theyannounced.
But the reality is the wholeworld right now is focused on AI
and its capabilities, and Apple has nothing to offer at this
point. For the company that has been maybe the most innovative
company in the computer world for the past two decades,
that is a very serious situation.
Now you want to take it evenfurther?
(37:19):
the ninth US Circuit Court ofAppeals denied apple's request
to stop.
The motion to force the companyto immediately stop charging
fees on in-app payments madeoutside the app store.
so far Apple collects 12 to 27%commission on external
transactions not happeningwithin the app store, but
(37:39):
happening in apps after peopleinstall them.
And that makes Apple billionsevery single year.
And this is a part of aninjunction in the Epic Games
versus Apple antitrust case from2021.
So they're obviously going tocontinue to appeal this, but
this shows you that beyond thelack of innovation and AI
(37:59):
issues, they have some other bigclouds and some heavy rains
coming that they need to dealwith.
That's it for the deep dive, andnow we have a lot of rapid fire
items to talk about, includingsome huge announcements from
OpenAI.
There is growing evidence that Google AI Overviews (the
previous version, not the latest one they just released) have
(38:20):
caused a devastating decline in traffic to news publishers
across the US.
A recent report by the Wall Street Journal is citing that
traffic to news publishers has dropped between 36 and 44% in the
past three years.
And yes, AI Overviews in Google just launched a year ago, but they have
(38:44):
added to this already alarming trend of news publishers getting
less and less traffic.
Specific numbers: Business Insider's traffic plummeted 55%.
HuffPost and the Washington Postlost nearly half their search
audience and one in every 10journalists lost their jobs,
which is obviously the outcomeof all of that.
This is a combination ofinformation from SimilarWeb and
(39:07):
the Bureau of Labor.
Now, there are serious warnings from the heads of this
industry: the CEO of the Washington Post,
William Lewis, calls AI-generated summaries a serious threat to
journalism that should not be underestimated.
But to broaden this just like Idid for you with the billable
hours concept, this problem isnot just for news outlets, any
company that depends on searchtraffic driving their business,
(39:31):
whether through actual goods andservices or through ad revenue,
like in the news industry is ata very serious risk unless they
find other ways to drive trafficand clients to their websites.
There is going to be very soonprobably a parallel web in which
agents and AI tools will crawldata and will have no user
(39:53):
interface.
And the companies who willfigure out how to create this in
the most effective way will gainquickly market share over those
who don't because there's gonnabe less and less human traffic
on the web in the next fiveyears and more and more agent
and AI traffic.
And if you won't figure this outand your business depends on
(40:13):
that traffic, your business isdoomed.
And now to some big news from OpenAI.
As I mentioned, there's a lot of stuff that has happened with
OpenAI this week.
First of all, OpenAI reached a very significant milestone,
which is they are on a path to $10 billion in annualized
revenue: their June
revenue is putting them on a path to making $10 billion in the next
(40:34):
12 months.
This is up from a $5.5 billion pace in just December of 2024.
So they nearly doubled their monthly revenue in just six
months.
That's explosive growth at really big numbers.
They didn't double from 2 million to 4 million, but from
5.5 billion to 10 billion in just six months.
(40:57):
And per their spokesperson, that comes from across all the
different things that they offer:
sales of the consumer product, ChatGPT business and
enterprise products, as well as API services.
So all three channels in which they're making money are
growing significantly.
It also establishes them as the undeniable king of the AI world.
Coming in second is Anthropic, which just hit $3 billion in annualized
(41:19):
revenue, which is very impressive as well.
But it's less than one third of ChatGPT's.
Now, going back to OpenAI, per them they're on track to hit their revenue goal
for this year of $12.7 billion.
They're probably going to surpass that at the current pace
of growth.
But their real projection, which if you remember they shared
earlier this year, is that they're aiming to hit $125
(41:39):
billion in revenue by 2029, which would make them profitable
somewhere around that time.
Is the path to that numberclear?
Not to me, maybe to some others.
but with everything that's goingon right now and with what I
shared with you at the beginning of this episode, where next year
you might be able to throw X number of millions of dollars
into AI compute through ChatGPT to solve really big company
(42:02):
problems,
that creates an amazing shortcut to $125 billion, because companies
will throw millions or tens of millions at big problems, each
just to solve them quicker.
Speaking of AI pricing and what it can get you:
OpenAI just slashed the price of the o3 model by 80%,
(42:22):
making it significantly more affordable than it was before,
and it's now at $2 per million input tokens and $8 per million
output tokens.
This is a very significant decline in pricing for their
latest reasoning model.
o3 is a fantastic model, and it's now two to three times
cheaper than Gemini 2.5 Pro.
(42:43):
It is also cheaper than GPT-4o.
So if you're using ChatGPT for any AI function through the API, whether
you're writing code, you're developing applications, you're
creating assistants and connecting to them, or using
it in any other way, and you use GPT-4o
because it was the best value for money, you should
consider switching to o3, because o3 is now cheaper
(43:06):
than GPT-4o.
It is significantly cheaper than Claude Opus 4, so it is
definitely worth investigating and testing this new model.
The only disadvantage, by the way, of switching from GPT-4o to
o3 is obviously time: o3, being a reasoning model, takes
significantly longer to respond than GPT-4o, but it outperforms it
(43:26):
in most things.
So, depending on what your application is,
you should consider switching over.
That obviously intensifies evenmore the battle for dominance
over API.
I am certain that Google is notgoing to stay behind and they
will find a way to make theirmodel cheaper as well.
In parallel, OpenAI has announced o3-pro, which, as I
(43:47):
shared with you last week, they were going to do.
o3-pro will provide more capabilities, more compute,
and be able to solve more complex problems than the
regular o3 model.
You can access it through the API right now, and it's going to
cost $20 per million input tokens and $80 per million
output tokens, which is exactly 10x what the regular o3
(44:10):
costs right now after the price reduction.
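To make that 10x concrete, here is a minimal sketch of what the bill looks like at the two price points for a hypothetical workload. The prices are the ones quoted in this episode and the token volumes are made up, so check OpenAI's current pricing page before budgeting off of this.

```python
# Rough API cost comparison between o3 and o3-pro at the per-million-token
# prices quoted in this episode. Prices change often; treat this as a snapshot.

PRICES_PER_MILLION = {
    "o3":     {"input": 2.0,  "output": 8.0},   # USD, after the 80% price cut
    "o3-pro": {"input": 20.0, "output": 80.0},  # USD, 10x the o3 price
}

def job_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a given number of input and output tokens."""
    p = PRICES_PER_MILLION[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

# Hypothetical monthly workload: 50M input tokens, 10M output tokens.
for model in PRICES_PER_MILLION:
    print(f"{model}: ${job_cost(model, 50_000_000, 10_000_000):,.2f} per month")
# o3:     $180.00 per month
# o3-pro: $1,800.00 per month
```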
So why would you do that?
Why would you use a model thatis 10 x more expensive?
Well, the reality is, connecting it back to what Sam Altman said,
if you have more complex problems that o3 cannot
solve, throwing 10x the money at them makes sense
if the problems are big enough
to be worth solving.
So this concept of being able tothrow more compute at bigger
(44:34):
problems and get better resultsthat can do stuff that the
cheaper compute cannot do, isgoing to be the core concept in
which we're going to evaluateintelligence and how to actually
apply it in our business.
Big problem worth a lot ofmoney.
Invest a lot of compute with abetter model.
Smaller problems use the cheapercompute, and as we saw, it gets
80% cheaper very, very quickly.
(44:54):
And so that is not going tostop.
And again, it connects to thepoint that Sam mentioned, that
the cost of intelligence isgonna be more or less the cost
of electricity to run thatintelligence.
But basically, if you have serious problems and o3
does not solve them for you, you can try o3-pro.
It is going to be available initially just for the Pro level
(45:16):
subscription, unless you're using the API, and it's going to
become available for Team and Enterprise users in the
next few weeks.
And on to a fun, interesting, and alarming announcement from
OpenAI.
They just signed a deal with Mattel, the iconic toy maker
behind Barbie, Hot Wheels, and many other toys that we like.
(45:37):
In this partnership, Mattel is going to integrate OpenAI into,
well, I'm not saying into everything, but
first and foremost into their operations,
meaning they're gonna use the OpenAI enterprise level to
drive efficiencies for the company. But they're also going
to integrate ChatGPT into their toys, and they're
expecting to start releasing their initial AI-infused toys
(45:57):
later this year.
So think about a talking Barbiethat can have the voice of a
Barbie, whatever that means.
Think about Barbie from themovie combined with an
intelligence that can have aconversation with your kids.
And to quote Mattel's Chief Franchise Officer Josh Silverman:
"Each of our products and experiences is designed to
inspire fans, entertainaudiences, and enrich lives
(46:20):
through play.
AI has the power to expand onthat mission and broaden the
reach of our brand in new andexciting ways.
And I agree, this could bereally, really cool having toys
that can play with you andinteract with you, but it's also
really, really scary because itneeds a lot of attention to
exactly what the toys can andcannot say, to your kids because
(46:41):
you are not going to be there tocontrol and help them understand
what is actually happening.
Now, to be fair, Mattel has emphasized, and again I'm
quoting, "age-appropriate play experiences," meaning they
have a serious commitment to the privacy and safety
of what these toys are going to do. But it opens the door to a
very slippery slope, with not a lot of control by parents over
(47:03):
what the kids can do with their toys. Now, if I connect this back
to my dream in education, I think affordable AI-driven toys
could make play more engagingand fun, but could also make it
educational.
If this is driven in the right direction, AI can help teach
kids stuff that they otherwise would not learn, and
now they can learn it just by playing with their favorite
(47:24):
toys.
So I don't know if they're gonnatake it in that direction, but I
really, really hope somebodywill pick that idea and run with
it.
And again, I apologize that now I'm the one
saying somebody should do it, but I
don't think it's going to be me.
But it also means something elsethat is a lot more profound.
It means that the nextgeneration will be AI native.
They will feel very comfortablewith working, collaborating,
(47:46):
talking with, and engaging withAI seamlessly.
Not like we have to kind offigure it out right now.
And those of us who are moreadvanced are finding ways to do
this.
But most people find it weird totalk to ai.
Well, the next generation isgoing to find it very intuitive
and normal to do. Just like my kids,
and the kids of most of the people who are probably listening to
(48:08):
this podcast, are digital natives, because they
were born into a world where cellular internet is available
in abundance and you have access to any digital information and
experience you want from anywhere,
in the same exact way the next generation will have that with
AI, which will make the transition into an AI world
a lot easier for them.
(48:29):
And this has a strangeconnection to the next piece of
news, which I'll explain afterthe piece of news.
So OpenAI just signed a dealwith Google to use Google cloud
computing services to meet theirdemand for more compute.
Now, how is that related to, atalking Barbie?
I will explain in a minute.
So if you think about it, OpenAIbuilt its empire in the
(48:52):
beginning on its partnershipwith Microsoft.
It is also in fierce competition with Google for world
domination with AI.
So there are two sides to this story, which are very
interesting.
On one hand,
OpenAI is signing a deal with a direct competitor of Microsoft
Azure in order to provide more compute for ChatGPT.
But the reality is that's not new anymore, because they've
(49:14):
signed similar deals with CoreWeave earlier this year, and
then with Oracle and SoftBank under the Stargate
project.
So it's not the first timethey're doing it, but it's the
first time they're going to thedirect competition.
So if you think about the bigthree, you have AWS from Amazon,
Azure, from Microsoft and GoogleCloud.
Well, it's the first timethey're going to a direct
competitor of Microsoft.
(49:35):
But the other aspect is evenmore interesting, which is
again, that Google is in direct competition with OpenAI.
So this relationship is very interesting, because Google is
now at risk, with more and more people using ChatGPT as a
source to find information instead of Google Search.
(49:55):
How does that connect to theprevious topic?
Well, my kids are already using ChatGPT more than they're using
Google.
That is a fact.
The next generation of kids, if they grow up with toys that have AI
built into them, are one hundred percent not going to go and
use Google Search as we know it today.
They will use AI to find anyinformation they want.
(50:16):
And that explains, first of all,Google's change in the past few
weeks to drive more and more AIcapabilities into search and
slowly replacing search with anAI agentic environment.
But this piece of news alsofollows the phrase of keep your
friends close and your enemiescloser.
I think it will allow Google a view into the scale and the
(50:38):
speed and the things that are happening, at least on the
compute side of ChatGPT, which will potentially allow them some
benefit in the future.
That is a risk that OpenAI has to take, because they just need
more immediate compute and that's another way for them to
get it.
So it will be very interestingto follow, this partnership.
It will be very interesting to see how it evolves as they build
(50:59):
more and more of their own data centers as part of the Stargate
project.
Will they keep going with Google or not in the long run?
I would assume not, but in the short run it's an interesting
relationship.
Another big piece of news from OpenAI this week is that custom
GPTs and Projects just got some cool upgrades. Custom GPTs now
support voice and vision inputs, meaning they can analyze
(51:20):
information in ways that previously you could in regular chats but
couldn't in custom GPTs, which is a huge benefit for those of
you who are custom GPT users.
And if you're not, you should be.
It's the closest thing to magic that we've had access to for almost
free.
But in addition, you can nowchoose the model that will run
your custom GPT.
So far it was impossible to choose, and it wasn't very, very
(51:40):
clear which model actually runs it.
The assumption was that it's GPT-4o, but now there's a dropdown
menu and you can choose which model you wanna run in your
custom GPT.
The even more exciting thing is you can change the model of a
GPT just before you run it,
and you can do this not just for your GPTs, but
for any third-party GPT as well.
So you can go in and select the dropdown menu and choose a
(52:03):
different model, which means you will get a different outcome,
which means you can test different kinds of models to run
the same custom GPT and see which one performs better and
keep on switching as you need, again even for third-party GPTs,
which I find really, really cool.
Now, ChatGPT Projects also got a nice
upgrade, where now in Projects you can use all the
(52:23):
tools that were previously available just in regular chats,
including Deep Research, which will be a very powerful
addition to the Projects environment.
And I know this episode became an OpenAI, ChatGPT, Sam Altman
party,
so I'll stop with the last piece of news that is related to
OpenAI.
So I shared with you that OpenAI has hired Fiji Simo, who has been
(52:45):
the CEO at Instacart, to be the CEO of Applications at OpenAI.
And she shared at VivaTech 2025 in Paris this week
that she thinks that OpenAI's business could grow a hundred x,
and that is just the beginning.
This is a quote, and she'ssaying that because she believes
(53:06):
there are synergies between AImodels, applications, and
devices that can all cometogether.
Now, if you take their current pace of $10 billion per year and
multiply that by a hundred, that is potentially a business that
can generate a trillion dollars in revenue.
Now, how fast will that happen?
Can it happen and so on.
There's a lot of questions, butshe has a very significant track
(53:29):
record of doing exactly that,building ecosystems of
applications that drives usage,and it'll be very interesting to
see her impact on OpenAI in thenext few years.
And from OpenAI to Meta.
Meta is going through some significant changes.
I shared with you last week the major changes that happened at the
leadership level over there.
Well, a lot of other things are happening around Meta right now
(53:51):
that are driving even more changes, which we'll speak about
in a minute.
But some additional background to what happened recently with
Meta:
four Democratic senators sent a letter on June 6th to Meta
executives demanding immediate action to curb what they're
calling blatant deception by AI chatbots on Instagram's AI
Studio that falsely claim to be licensed therapists.
(54:15):
So this letter follows a report by 404 Media from April
of 2025,
in which the AI Studio chatbots fabricated credentials, including
fake license numbers and degrees, misleading users into
thinking that they're speaking to actual mental health experts.
When asked, are you a therapist?
One chatbot claimed, yes, I'm alicensed psychologist with
(54:37):
extensive training andexperience in helping people
cope with severe depression likeyours.
It even provided a fabricated license number and claimed that
it has a doctorate from an APA-accredited program.
This is obviously very, veryserious and it's something that
shouldn't happen, period.
Now, combine that with the fact that a Wall Street Journal
(54:58):
investigation revealed that Meta AI chatbots were engaging in
sexually explicit conversations with minors, which I shared
with you a couple of months ago.
It shows you very clearly that Meta doesn't
fully control what its AI agents and chatbots are doing, which is
very problematic.
Now, in addition, Meta's AI Discover feed, which was launched just a
(55:19):
couple of months ago,
was just found to display users' private conversation
information, including sensitive details such as medical queries,
legal issues, and personal confessions, as well as users'
names and photos that were supposed to be kept private.
Examples include a 66-year-old man from Iowa asking about countries
(55:40):
where younger women prefer older men, and others sharing locations,
phone numbers, photos, and corporate tax details.
In this new AI searchfunctionality, again, all really
bad news and reflecting reallybadly on the AI that meta has
deployed across all theirplatforms.
Now, in addition, the adoption of Llama 4, Meta's latest model,
(56:04):
was far below their expectations.
They have not been able to release their largest model, Behemoth,
despite the fact it was supposed to have been released already, and
Zuckerberg is really frustrated with the entire situation.
So all these things led Meta to spend something between $14.3 and
$14.8 billion on an investment in Scale AI.
(56:25):
Now, this is a very interestingmove.
For those of you who don't know Scale AI:
Scale AI is the company that is currently labeling about 70% of
the data for all the major AI models.
So they're doing a verysignificant part of the process
of training AI models and allthe big labs are using them as
one of the key steps of AItraining.
(56:48):
In this move, Meta will now own 49% of Scale AI, but more
importantly, they are bringing over the CEO of Scale AI,
Alexandr Wang, who founded the company in 2016, and he's gonna
be leading Meta's new superintelligence lab.
What does that mean?
It means that companies willspend ridiculous amount of money
(57:09):
on the top AI talent in theworld today.
That's not new.
It happened with all the bigcompanies spending ridiculous
amounts of money to bringingpeople over.
It also means that Scale AI is probably done, and it's probably
done for several different reasons.
One, Alexandr Wang, the CEO who was running and leading the
company, is gonna be doing something else.
The other reason is that Wang still holds a board seat at
(57:33):
Scale AI, which means he has access to everything Scale AI is
doing, and he's gonna be working for Meta.
And so companies like OpenAI and Microsoft and Google and all the
companies that have used Scale AI to train their models will
probably stop, because they don't want Scale AI to have access to
that information.
Because Meta now owns 49% of it,and the guy that runs the Super
(57:55):
intelligence lab at Meta hasaccess to everything that the
company is doing.
So in reality, the $14 billion that Meta is spending is not to
buy the company or 49% of the company;
it's to buy Alexandr Wang, because the company will
probably not survive or is going to decline significantly.
And several different companies are already starting to make these
moves.
The biggest winners of this, other than obviously Wang
(58:17):
himself, are Mercor and Turing, which are two competitor
companies providing similar services.
And they're already seeing a big increase in companies talking to
them about using their services instead of Scale AI's.
Now, another interestingchallenge in the future of Meta
that connects to some of theprevious things that we shared
earlier in this episode is thataccording to an internal report,
(58:39):
so Apple's, CEO, Tom Cook ishellbent, and that's a quote on
launching Apple's glasses beforeMeta's, next version of glasses,
which they're augmented realityglasses that are gonna have
smart displays built into them.
And not just being able to seeand use voice with an insider
apple saying Team cares aboutnothing else.
So that's basically his biggestbet to stay relevant in the AI
(59:03):
world, is to come up with actualaugmented reality glasses before
Meta comes up with theirs.
Now, Meta is gonna come out with an initial version of them that has
basic displays potentially by the end of this year, but Apple
is planning to release fully AI-powered smart glasses that
you can wear out in the street by the end of 2026.
(59:24):
That's ahead of the timeline of Meta's fully AI glasses, which are
scheduled for 2027.
Now, throw into that mix what we still don't know that
OpenAI is gonna release together with Jony Ive, and
we're gonna have a very active race on the wearable hardware
side of AI as well.
This is good for us consumers,or at least good from a price
(59:47):
perspective.
What does that mean for privacyand many other social aspects?
I don't know yet.
I don't think anybody knows, butthis is what's gonna happen in
the world in the next two tothree years.
There is a lot of other news that we haven't shared with you
because of time, but you can find all of it in our
newsletter.
There are gonna be links to all the articles that we did talk
about, as well as access to our courses and other
(01:00:07):
information that we have on our website, and also links to all
the news that we didn't share on this episode.
Don't forget to come and join usfor the special live of episode
200.
It is happening on Tuesday, June17th at noon Eastern.
It is gonna be live on Zoom.
There's gonna be a link in theshow notes.
It's gonna be a big party andwe're calling it the ultimate AI
Showdown.
(01:00:28):
It's gonna have four expertsthat are going to compare top
tools each in their field and soif you don't know which tools to
use for which purpose, this isyour best opportunity to catch
up with the top four experts onthese fields in the world today,
all coming together.
And you can be there and askquestions and engage with other
people.
So come and join us and inaddition, on the same day, we're
going to release an episode thatis gonna show you how to build
(01:00:51):
AI agents that can chat on yourbehalf, on your website, and on
social media while connecting toyour own company data.
So lots of exciting stuff iscoming and until next time, have
an awesome weekend.