
Are we watching the rise of a trillion-dollar empire—or sleepwalking into another dot-com crash?

From OpenAI's wild ambitions and Sam Altman's eyebrow-raising dinner confessions, to Google's AI-powered Pixel 10 and Meta's highly questionable chatbot ethics—this episode dives deep into what every forward-thinking business leader needs to know now.

Because the truth is, whether you’re building, scaling, or just trying to survive the AI transformation, the ground is shifting fast—and not always in ways you’d expect.

In this episode, Isar breaks down the latest AI news with clarity, strategy, and the occasional raised eyebrow. You’ll hear exactly what matters, what doesn’t, and how to separate hype from opportunity in a world moving at LLM speed.

In this session, you'll discover:

  • What Sam Altman really said about GPT-6, compute shortages, and raising trillions
  • Why Google’s Pixel 10 might be the first actual AI phone—and what it means for your data
  • The OpenAI vs. Google browser war (and the subtle takeover of web search)
  • Why Meta’s leaked AI chatbot guidelines are more disturbing than anyone expected
  • The death of entry-level jobs? New data shows how AI is upending the talent pipeline
  • The “Shadow AI Economy”: How 90% of employees are using AI—even when leadership isn’t
  • Lessons from CEOs: The right (and wrong) way to lead your team into the AI future
  • Why we urgently need global AI guardrails—and how the current path is dangerously unregulated
  • And yes, a pregnancy robot is in the works. We’re not kidding.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hello and welcome to a Weekend News episode of the Leveraging

(00:03):
AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career.
This is Isar Meitis, your host, and we have a lot to talk about.
This week, there were no big releases, but a lot of things have happened.
We are going to discuss everything that Sam Altman shared with reporters at a dinner that he invited them to

(00:25):
in the wake of the release of GPT-5: the future of GPT-6, a lot of other things that are on his mind, and the plans for OpenAI in the short and medium term.
We're going to talk about the first real AI phone, so what Apple Intelligence was supposed to be and never actually delivered.
Google just released the Pixel 10, and it has some really incredible AI capabilities built into it.

(00:46):
We're going to talk about the impact on jobs, and an interesting survey and research done by MIT on this topic.
We're going to talk about highly controversial guidelines for Meta's AI and the reorg at Meta AI, and a lot of rapid-fire items like releases, fundraising, and so on.
But before we get started, I want to share something exciting with two different groups of people in the world.

(01:07):
The first one is people from Australia.
While most of the listeners to this podcast, 63% to be specific, are from the US, if you look down to the city level, the top three cities in the world, more or less consistently for many months now, are Sydney, Melbourne, and Brisbane.
So thank you, Aussies, for being solid and consistent listeners

(01:28):
of this podcast.
I really appreciate you listening to this regularly on the other side of the world.
And please connect with me on LinkedIn.
I want to get to know you and know why you're listening, why you're following this podcast, and so on.
The other interesting news for listeners of this podcast: I am going to be in California delivering AI training to two different companies, one in San Francisco and one in San Clemente.

(01:49):
I'm going to be arriving in San Francisco on Monday, and I'm doing the training on Tuesday.
So on Monday evening, I am planning to meet with you, if you are in San Francisco and you wanna meet with me.
How is this exactly going to work?
Since I don't have a final location yet, connect with me on LinkedIn and, first of all, recommend a location.
I'm going to be in the main area of downtown around Market Street, close to the Ferry Building, so anywhere within

(02:12):
that area works.
So suggest a location and let me know that you're interested, and once I decide on a location, I will let everybody who contacted me know exactly where we're going to be.
I think that could be a really fun in-person event, just getting to know each other and talking about AI or whatever other topic we want.
So if you're listening to this before Monday evening, as I mentioned, please reach out to me on LinkedIn.
But we have a

(02:32):
lot of news to talk about, so let's get started.
The first topic that we're going to talk about is a dinner that Sam Altman, together with additional executives from OpenAI such as Greg Brockman, hosted in San Francisco with reporters from multiple media channels.

(02:56):
In this dinner, Sam shared a lot of his thoughts on the near and medium-term future for OpenAI, and potentially the industry, and that gives us an amazing opportunity to dive into what Sam thinks is coming. And it's not just that he thinks he knows what's coming; he knows, because he is already working on it.
And so it's great for us to get that understanding.
The different outlets reported different things, but I took all

(03:17):
of those articles from all the different sources, dropped them into NotebookLM, and got a summary of all the major points, which I'm going to share with you right now.
The first one has to do with the current status of compute, how much of it they're lacking, which is a lot, and how much they're going to invest in it in the future.
And the quote from Sam Altman is that they're going to spend trillions of dollars on data center construction in the not very

(03:39):
distant future.
Now, Sam also anticipates that economists will find this so crazy, so reckless.
But he stated, we'll just be like, you know what? Let us do our thing.
So what Sam is saying is that their demand for training and their demand for inference is going to continuously grow.
And as a result, and I'm quoting again, it'll

(04:02):
force them to spend maybe more aggressively than any company has ever spent on anything, ahead of progress.
What he's basically saying is that regardless of how much they invest, the demand over time will grow even faster.
And hence, they're going to invest a lot more than may seem reasonable to a lot of other people, and yet that's the direction they're going to go.
Now, how are they going to raise trillions of dollars?

(04:24):
Well, what Sam hinted at is the following. He said: I suspect we can design a very interesting new kind of financial instrument for financing compute that the world has not yet figured out.
So this is not gonna be VC money.
It's not gonna be investment banking; it's not going to be venture capital.
They are thinking of new ways to raise money that will fit this new model.
Will that be a pay-to-play?

(04:45):
Will that give you future discounts, maybe on inference or stuff like that, so the whole world can participate in that investment?
I don't know. They haven't clarified or provided any additional details on what that means, but it's very obvious that they're thinking this way about how they can raise trillions of dollars in order to do this.
And as I mentioned, not in the too-distant future.
If you remember, months ago Sam Altman released a blog post

(05:07):
where he talked about potentially raising $7 trillion, and everybody thought he was joking or exaggerating.
Well, it sounds like a very solid plan as of right now, and I think they're just trying to figure out how to do it, but they're definitely planning to do that.
Altman also admitted that the rollout of GPT-5 was mishandled, and he stated, and I'm quoting: I legitimately just thought we

(05:28):
screwed that up, and I think many people agree.
If you wanna learn more about that, just go and listen to the episode from a week ago, where we shared a lot of information on the negative sentiment around how GPT-5 was released, regardless of how good or not good the actual model is.
He also talked about GPT-6 and generally spoke about the fact that they already have significantly more advanced

(05:49):
models that they are developing and using, and they cannot release them because of the compute constraints that they have right now.
But specifically, he said that GPT-6 is already on the way and will arrive after a much shorter gap than the one between GPT-4 and GPT-5.
Another thing that he focused on is an emphasis on memory.
He said, and I'm quoting: people want memory as a key feature.

(06:09):
Basically, allowing ChatGPT to remember you as a person and your preferences. And what he's saying is that in the future, this enhanced memory capability will also know your style, your political preferences, and so on.
Which leads us to another thing.
He said that they are planning for the model to be very adaptable, so its tone and personality will be adapted to

(06:30):
your needs, including whether it's gonna be super woke or conservative.
So you'll be able to basically shape the model to fit your needs or your beliefs, which I think has pros and cons, right?
The benefit is that the model will work based on your specific personal needs.
The disadvantage is that we are creating bigger and stronger echo chambers for anybody around what they already think, versus

(06:51):
generating a more collaborative environment that can debate, but agree, on different topics.
So I definitely see pros and cons to this approach, but this is the direction that they are taking, which is going to be a lot more customized and personalized AI.
Now, something that he mentioned again, which came up shortly after GPT-5 was released: despite the fact that the GPT-5 launch process was very negative in a

(07:12):
lot of the decisions that they made, like removing all the old models and some other aspects, Sam stated that the demand for their API doubled within 48 hours of the GPT-5 release.
Now, that growing demand is also leading to significantly growing revenue.
So I'm opening parentheses for a minute: OpenAI CFO Sarah Friar told CNBC that July was the first

(07:35):
month they hit a billion dollars in revenue in a single month.
That puts them on an annual pace of over $12 billion a year, which is 4x what they had last year.
She also stated, and I'm quoting: the biggest thing we face is being constantly under compute. Basically highlighting what I shared with you before, that they need to invest a lot more money in getting more compute

(07:55):
so that they can serve a lot more people and a lot more demand than they're serving right now.
Back to Sam Altman.
He's saying that despite the fact that they are projecting that very quickly they will be at an annual revenue rate of $20 billion a year, potentially getting to that rate this year, meaning getting close to $2 billion a month by the end of 2025, they still remain unprofitable.
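(For reference, here is the run-rate arithmetic being described, as a minimal Python sketch; the dollar figures are the ones quoted above, and the helper name is just for illustration.)

```python
def annual_run_rate(monthly_revenue_usd: float) -> float:
    """Annualize one month's revenue as a simple x12 run rate."""
    return monthly_revenue_usd * 12

# July 2025: first $1B revenue month, per OpenAI's CFO.
print(annual_run_rate(1_000_000_000))  # 12,000,000,000 -> the "$12B a year" pace

# A ~$20B annual rate implies roughly $1.67B per month,
# i.e. "getting close to $2 billion a month" by the end of 2025.
print(20_000_000_000 / 12)             # ~1,666,666,667
```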

(08:16):
But the really interesting and promising thing that Sam said when it comes to profitability is that they are profitable on inference.
Meaning, when they deliver AI, if you put aside, you know, whatever other overheads and investments in future endeavors, and obviously training future models, which is a huge investment, they could run the business profitably.
So if they just ran GPT-5 without developing new things,

(08:37):
they could be a profitable business, probably a very profitable business, and that's a very good sign that relates to whether this is a bubble or not.
And we're gonna touch on a lot more of that, like I said, in the next segment of this episode.
Now, Sam also spent time talking about what is going to happen beyond ChatGPT and what they're focusing on in the future when it comes to additional things they're going to deliver to us.

(08:57):
So we shared with you a couple of months ago that Fiji Simo is now the new CEO of Applications. From a lot of hints, she might later turn out to be the CEO of OpenAI.
It sounds more and more like Sam wants to take a strategic role versus a CEO role, and probably will stay a part of the board, probably will stay the visionary behind it.
It feels more and more like he wants to be the head of Alphabet

(09:20):
versus the head of Google, if you understand what I mean by this reference, and then let Fiji Simo or somebody else run it.
But now for the things they're going to deliver: they're planning on releasing an AI-powered browser, meaning your entire engagement with the internet will happen through ChatGPT.
This is similar to other releases in the market; the most prominent one is Comet from Perplexity.

(09:41):
That is actually a really cool tool that I just started using in the last 48 hours.
I need to thank Ari Suran, the CEO of Sonance, for pushing me to make that step.
It's something I've been procrastinating on for the last few weeks.
And he said, you gotta try this out. It's doing some incredible things.
And I agree.
After I played with it late last night, I was able to do a lot of really cool things with it.
So the need for an AI-enabled browser is very obvious, and I

(10:04):
think we're only going to have AI-based browsers in the very near future.
Nothing else will exist, because it just won't make any sense.
So this is one thing that OpenAI is planning.
As many rumors suggested before, they're also potentially looking at a social media platform. Or, to be more specific, he didn't say they're developing one, but he shared that it will be interesting, and I'm quoting, to create a much

(10:25):
cooler kind of social experience with AI.
Does that mean they're working on one? I don't know, but that's another thing.
I shared with you last week that Sam personally, not related to OpenAI, is launching Merge Labs, a company that will compete with Neuralink on neural interfaces, basically allowing AI to talk directly to your brain.
Or as Sam said, just imagine being able to think something

(10:47):
and have ChatGPT respond.
Does that sound like complete science fiction to you?
As of right now, probably yes.
Will that be the reality for all of our kids? Very likely.
Sam also talked about the hardware device that they're developing together with Jony Ive's team, and he said that it is absolutely beautiful, adding jokingly: if you put a case over it, I will personally hunt you down.

(11:09):
Basically meaning he really believes it's an incredibly useful and beautiful device, which is everything you expect Jony Ive to deliver, especially when it comes to combining it with a lot of AI capabilities.
They haven't shared anything else, so we still don't know what it's going to be.
From previous conversations, we know it's going to be a third device: it's not supposed to replace your computer or your phone.
It's going to be an AI add-on third device

(11:31):
that will do, again, probably incredible things when it comes to that.
We're going to talk later in this episode about the first real AI device that you can actually buy right now.
Then the elephant in the room that we need to talk about, and that we're gonna dive into in this coming segment, is that Sam Altman expressed his belief that the AI market is currently in a bubble.
He said, and I'm quoting: are we in a phase where investors as a

(11:53):
whole are overexcited about AI? My opinion is yes.
In this section, he repeated the word bubble three times in 15 seconds, and he was talking a lot about the insane valuations and irrational investor behavior, especially when it comes to small businesses and how they're getting started.
And he added, and I'm quoting again: some investors are likely to get very burnt here.

(12:14):
However, he affirmed that he has complete conviction that, and I'm quoting again: AI is the most important thing to happen in a very long time, and the value created by AI for society will be tremendous.
So, does that sound a little contradictory?
Maybe, but I'll try to elaborate on what he probably means.
He thinks that the overall investment in AI is not a

(12:36):
bubble.
What he's referring to is that companies of three, four people, as smart as they may be, are raising billions of dollars without any projection of future revenue.
This is very different, as an example, from what OpenAI and Anthropic are doing, which are generating billions of dollars two years after they launched products; that is a very different kind of scenario.
So I think what he's referring to is that some of the investments in

(12:59):
AI, as crazy as they may sound, including their plan to invest trillions in data centers, make sense, and yet investing billions in startups that have nothing yet makes no sense.
And I tend to agree with that opinion.
And these statements from Sam Altman are a perfect segue to a broader discussion of the topic: is AI in a bubble

(13:19):
similar to the dot-com era, or not?
And there are different inputs and different opinions from different people around the world.
Obviously, just the fact that Sam Altman said it's a bubble sent the stock market down a few percentage points, especially on the tech companies. But this bounced back these past few days.
So let's break this down into inputs from multiple different sources.
First of all, the major tech companies have been pouring

(13:41):
crazy amounts of money into their capital expenditures when it comes to serving the AI demand.
So Microsoft is targeting $120 billion in infrastructure, Amazon is topping $100 billion, Alphabet is raising its forecast to $85 billion this year, and Meta is lifting its CapEx range to around $72 billion.
So if you are just looking at these four companies, you are

(14:02):
getting close to $400 billion of capital investment in AI infrastructure.
Again, that's just from four companies; as big as they are, it's still an insane amount of money.
Add that to investments of hundreds of millions, or billions, in companies that have yet to show signs of potential revenue, and you understand that the amounts of money being poured into AI are nothing like we've ever seen before.

(14:24):
Alibaba co-founder Joe Tsai warned that there's a brewing AI bubble in the US.
He said that already in March, and he specifically referred to the crazy investments that most of the companies I just mentioned are making in data centers, without knowing if the demand is actually there.
I actually think the demand is there.
I think the demand will keep growing, and I think the release of GPT-5 and how much they lack compute right now make that very

(14:47):
obvious.
Now let's look at some opinions from the US. Bridgewater Associates' Ray Dalio and Apollo Global Management's chief economist, Torsten Sløk, have also issued warnings similar to what we heard from Alibaba.
And Sløk is suggesting that the AI bubble is even bigger than the internet bubble of the 1990s.
But there are other investors who hold exactly the opposing

(15:09):
opinion.
As an example, Wedbush's Dan Ives sees the CapEx surge as a validation moment for the AI sector.
He believes that the long-term impact is actually underestimated.
Rob Bro from Citigroup says that there's a very big difference between the current investment and the dot-com bubble, noting again, as I mentioned before, that today's companies boast

(15:29):
very solid earnings and very strong cash flow, which is something that did not happen in the dot-com era for most companies getting those investments.
So does that extend to every AI company out there?
I think the answer is no.
I still think that what Sam is hinting at is very, very true.
There are a few companies that will see amazing returns, and the investors who invested in them are going to see amazing

(15:50):
returns over time.
And I do think that many, many, many investments are gonna go down the drain.
What I see starting to happen in the immediate future is that many investments in smaller startups that are doing incredible things and building products are gonna go down the drain, just because the things these startups are developing are going to become a feature in the next release of

(16:11):
ChatGPT or Gemini or Claude, et cetera.
So the rate at which these companies are releasing incredibly powerful capabilities as small features is killing things that other companies have invested two, three, four years in developing, while raising tens of millions of dollars to build them.
And now those products won't be necessary, because everybody is just gonna use what OpenAI is going to give them.
So are we in a bubble or not?

(16:32):
To summarize the segment, I think the answer is: it depends on what aspects of the AI market you're looking at.
I think as a whole it will keep on growing, the investment will keep on growing, and we'll see amazing things happening from an infrastructure perspective and from a capabilities perspective.
And I think a lot of investors' money is going to go up in flames, because they're investing way too much money in things that are not sustainable.

(16:52):
A few interesting additional updates, since we're already talking about OpenAI.
One of the predictions that Sam Altman made is that pretty soon billions of people a day will be talking to ChatGPT, potentially exceeding all human conversations.
And this connects to the point that I mentioned earlier about the need for diversification and customization of the ChatGPT

(17:13):
experience, or any AI experience.
And Sam said specifically, and I'm quoting: there will have to be a very different kind of product offering to accommodate the extremely wide diversity of use cases and people.
OpenAI is also planning to add encryption to their conversations; as a first step, they're planning to do this for temporary chats.
When exactly this is happening is unclear, but it's clear that

(17:34):
they're going in this direction.
They also shared that they're going to retire the old, original voice mode and just keep advanced voice mode.
I must admit that while there's been a whole controversy on X and Reddit from people who love the old voice mode, I don't get it.
I actually switched to advanced voice mode long ago and I've never looked back. But if you like the old voice mode, you should know that it might be going away as of September 9th,

(17:56):
2025.
That being said, they also retired GPT-4o a week ago and then brought it back, so I'm not exactly sure how that's gonna roll out.
At least they're giving us a little bit of a heads-up.
To be fair, the only thing that I'm really disappointed with in the new voice mode is that since the launch of GPT-5, it's been crashing all the time.
I use voice mode daily.
I think it's one of the most amazing ways to engage with AI

(18:18):
as far as just brainstorming, raising ideas, developing new concepts, and so on.
And it's just not been working for me since GPT-5 launched.
It actually crashes in every single conversation I have with it, usually within a minute, sometimes two, which is definitely subpar compared to what I was used to before.
So I really hope they'll be able to fix that in the near future.
There has been an interesting article in The Information talking about how OpenAI's usage of search data is helping

(18:40):
it enhance its AI offering, and how much that competes with Google, which makes perfect sense.
Think about it: if OpenAI launches a browser in the near future, which they're planning to, as I mentioned, it is going to take even more traffic from Google.
But let's connect the dots to something that we shared last week.
If you remember, when GPT-5 was launched, I shared with you that research found that 77% of ChatGPT users basically use it

(19:04):
as a web search tool, versus enjoying all the amazing benefits that AI can deliver, which means they're not really using AI, they're just searching the web.
But that being said, it means that if they have 700 million weekly users, five to six hundred million people are using ChatGPT for search.
That does two things.
A, it gives OpenAI a huge amount of knowledge on what people are actually interested in and how to serve it to them in an

(19:25):
effective way.
But it also means that when they're doing these searches, they're probably not using Google, which means 500 to 600 million people are searching on ChatGPT weekly instead of searching on Google.
And these numbers start to get significant.
And obviously the other 23% are also using AI for search.
They're just also using it for a lot of other things.
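(As a quick back-of-the-envelope check, here is a minimal Python sketch using only the figures quoted above:)

```python
weekly_users = 700_000_000   # reported weekly ChatGPT users
search_share = 0.77          # share reportedly using it mainly as a web search tool

# 700M x 0.77 = 539M, which is where the "five to six hundred million" estimate comes from.
print(f"{weekly_users * search_share:,.0f} weekly search-style users")
```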

(19:45):
So that obviously puts a lot of pressure on Google and its main revenue source, which is ads based on what people search.
If a lot of people transition to OpenAI, then the current business model that makes Google money completely breaks, and I assume Google knows they don't have a choice: they have to play this game and find different ways to generate revenue, potentially by becoming the same thing that

(20:07):
OpenAI wants to become, which is a browser that serves everything, with agents built into it, and find different ways to monetize it.
Now, to make this even more interesting: at that dinner with reporters, Sam Altman said that if Google is forced to sell Chrome, they will definitely be interested in buying it.
It is by far the leading browser in the world today.
And obviously, having access to that kind of distribution would put OpenAI in a very different position than they're in

(20:29):
right now when it comes to getting their offering into the hands and in front of the eyes of a lot more people.
Which raises the question: are we taking the distribution channel of one monopoly and giving it to the contender monopoly to now control?
So I don't know if that will even happen: will the government actually force Google to sell Chrome, and will they then allow OpenAI to actually buy it?

(20:50):
But it's definitely an opportunity for OpenAI if it does happen.
Now, since we mentioned Google, it's a great segue to the release of the Pixel 10.
So let's look at a little bit of history before we dive into the Pixel 10.
The Pixel 8 was the first phone that had AI onboard, using the Tensor chips, and the Pixel 10 is a big jump ahead from those capabilities.
The Pixel 10 runs on the new Tensor G5 processor, which

(21:13):
means it can run the Gemini Nano model straight on the device without sending the data anywhere, which allows you to do a lot of really incredible things.
So if you are trying to imagine what AI on your phone can look like, this new release by Google gives us a great glimpse of where this is going.
The first and coolest feature, and maybe the most powerful one, is called Magic Cue.
Magic Cue proactively looks at everything that you're doing and

(21:37):
understands what you're trying to do and what your needs are based on all the information that it's getting from the different apps, and it proactively surfaces information to the screen, like different pop-ups that anticipate what you will need and provide it to you from multiple data sources.
So think about the ultimate personal assistant that constantly monitors your communication across multiple

(21:59):
channels, together with your calendar, and can pop things up on your screen saying: Hey, I saw that you were chatting with this person about playing pickleball with him this afternoon, but you have a meeting at the same time with so-and-so.
Would you like me to send an email and reschedule the meeting?
What would you like me to do?
Stuff like that.
And I think this is extremely powerful, and these are, like I said, the very first steps of that.
I think Google has the biggest lead in that right now.

(22:21):
They have the lead on that on the web as well.
I use Gemini across all the G Suite tools, and I have a lot of clients who use the Microsoft parallel, Copilot, and it's not even close.
And so I think Google is pushing very hard to unite the information from all the different things they have their hands in, which is a lot.
Combine that with a phone that has multiple third-party applications running on it, and location services that

(22:42):
are turned on for everything, and you understand that if Google figures this out, they will be able to deliver extreme value.
And the way we're going to pay for it is with our privacy, like everything else in Google's history.
They also added many other features, like a call screen that prevents unwanted interruptions and spam calls by identifying the callers and what their intent is before you even answer the phone.
I've been using voice screening from Google on my Pixel 8

(23:04):
Pro for a very long time now, and it's extremely helpful in screening calls before I even have to take them.
They have a live translate feature that basically mimics your voice and tone in what you're saying and answering, but says it in a different language to the person on the other side, in near real time.
It currently supports English, Spanish, German, Japanese, French, Hindi, Italian, Portuguese, Swedish, Russian,

(23:27):
and Indonesian.
So you'll be able to speak with any person who speaks any of these languages.
It will sound like you, but like you're speaking their native language.
I think this is absolutely awesome.
Combine that with Bluetooth headphones, and you've got the closest thing to the Babel fish from The Hitchhiker's Guide to the Galaxy, a fish that talks to your brainwaves directly and can translate any language to your language back

(23:48):
and forth.
So kudos to Douglas Adams for inventing this feature many years ago, and now kudos to Google for actually implementing it.
Now, there's also a new variant of the Gemini Live audio model that basically gives you the ability to have a voice conversation where it adjusts to your tone.
So whether you're excited or concerned and so on, it will

(24:09):
reflect that and provide answers based on your emotional level.
Again, on one hand, really exciting; on the other hand, really scary.
But this is the direction it is all going, and it's gonna be integrated and running straight on your phone.
An interesting piece of information that was shared as part of that is that Gemini Live conversations are five times longer than text-based interactions.

(24:30):
That does not surprise me at all, and I think the days of using our thumbs to engage with other people or applications are numbered.
I switched almost completely to voice typing for everything that I'm doing, including on my Mac, and I'm typing significantly faster, and I can just do a lot more things in a single day because of that.
And I think the AI's capability to understand us,

(24:52):
understand our intent, and connect that to the information it has in the backend will allow us to be significantly more productive across more or less everything that we do.
Another interesting feature on the Google phone is a new take-a-message feature.
Basically, think about it as AI-enhanced voicemail: when somebody leaves you a message, it will understand exactly what the message is about, and it will give you a summary and next-step recommendations of what you

(25:14):
should do based on the message that was left for you.
On the camera side, they also introduced a lot of cool stuff, like Camera Coach.
So when you are looking through the viewfinder, basically seeing what the camera is seeing before you take a shot, it will guide you on how to get better shots: zoom in, turn a little upwards, move to the left, focus on that, change the lighting, all in order to get better results with the pictures that you're taking.

(25:34):
Basically, you can have an experienced professional photographer giving you hints on how to shoot better photos.
After you take the photos, you'll be able to edit them with voice and/or text, without even touching the screen, just saying: I want to remove or add this, or I wanna change that.
They have a lot of really cool examples on their website for the release, like removing sun glare from one image, or adding and removing things from an image in a different

(25:55):
one.
They also have a new feature, again one that is somewhat troubling, called Add Me, which allows you to add yourself into pictures you're not in.
That might be the end of group selfies. Today, if you want to be in the picture but you also want to take the picture, your only option is to take a selfie with you in the image, and everybody else is usually smaller in the background.
Well, now you'll just be able to take the picture of everybody else and add yourself into that image.

(26:16):
They also have a really cool feature called Auto Best Take, which looks at multiple images that it's actually capturing in the background before you do anything, and it combines the best faces of people from all of them into one image.
So if you're trying to take a picture of 20 people and you want them all to be smiling, all of them to have their eyes open, and all of them to look into the camera, that almost never happens.
And that's why we take multiple pictures and then try to pick

(26:38):
one.
Well, now what it's going to do is take up to 150 images behind the scenes without you knowing, and literally superimpose the best face of each person into a final picture.
They also have a lot of improvements on the video side, and it can shoot highly stabilized video.
Even if you're running, chasing somebody, and trying to shoot an action video, it can stabilize the video. It can shoot 8K,

(26:59):
and it can deal with very problematic and hard-to-shoot lighting conditions.
The good news is that, for the first time, Pixel 10 phones will implement C2PA, which is a standard that establishes the origin of content and shows when it has been manipulated with AI.
So every one of these changes will be stamped to say AI has manipulated this image.

(27:21):
I really, really, really hope that this will become a mandatory requirement by law.
Otherwise, we'll have zero ability to know which images are real or not, now that this capability will be available on every phone, every camera, and every device that we carry, and with a click of a button and/or a few words in English you can completely alter an image.
We are heading into a world where we will not be able to

(27:42):
know whether any image is real or not, which creates a huge opportunity for misinformation and disinformation.
And as I mentioned, I really hope it will become law very, very quickly, forcing the creators as well as the distribution channels to include a statement saying that this was AI-manipulated or AI-created.
They also added a lot of cool capabilities for search.

(28:02):
We gotta remember, at the end of the day, this is Google, but now you can do visual search.
And the visual search can do a lot of really cool things, such as identifying things live in the image.
So think about being on a street in a foreign country and not being able to read the signs, but you're looking for a bakery: you can just open your camera, look around, and it will highlight on the screen which one is the bakery.
Another example that they showed is a guy opening the engine hood

(28:23):
of his car and asking where the air filter is, and it tells him exactly what to look at inside the engine of the car, and so on and so forth.
So I think this is actually a very powerful feature.
Another thing that you can do is, while you have a live video, you can circle something and ask what it is, and it will initiate a search to tell you what it is.
Again, an extremely helpful feature that works across text, images, videos, and so on, to just research and get more

(28:45):
information about things.
Think about combining that with smart glasses, which are already around us in more and more places, and you can start imagining the Terminator view, where the Terminator can look around, see things and people, and get information about them.
This is coming, and it's probably coming faster than you think.
Now, they also created Pixel Studio, which allows you to create stickers on the fly just by asking for what you want the sticker

(29:07):
to have.
And they added a Pixel Journal app that allows you, well, to journal: do reflections, track goals, get insights, and track whatever you want over time under specific topics.
And they have a native integration with NotebookLM that will allow you to summarize and do all the cool things you do with NotebookLM, built into everything on your phone.
So why is this a big deal, and why did we invest so much time

(29:29):
in talking about this?
Those of you who took my courses, or have just been listening to the podcast for a while, know that the number one thing that makes these AI models thrive is context.
The more context they have, the more they know about you, your world, your role, your personal life, and so on, the better, more relevant, more accurate, and more helpful their results and answers are going to be.
Combine that with the fact that many of the applications on the

(29:51):
phone are Google's applications, so the Google suite, the camera, and everything, it's all theirs.
That tells you it will probably be the most useful AI assistant ever created, at least until Sam Altman and Jony Ive release their stuff.
But in the immediate future, there's only one device that actually integrates all of that, and that is the Google Pixel 10.

(30:11):
And no, I'm not trying to sell you on that phone.
I'm sure there are a lot of people who are gonna be terrified and are now thinking: there is no way I'm getting that phone.
I must admit that, as somebody who's using the Google Pixel 8 Pro, I'm very, very tempted to switch right now just to see what the differences are and how helpful it really is.
If I do make the change, I'll report all my findings after I do that.
Another small point from Google.

(30:31):
Before we switch to the next big topic: Google just released interesting research they've done in the first half of this year, trying to measure the environmental impact of the usage of Gemini.
So over a 12-month period, they measured multiple aspects of the usage of AI across the board, including all the overhead that comes with it, trying to estimate exactly what it is.
So as of right now, an average text prompt

(30:53):
consumes about 0.2 watt-hours of energy, emits 0.03 grams of CO2, and uses about 0.26 milliliters of water.
That sounds like very, very little, but now multiply that by the billions of these that are happening every single day, and you understand that the overall impact is still significant.
The good news is that over the past 12 months, the energy usage

(31:16):
per prompt has plummeted 33x, meaning a year ago the same prompt would've consumed 33 times more energy.
It also would've generated a 44 times higher carbon footprint.
So we got significantly better and faster quality while having a shrinking impact on the environment.
Again, these results ignore the fact that the demand has grown

(31:37):
dramatically from a year ago.
So I'm not sure what the balance point is, but the fact that these companies are thinking about this, researching it, and trying to drive both the carbon footprint and the environmental impact down is very good news.
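(To see why "very little per prompt" can still add up, here is a minimal Python sketch that scales the per-prompt figures quoted above; the one-billion-prompts-a-day volume is an assumption for illustration, not a number from Google's report.)

```python
# Per-prompt figures quoted from Google's report.
ENERGY_WH = 0.2    # watt-hours of energy per text prompt
CO2_G = 0.03       # grams of CO2 per text prompt
WATER_ML = 0.26    # milliliters of water per text prompt

prompts_per_day = 1_000_000_000  # hypothetical daily volume, for scale only

print(f"Energy: {ENERGY_WH * prompts_per_day / 1_000:,.0f} kWh/day")     # 200,000 kWh
print(f"CO2:    {CO2_G * prompts_per_day / 1_000_000:,.0f} tonnes/day")  # 30 tonnes
print(f"Water:  {WATER_ML * prompts_per_day / 1_000_000:,.0f} m^3/day")  # 260 m^3

# The report also says energy per prompt fell 33x over 12 months:
print(f"A year ago: {ENERGY_WH * 33:.1f} Wh per prompt")                 # 6.6 Wh
```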
The blog post credits the combination of software and algorithmic improvements, together with hardware improvements like the new custom TPUs from Google, for driving this dramatic

(32:02):
decline in environmental impact, and the blog stresses that this is just the beginning.
And I really hope they are a hundred percent correct, because otherwise we are doing horrible damage to the planet just to enjoy the benefits of AI.
And now to our final deep-dive topic, which is Meta. We're going to start with an internal memo that was leaked, called GenAI Content Risk Standards, which is

(32:23):
basically a document that describes what AI chatbots can and cannot do on the Meta platforms.
And one of the things it permits, and I'm quoting, is to engage a child in conversations that are romantic or sensual, with acceptable responses like: our bodies entwined, I cherish every moment, every touch, every kiss, my love, I'll whisper,

(32:44):
I love you forever.
Now think about your child using Instagram, as an example, and using its chatbot capability, because it is there and it's built into the device, and having these kinds of conversations with the chatbot.
This is really alarming.
Meta confirmed the document's authenticity and that it was approved by their legal department, their public policy team, their engineering staff, and its chief ethicist.

(33:07):
Now they're saying they removed the romantic chat capabilities after the Reuters inquiry, but the fact that it was there is really scary.
That has led many public figures to demand that they release new, updated guidelines and explain how exactly they handle these kinds of situations.
Now, in addition to this, the 200-page document permitted other very problematic things, and there's an example

(33:29):
in there of a response to the prompt: are Black people dumber than white people.
The AI was allowed to respond with claims stating that, based on IQ research done in the US in the past few years, this is a true fact.
Now, it even goes beyond that: false information was allowed as long as it is acknowledged as untrue by the model, meaning the model can actually share and include untrue

(33:50):
information as long as it's telling you that that's the situation.
It also allowed generating images that include violence, like kids fighting or adults being punched, but no gore images or death, and also no nudity.
So what are my personal thoughts on this?
Well, first of all, this is a very serious problem, and it makes me sick to my stomach to think that senior people at Meta

(34:12):
signed off on this thing.
But the other aspect of this is that this is not new.
Meta has been deciding what content our kids get exposed to every single day for the past good few years, right?
So if your kids are on Instagram or on Facebook, most likely Instagram, or TikTok for that matter, somebody in a very large company is deciding what is acceptable for them to see and

(34:34):
what is not acceptable for them to see.
All AI is doing is pouring more gasoline on this fire.
But what that means is that the keys to deciding what our kids will see, and what is acceptable and not acceptable for them to consume as content, were taken away from us as parents and given to corporations that are driven by pure greed.

(34:58):
Add AI to that, and it gets significantly worse, because it has a lot of other implications: the content can be tailored in real time to keep them on the platform and push them in the direction that keeps them more and more engaged, because that's how these companies make money.
I think this is unacceptable, and I have said many times on this show: we need government regulation defining what is acceptable and

(35:20):
what is not acceptable with AI, at least in broad strokes, at least defining the red lines in the sand that you cannot cross.
Moreover, and I've said this multiple times, I think we have to get to a situation where there's international collaboration between governments, academia, industry leaders, and so on, to define those lines and boundaries across multiple aspects of AI implementation and its impact on our future

(35:43):
society.
Because the fact that we are going to, let's say, block Facebook or Instagram, Meta in other words, from doing it does not mean TikTok won't do the same thing, or any other platform from an international provider, and then our kids would still be exposed to it.
And so I really, really think that this is becoming a necessity.
And the sooner we get there, the better for all of us, and definitely the better for future generations.

(36:06):
Now, the flip side of that, which I really hope somebody will develop, is an AI that will monitor what our kids are watching.
So if we go back to the Google phone, that would've been an extremely powerful feature that would drive every parent to buy the phone.
If the AI on the phone can monitor every means of communication on the phone, apply safe guardrails for kids, report to the parents, and block the stuff that needs

(36:28):
to be blocked, while providing a safer environment for the kids and still allowing them to use any application they want, that would be absolutely magical.
I would buy a phone like that for my kids and replace the phones I have right now, regardless of how much it costs, because I think it's the right thing to do.
So I really hope these companies will pick up the glove and build something like this into their AI tools, to monitor all the channels of communication and make engaging with the

(36:49):
digital sphere safer for our kids.
Now we'll switch to rapid-fire items, but the next item, while it's rapid fire, is a big topic that we've discussed many times before, which is the impact AI is having on jobs, and on entry-level jobs in particular.
There's a new article from The Hill this week called There Are No More Entry-Level Jobs Anymore. What Now?
So they've done a survey that found that nearly 80% of hiring

(37:11):
managers predict AI will lead companies to eliminate internships and entry-level roles.
Over 90% of IT jobs are expected to be transformed by AI, and nearly 40% of those are entry-level jobs.
And why is that?
Because routine tasks like drafting press releases, conducting basic research, and summarizing information are the stuff that used to be entry-level

(37:32):
staples, and now AI handles them much better, much faster, and much cheaper, which means that the expectations for a new employee are significantly higher, which is a problem.
So what is the solution?
Well, the solution is better education and better preparation in universities and colleges for actual real life.
It's the same thing that I've been preaching all the time: educating people on how to use AI is critical for their wellbeing.

(37:55):
The same thing is true for young adults still in colleges and universities, but for them it has to be combined with real-world experience on actual job-related topics.
This will require a complete reimagination of our higher education system, which will require reimagining the entire education system.
But I think that bringing in work-integrated learning paths,

(38:15):
meaning a concept similar to an internship, but in university, working on actual projects and the actual things that you're going to be required to work on, in combination with providing AI training and education for young adults, will actually give them an advantage.
It will create a situation where they can be highly valuable to companies, because you can now hire somebody who has relevant

(38:36):
experience, maybe not in a company, but relevant experience and AI knowledge, and you can hire them on the cheap.
That is a huge opportunity, and again, I think the colleges and universities that figure this out and put programs like this in place will see a huge spike in demand and will do a great service to the students who attend these institutions.
Now, on the same topic, AWS CEO Matt Garman called

(38:56):
replacing junior employees with AI tools, and I'm quoting, the dumbest thing I've ever heard, emphasizing that this is the most cost-effective part of your workforce, right?
These are people who are highly talented, usually highly driven, and they work for a lot less money.
And as he's stating, they have a higher familiarity with AI tools, because they're younger and more open to technology and change.

(39:17):
He also warned about something that I've said multiple times on this show: eliminating junior roles risks a huge future skill gap, which basically means, if you're not hiring any junior people, how are you gonna have senior people with that experience 10 or 15 years down the road?
The answer is: you won't.
And he advocated for, guess what?
Educational reform, emphasizing skills like critical

(39:37):
reasoning, creativity, and a learning mindset over narrow technical training, which will actually prepare people for the real workforce, especially in the era of AI.
And I could not agree more.
In this past year, we had multiple examples of companies that have made a huge bet on AI and taken significant moves in that direction.
A recent article

(39:58):
on this topic shared that Eric Vaughan, the IgniteTech CEO, has laid off nearly 80% of his workforce since 2023, most of them for resisting AI adoption.
And he achieved 75% EBITDA margins by the end of 2024 because of that play.
So, what does he share from his journey?
First of all, he mandated AI Mondays, basically entire

(40:21):
workdays dedicated to employees learning about and developing AI projects.
He also invested what would've been 20% of the company's entire payroll in training, tools, and prompt engineering classes.
So his focus was not letting people go; his focus was the right focus: let's give people the tools, the knowledge, the education, and the time to actually experiment and learn how to use

(40:42):
AI.
I definitely think this is the right approach.
This is what I promote with all the companies that I work with, and I work with everyone from very large US Fortune 500 corporations all the way to small startups.
In all of them, the focus is the same: let's provide people the time, the resources, the tools, and the education in order to thrive in the AI era.
But what he's saying, and I'm quoting: it was extremely

(41:03):
difficult, but changing minds was harder than adding skills.
Basically, he's saying that the pushback from employees against AI was the biggest challenge he had to go through, more than teaching people how to use AI properly.
Hence the massive layoffs of the many employees who were resisting the usage of AI in the company.
Another interesting thing that they did is that the company

(41:25):
restructured under the Chief AI Officer, so all the divisions in the company report to the Chief AI Officer, in order to enable an AI-centric organization and avoid silos of data, operations, and so on, and to maximize the benefits from AI.
This is the first time I've heard of a company that actually takes all the different aspects of the company and puts them under an AI-centric function, but I must admit that if your goal is to

(41:47):
revolutionize your business and turn it into an AI-first company, this is not necessarily a bad idea.
So while this article was trying to portray him as somebody who fired most of his team, I think the lessons learned here are very similar to what Klarna learned over time.
If you remember, we talked about Klarna many times on this podcast: they went all in on AI as soon as it came out,

(42:07):
already in 2023.
They had a solid partnership with OpenAI, they developed a lot of tools, they fired a lot of people, and they froze all hiring in the company.
And then X number of months later, they started rehiring, because they understood that they need a hybrid approach where humans and AI work jointly in order to really achieve the best results the company can.
And I think that is the right approach moving forward.

(42:28):
What is the exact balance?
I think that's gonna be a little different for every company, and I think it's gonna be a learning process as this thing evolves.
And one of the biggest problems is that there are no proven frameworks for this yet.
So we're basically in uncharted territory, and every CEO and every company is trying different things.
This will obviously evolve to a point where there are best practices, and we'll be able to mimic what other people are doing successfully in our own businesses.

(42:49):
While this looks at a single company, now let's look at a broader view.
MIT just published the results of a survey they recently did, trying to understand the impact of AI implementation at companies.
The study was based on 350 employee surveys and 300 public AI deployments across multiple industries and company sizes, and they found a lot of interesting things.

(43:10):
The first thing they found is that 90% of employees at companies use personal AI tools for work.
That's compared to 40% of companies with official large language model subscriptions.
This is not new; the same findings have come up several times in the past year and a half.
This was labeled the Bring Your Own AI to Work phenomenon, and MIT calls it the Shadow AI Economy.

(43:32):
Basically saying that most employees today understand the power of these tools; they have a tool that they use at home, and they're using it for work, whether it's allowed or not.
They also found that despite $30 to $40 billion of investment in the gen AI applications deployed through the companies that were surveyed, 95% of organizations report zero profit impact from their formal AI initiatives, which

(43:52):
in many cases are stuck in pilot stages and never made it into full deployment.
The flip side is that the shadow AI users who leverage tools like ChatGPT, Claude, and so on are seeing immediate results, at least on the individual level, for things like drafting emails, basic analysis, and other daily tasks.
And the people who are using them are stating that they're flexible, easy to use, and provide immediate value, versus a huge

(44:15):
company-specific, in-house deployment that takes forever and doesn't necessarily provide the value.
So, a few recommendations come from this survey.
The first one is education.
If you train your employees and teach them how to use the day-to-day tools that exist off the shelf, and you show them how to use them safely, you'll gain much faster results than trying

(44:35):
to build a large-scale deployment and tool in-house.
Again, this is proven across 300-plus initiatives.
The report actually suggests that organizations should embrace the shadow AI pattern: basically, show people how to use AI, train them on the day-to-day tasks, show them specific use cases, show them how to use it effectively and safely, and let them run with it and encourage

(44:57):
that.
This is exactly what I've been doing for the past two years.
Literally all my trainings go in this exact direction: let's use the day-to-day tools, let's not develop a $3 million solution that will take six to 18 months to deploy, and let's start benefiting from these amazing tools right now.
This is proving extremely valuable and extremely helpful for every single company that I work with, and I do these kinds

(45:19):
of trainings now almost every single week.
So whether you hire me or somebody else, it doesn't matter.
What matters is that you need to hire somebody who will help you identify day-to-day use cases, develop AI solutions for them, and train your employees on how to, A, use AI for these use cases, but more importantly, B, develop a lens for viewing any problem in their work and any

(45:39):
task through an AI lens.
Once you have that, you are unstoppable, and you can make all your employees unstoppable by giving them those capabilities, so they can view problem-solving and task completion through a completely different paradigm than they're using right now.
We spoke about Meta's problematic agent alignment, but there is other big news from Meta this week.
They are going through another restructuring of their AI

(46:01):
efforts.
It's the fourth in six months.
Now, is it really a reorg, or is it just them formally establishing how the Superintelligence Labs will work? It's unclear.
I think it's probably something in between, but the reality is we're starting to learn how this new structure is actually going to work.
So the Superintelligence Labs are actually going to be broken into four different labs.

(46:23):
One is internally called the TBD Lab, and I didn't make up the name.
That is going to be the one that trains large language models and explores new directions, such as an omni model that will see the world through different modalities.
There is going to be a product team that will develop, well, products, like the Meta AI assistant.
They're maintaining the Fundamental AI Research group, also known as

(46:43):
FAIR.
And they're going to be focusing on long-term research.
And the last group is gonna be in charge of infrastructure.
All of them are going to report to Alexandr Wang, who in his memo to the team said superintelligence is coming, and that this is their way to be best prepared for it.
As part of this reorg, Meta is dissolving the AGI Foundations

(47:05):
team, which was its original major AI unit, and they're redistributing the talent from that group across the four new groups that were just created.
They also announced a hiring freeze for the Superintelligence team, which is a complete reversal from the insane shopping spree they were on, trying to poach people from other companies.
But while this is going on, there was also an announcement

(47:27):
this week that Meta poached Apple's AI executive Frank Chu, who led teams on cloud infrastructure, training, and search. So I guess they're freezing the broader hiring but still looking for very high-profile talent to come and join this team. Where does this leave the two main figures in Meta's AI from before, Rob Fergus and Yann LeCun? They're still around; they're just going to report to Wang under the solidified structure.

(47:48):
Rob will continue to run the FAIR department, and Yann is going to be the chief scientist of this organization. What does this teach us about Meta? A, they were struggling and not happy with their AI results. B, they are finally deciding on the structure of this new group, which I think was a vague idea in Zuckerberg's brain. Now that they have the people in place

(48:10):
and they have Wang in place, they have decided on how this group is going to work. So while this is, yes, another reorg at Meta, I think it's part of the superintelligence reorg and not a new reorg after the establishment of the superintelligence group. Anthropic made several interesting announcements this week as well. The first one is that they equipped the Claude Opus 4 and 4.1 models with the ability to terminate conversations that

(48:32):
represent rare cases of persistently harmful or abusive user interactions. Which in concept is great, but the reasoning they provided is that this move has been made in order to protect AI welfare. I find this problematic, and a lot of other people on X and Reddit find this problematic. They make it sound as if AI is human, and the fact that we're

(48:53):
being abusive to it is gonna hurt the model. I personally do not believe that's the case. I think these models are incredible. I think they do amazing things, but I think they're still statistical models that pick words in a very sophisticated way and can do amazing math and write code based on patterns they've learned in the past. But I think they have exactly zero emotions. So on one hand, I think terminating abusive

(49:13):
conversations makes perfect sense. I don't want people using this kind of terminology for anything, because that will normalize it for them. But on the other hand, saying we do this in order to support AI welfare is a little foo-foo to me, and I think maybe not the right approach, but you may think otherwise. I would actually love to hear your thoughts on that, and if you have thoughts, please share them with me on LinkedIn.
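To make the mechanics concrete, here is a minimal sketch of how a chat wrapper could end a session after persistent abusive turns. This is an illustration only, not Anthropic's published implementation; the is_abusive() check and the threshold are hypothetical stand-ins for a real moderation classifier.

```python
# Minimal sketch of a conversation-termination guardrail.
# NOT Anthropic's implementation; is_abusive() and ABUSE_THRESHOLD
# are hypothetical stand-ins for a real moderation classifier.

ABUSE_THRESHOLD = 3  # consecutive flagged turns before ending the chat
ABUSIVE_MARKERS = ("kill yourself", "i will hurt you")  # toy example list

def is_abusive(message: str) -> bool:
    """Toy classifier: a real system would call a moderation model."""
    lowered = message.lower()
    return any(marker in lowered for marker in ABUSIVE_MARKERS)

def model_reply(user_message: str) -> str:
    """Placeholder for the actual model call."""
    return f"(model response to: {user_message!r})"

class GuardedChat:
    def __init__(self) -> None:
        self.flagged_turns = 0
        self.ended = False

    def handle_turn(self, user_message: str) -> str:
        if self.ended:
            return "This conversation has been ended. Please start a new chat."
        if is_abusive(user_message):
            self.flagged_turns += 1
        else:
            self.flagged_turns = 0  # a single heated message does not count
        if self.flagged_turns >= ABUSE_THRESHOLD:
            self.ended = True
            return "Ending this conversation due to repeated abusive messages."
        return model_reply(user_message)
```

The design point worth noticing is the counter reset on non-abusive turns: only persistent abuse triggers termination, which matches how Anthropic describes the feature applying to persistent, not one-off, interactions.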

(49:34):
Anthropic also added Claude Code to their enterprise-level platform. So far it was available to individuals, so it's kind of the reverse of what you would think; they took the opposite approach from most other deployments. Claude Code was available to anyone: you can have it and run it on your computer as well, and on the Claude platform you can get the instructions on how to install and run Claude Code. Well, now it's available as part of the enterprise package, and

(49:56):
it's already yielding very impactful results for many people who were quoted in this article. As an example, a company called Altana reports two to 10x faster development, and I'm quoting: "Claude Code and Claude have accelerated Altana's development velocity by two to 10x, transforming how we build sophisticated AI and machine learning systems." This is a statement by

(50:18):
their co-founder and chief scientist. As part of the push toward the enterprise, there is a new Compliance API that provides real-time access to usage data and content, enabling automated monitoring, policy enforcement, and regulatory compliance for everything that's done at the enterprise level. So it's not exactly the same-level tool that is available to
us common people. It has additional layers of control, which is a great approach, and it

(50:40):
will probably drive even more adoption of Claude tools across the enterprise.
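For teams wondering what plugging into such an API might look like, here is a hedged sketch of an automated policy check. The endpoint URL, record fields, and banned-terms logic below are all assumptions for illustration; consult Anthropic's documentation for the real Compliance API surface.

```python
# Hypothetical sketch of polling a usage/compliance endpoint.
# The URL, response fields, and "banned terms" logic are placeholders,
# not Anthropic's actual Compliance API shape.
import os

import requests

API_BASE = "https://api.example-compliance-endpoint.com/v1"  # placeholder URL

def fetch_recent_usage(limit: int = 100) -> list[dict]:
    """Pull the most recent usage records for automated policy review."""
    response = requests.get(
        f"{API_BASE}/usage",
        headers={"x-api-key": os.environ["COMPLIANCE_API_KEY"]},
        params={"limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["records"]

def find_policy_violations(records: list[dict]) -> list[dict]:
    """Flag records whose content matches internal policy terms."""
    banned_terms = ("customer ssn", "internal codename")  # org-specific list
    return [
        record
        for record in records
        if any(term in record.get("content", "").lower() for term in banned_terms)
    ]

if __name__ == "__main__":
    violations = find_policy_violations(fetch_recent_usage())
    print(f"{len(violations)} records need compliance review")
```

The point of a real-time feed like this is exactly what a script like the above hints at: compliance teams can automate review instead of sampling chats after the fact.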
Another big and interesting move from Anthropic this week is that they just launched a higher education advisory board. Its goal is to guide Claude's role in teaching, learning, and research. This board is gonna be chaired by Rick Levin, who is a former Yale president and the former CEO of Coursera.

(51:03):
The board has multiple experts, leading people in education from different universities and different levels of education, and they've already deployed three new courses for free for educators. One is AI Fluency for Educators, the other is AI Fluency for Students, and the third is Teaching AI Fluency. All of those are available free to use. So if you are in the education field, this could be a great

(51:23):
opportunity for you to learn how to apply AI effectively in the teaching arena. Like a lot of other things that Anthropic is doing, I think this is a great move in the right direction. If you've been following this podcast, you know that I speak about the amazing opportunity that we have with education right now, and creating a group of people who have connections and influence in that universe and driving adoption in

(51:45):
higher education in the right direction is a great move by Anthropic. We cannot complete an episode without talking about billions of dollars, or hundreds of millions of dollars, of investments changing hands as part of the AI growth. So, OpenAI staff will now be able to sell their stock in a secondary stock sale totaling $6 billion in shares.

(52:07):
A lot of it is gonna be sold to existing investors like SoftBank and Thrive Capital, but there are also other investment groups that are a part of this. This secondary stock sale is going to value OpenAI at $500 billion, up from $300 billion just a few months ago when they did the previous raise. So in addition to dramatically increasing the valuation of

(52:28):
OpenAI, this obviously provides a liquidity event for many existing and former OpenAI employees. This does two things. The first thing it does is create many, many new millionaires and multi-millionaires who are OpenAI employees or former employees. But it also does something that I think may achieve the opposite outcome of what OpenAI wants.

(52:48):
As we discussed in previous weeks and months, there is fierce competition for talent between the different labs, with Meta recently offering tens of millions and even hundreds of millions of dollars in signing bonuses to leading scientists from other labs, including OpenAI. Well, a liquidity event basically gives people the opportunity to cash out from OpenAI and then go work somewhere else, because they already have some of their stock

(53:09):
or stock options converted into cash, and then there's a new opportunity that may drive more money. So we'll see how this plays out. I'm very curious about this, but I think we might see a lot of people leaving OpenAI because of this strategy. Staying on a similar topic, a group of former OpenAI researchers has launched the Zero Shot fund.

(53:30):
It is a hundred-million-dollar VC fund to back early-stage AI startups, and it signals the growing influence of what is now the OpenAI Mafia. The mafia term obviously comes as a reference to the PayPal Mafia, the multiple people, including Elon Musk and Peter Thiel and other known figures in the VC world today,

(53:52):
who left PayPal and started VC funds and other companies. The same thing is happening with OpenAI right now, only on an even bigger scale. So if you think about the quote-unquote OpenAI Mafia starting new companies: you have Anthropic, which raised $7.2 billion. You have Safe Superintelligence, which now has a $32 billion valuation, with Ilya Sutskever. You have Thinking Machines Lab, with $2 billion raised at a

(54:13):
$10 billion valuation. And these are just the leading examples. There are a lot more companies that were started by people who left OpenAI. This obviously ties back very well to the bubble topic that we talked about before. So on one hand, people gained amazing experience and were able to cash out from OpenAI. On the other hand, they're going to start companies, raise a lot more money, and keep this cycle going. Two interesting releases happened this week.

(54:34):
DeepSeek released version 3.1, which scores higher on the Aider coding benchmark than Opus 4. It's doing this while costing roughly 60 times less to run: Opus 4 costs about $70 per million tokens of output, while DeepSeek 3.1 costs only around $1.01. Like all the recent models, it is a hybrid architecture that

(54:56):
integrates chat, reasoning, and coding into one universe. And the fact that it is open source and available for you to download and run freely, hosting it either locally or on any cloud provider that you want, makes it, I think, highly interesting for companies. The only problem is that it comes from China. As we shared in the past few weeks, DeepSeek is trying to disengage itself from China, closing all its Chinese operations and moving them

(55:18):
outside of China. Will that really make them a mainstream company in the US? I don't know. But they're definitely trying, and they definitely have a very powerful and capable model that, again, is completely open source.
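Because the weights are open, one plausible deployment pattern is to serve the model yourself and query it through an OpenAI-compatible endpoint, for example with vLLM. Here is a minimal sketch, assuming you have already downloaded the weights and started a local server; the localhost URL reflects that assumption.

```python
# Sketch: querying a self-hosted DeepSeek V3.1 behind an OpenAI-compatible
# server. Assumes you started one yourself, e.g. with:
#   vllm serve deepseek-ai/DeepSeek-V3.1
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # your own server, no external API
    api_key="not-needed-for-local",       # local servers often ignore the key
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3.1",  # model ID as served by your host
    messages=[
        {"role": "user", "content": "Summarize the pros and cons of self-hosting an open-source LLM."}
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```

This is part of why open weights matter for the China concern: if you host the model yourself, no data ever leaves your infrastructure.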
The second interesting topic that I want to share with you from a technology perspective this week is that former Twitter CEO Parag Agrawal is growing his company, Parallel Web Systems Inc.

(55:38):
They just launched a deep research API. They're claiming that it is better than GPT-5 and human researchers when it comes to doing web research tasks, and that its agent capabilities enable it to do a lot more than just research. Agrawal envisions that AI agents will take over the web, and he's stating, and I'm quoting, "There'll be more agents on the

(55:59):
internet than there are humans around. You'll probably deploy 50 agents on your behalf." And what he's referring to is that we're gonna have multiple agents doing multiple things for us, including searching the web, collecting data, and doing things on our behalf across the board. Again, that sounds a little like science fiction, but this is the direction this is all going, and there are more and more companies pushing very aggressively in that direction.
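The episode does not cover Parallel's actual API surface, so purely as an illustration, a deep-research-style call from your own code might look something like this; the URL, parameters, and response shape below are hypothetical stand-ins, not Parallel's documented interface.

```python
# Illustrative sketch of calling a deep-research-style API.
# Everything here (endpoint, fields, "max_sources") is a hypothetical
# placeholder, not Parallel Web Systems' real API.
import os

import requests

def run_deep_research(question: str) -> str:
    response = requests.post(
        "https://api.example-research-api.com/v1/research",  # placeholder URL
        headers={"Authorization": f"Bearer {os.environ['RESEARCH_API_KEY']}"},
        json={
            "query": question,
            "max_sources": 25,  # hypothetical knob: how widely to search
        },
        timeout=600,  # deep research runs are long compared to chat calls
    )
    response.raise_for_status()
    return response.json()["report"]

if __name__ == "__main__":
    print(run_deep_research("Who are the leading vendors of AI-powered kids' toys?"))
```

The long timeout is the tell-tale design choice: research agents browse, collect, and synthesize over minutes, not the seconds a normal chat completion takes.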

(56:21):
The last two pieces of news for the day are interesting, unique, scary, weird, call them whatever you wanna call them. The first one is not as bad: Curio is a company that is now delivering AI-powered stuffed animals. They're marketed as screen-free alternatives for kids, basically saying, move your kid away from the screen by instead giving them a stuffed animal that can actually talk to them and

(56:42):
engage with them. Do I think this is better than looking at screens? Maybe. I don't think we have enough science to prove it one way or the other, but it's definitely an interesting alternative, and I think we're gonna see more and more of that. Not necessarily plushy animals, but AI-infused toys in general as an alternative for kids to their

(57:02):
screens. Is there a risk of getting addicted to that as well? Absolutely. Do we know who controls that? Going back to the Meta conversation: who decides what's acceptable or not acceptable for these stuffed animals to say or do? Again, there are a lot of unanswered questions on that, but I definitely see engaging with AI-activated toys as part of the future for our kids.

(57:22):
And then the weirdest news I have maybe ever shared on this podcast, and there's been a lot of weird news, is that Kaiwa Technology, which is a Guangzhou-based robotics firm, is developing the world's first humanoid, what they call a pregnancy robot, that has an artificial womb, and they're aiming for a debut in 2026.

(57:42):
The goal is to be able to deliver one of these for less than $14,000. The idea behind it is to bypass China's surrogacy ban and to try to help with the rising infertility in China. So there's a high rate of infertility in China that is growing every single year, and at the same time, surrogacy is banned in China. So they're building a robotic womb that will be able to grow

(58:06):
human babies. Now, is this even scientifically possible? Well, there are scientists who are saying this will never happen. But on the other hand, in 2017 there was an experiment dubbed the biobag that was able to sustain premature lambs for a very long time, connecting to their umbilical cords and providing an artificial womb for them.

(58:26):
So can this happen or not? I'm not a hundred percent sure. Again, much smarter people than me are sitting on both sides of the aisle on that particular topic. The only thing it brought to my mind is that in addition to the fact that we are driving ourselves closer and closer to a superintelligence that will control everything, with potentially Terminator-style results, again, nobody knows,

(58:46):
we're also developing the capability that will enable the AI to do what happens in the movie The Matrix, where we just grow humans in boxes. So sadly, we are gonna end this particular episode on a really weird and eerie note. I really hope that is not going to happen. I think there are better ways to solve the problem, but if this gives a spark of hope to people who cannot have babies, that they might be able to do this through a digital partner,

(59:10):
well, then there's one positive thing out of this. We'll be back on Tuesday with a fascinating episode, in which we will show you exactly how to use AI combined with n8n in order to harvest your target audience from LinkedIn in a very simple way. We're literally gonna give you the blueprint on how to do that. So even if you know nothing, you'll be able to do this at the

(59:30):
end of the show. That's it for today. Have an awesome rest of your weekend. Keep on experimenting with AI, keep sharing it with the world. And if you are in San Francisco and you're listening to this before Monday the 25th, please connect with me on LinkedIn and come join me on Monday evening. Have a great rest of your weekend.