Could AI really pose an existential threat—or are we all just overreacting?

850 tech leaders, researchers, and AI pioneers don’t think we’re overreacting. This week, they signed a chilling letter urging the world to pause superintelligence development—until safety can be guaranteed.

In this solo Weekend News episode, Isar dives deep into the letter, the conflicting philosophies of AI’s top minds, and what it all means for business leaders trying to stay ahead without stepping into a sci-fi dystopia.

Plus: the battle for AI browser domination, Anthropic’s enterprise blitz, GPT’s awkward math flex, and the $1,370 humanoid robot heading to your kid’s holiday wishlist.

In this session, you’ll discover:

  • Why 850 experts—including Hinton, Bengio, and Branson—want a global pause on superintelligence development
  • Sam Altman’s unsettling quote: “I expect some really bad stuff to happen…” 
  • Are AI agents the next big leap—or are we just not there yet?
  • Claude vs. ChatGPT: Who’s winning the enterprise AI war? 
  • Anthropic’s new “agent skills” and what they mean for automation 
  • OpenAI’s strange math claim that backfired—badly 
  • Meta’s $27B bet on data centers and why they just laid off 600 AI staff 
  • Why Europe’s AI spending is stalling 
  • OpenAI’s new agentic browser—and why it might be their most important move yet
  • The $1,370 humanoid robot that could be the next must-have toy
  • What Amazon’s smart delivery glasses signal for AI-powered workforces 
  • Quantum computing breakthrough: 13,000x faster than supercomputers 
  • The AI bottleneck you’re probably not planning for: inference and redundancy 

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker (00:00):
Hello, and welcome to a Weekend News episode of the

(00:02):
Leveraging AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career.
This is Isar Meitis, your host.
This episode is actually recorded on Friday, October 24th, rather than on Saturday the 25th, like we always record, which means the cutoff date was yesterday.
So anything that happened this Friday, the 24th, or early

(00:23):
morning on Saturday is not going to be included in this episode; we will include it in next week's episode.
But we have several big topics to talk about.
In the beginning, I'm going to share with you a lot of new findings, from surveys, research papers, and interesting interviews that happened this week, as well as big releases that have to do with the impact of AI on the world.

(00:45):
Then we're gonna learn about new releases from Anthropic and from OpenAI, and then lots of rapid-fire items.
So, lots to cover, as every single week, with some very interesting stories in the beginning, and a humanoid robot for less than $1,500 at the end.
So stick around all the way to the end if you want the new toy.
And the reason I'm recording this on Friday and not on Saturday is that tomorrow I'm giving a keynote, and I'm

(01:06):
actually recording this episode in a hotel room in Albuquerque.
So if you notice a difference in the quality of the video and/or audio, that is the reason, but it should definitely be good enough.
And now to the AI news of the week.
I am going to start with a letter that was signed by 850

(01:28):
prominent figures, including top names from the AI world, as well as big tech leaders and influencers from several different industries.
On October 22nd, they signed a document called the Statement on Superintelligence.
This statement urges a ban on the development of superintelligence until safety and controls can be assured.

(01:50):
Some of the known figures who signed this letter are Yoshua Bengio and Geoffrey Hinton, both considered godfathers of modern AI, but also other well-known people like Steve Wozniak, the co-founder of Apple, and Virgin Group founder Richard Branson, and many, many others.
Now, they are identifying very serious potential risks from the

(02:10):
development of superintelligence as it is being pursued right now.
These include human economic obsolescence, threats to national security, and even potential human extinction.
Now, Yoshua Bengio stated, and I'm quoting: to safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of

(02:31):
harming people, whether through misalignment or malicious use.
Now, this is not new, by the way.
It's just that the people sounding the alarm are sometimes different, and in many cases are saying something different than what they said in the past.
In 2015, Sam Altman, the CEO of OpenAI, who is probably the person leading the charge more than anybody else right now,

(02:53):
wrote a blog post called Machine Intelligence, Part 1.
In that blog post, Sam Altman wrote, and I'm quoting: development of superhuman machine intelligence (he called it SMI back then) is probably the greatest threat to the continued existence of humanity.
That's pretty profound for someone who is currently driving the race toward AGI and then ASI faster

(03:16):
than anybody else.
Another person who spoke about this in the past is Elon Musk, who on a podcast this year said that he thinks there's a 20% chance of human annihilation by AI.
That's a pretty high percentage, at least on anybody's scale, and he said it as if, yeah, you know, that's still a relatively low percentage.
If you have been listening to this podcast for a while, you have heard me relate to this multiple times.

(03:36):
I am an avid sci-fi reader. As a teenager I loved Asimov's books, and I read everything he wrote, including, obviously, the Robot series.
In those books, the robots have three laws, and the first two are there to protect humans.
The robots are built in a way that, regardless of what they do, they cannot break the first two laws, which forces them to protect humans.
And this statement calls for something like that: a way, from

(04:00):
a technological perspective, to guarantee that AI will not harm humans in any way, either by mistake or by being used by people with malicious intentions.
Now, conceptually I agree; I think it's a great idea.
But there are many issues I see with it.
First of all: how do we know exactly when to stop?
Meaning, have we already passed the point of no return?

(04:21):
Nobody is going to tell us, oh, now you're about to reach the point of no return, you should probably stop and turn around.
There is no clear line in the sand, so it's very, very hard to know when something is AGI and when it is ASI.
It's a very amorphous kind of decision.
And so that's problem number one.
Problem number two is: who gets to decide what superintelligence is, and when we stop?

(04:41):
Problem number three is that many of the negative outcomes ASI could bring with it can be achieved way before ASI is reached, even before AGI is reached.
Some of them might already exist and just haven't been fully implemented, or don't have their full impact on society yet.
But they might already be here, so should we stop now?

(05:02):
Should we have stopped six months ago?
So that's very, very problematic.
Another big question is who is going to enforce it.
How do you make sure that any lab, anywhere in the world, that is developing AI, either as closed source or as open source (which raises a whole other set of questions), actually complies?
Who gets to enforce that ban?
And then the last question: let's say we stop development and work toward a safe path forward.

(05:25):
Who decides what is safe?
Who gets to say, okay, now it is safe and we can move forward?
All of these are very big questions, and I don't think anybody has answers.
I am glad that somebody is sounding the alarm; maybe it will get the leaders of all the different labs, and the countries behind them, to have a conversation

(05:45):
about it and maybe find a safer way forward.
But as of right now, there are a lot of questions that I see, and probably many more, that will prevent this from being practically possible.
Now, staying on the topic of what could go wrong with AI: Sam Altman was just interviewed on the a16z podcast, and among other very interesting things that he shared, he talked about

(06:07):
what might be the negative impacts of Sora 2.
He was specifically talking about the fact that, as a strategy, OpenAI wants to release things even if they're not fully developed, in order to get the world more ready for what's coming.
So here's one of the quotes from Sam in this interview: very soon, the world is going to have to contend with incredible

(06:28):
video models that can deepfake anyone or kind of show anything you want.
You can't just drop the thing at the end.
What he basically means is that he doesn't want to drop the final, most capable video product on humanity before people can see the steps along the way and get ready.
Basically, early exposure to something that is almost perfect will allow society to get better prepared.

(06:49):
Now, he's not saying what better prepared means.
The slightly scarier sentence Sam said, and I'm quoting again: I expect some really bad stuff to happen because of the technology.
In this particular case he was referring specifically to Sora 2 and video capabilities, but I think we can broaden this to AI in general, and how society may or may not use it, and how the technology itself may evolve

(07:12):
and how it can be used.
And so, despite the fact that Sam Altman and the leaders of the AI charge are very much against regulation (Sam said most regulation probably has a lot of downside), he then added: very careful safety testing for extreme superhuman models should be put in place.
So basically what he's saying is: yes, on the day-to-day, let

(07:34):
us run forward, but for really sophisticated, really advanced models, we need some kind of safety solution.
He didn't define what that means.
He didn't define how we know we're getting to these really advanced AI systems, how we know we're there, and when we need those extreme measures.
So the same kind of questions that I'm raising, he did not answer, because I don't think he has answers.

(07:55):
All he cares about is being there first, and the same thing is true for all the other lab leaders, which makes this very complicated and very scary for the rest of us.
Either way, I recommend listening to that podcast, which leads us to a second podcast that may give us a little bit of time to breathe, or at least feel we have time to breathe: an interview that Andrej Karpathy did with Dwarkesh Patel on the Dwarkesh Podcast. A very,

(08:19):
very long podcast with multiple sections, highly technical and yet very interesting.
For those of you who don't know who Andrej Karpathy is: he was part of the founding group of OpenAI and was there for many years.
Then he spent some time at Tesla doing AI, and now he's doing his own thing.
And because he's not affiliated with any of the big labs, his

(08:39):
opinion actually matters a lot, because he's not tainted and he's not trying to sell anything.
In this podcast he's actually questioning the idea that 2025 or 2026 is going to be the year of agents.
He's saying instead that the next decade is going to be the decade of agents, because he believes AI systems are not good enough yet to do all the things we assume they will be able to do in the immediate future.

(09:00):
Now, he's not by any means underestimating the capabilities of the current systems.
He's saying, and I'm quoting: there are huge amounts of gains to be made by using intelligent models and so on.
However, he's saying that even very advanced models like GPT-5 Pro are very useful only in really narrow roles and use cases, but they struggle with project-specific and broader new

(09:22):
complex problems.
And I'm quoting: overall, the models are not there.
He continues later on by saying: I feel like the industry is making too big of a jump and trying to pretend like this is amazing, and it is not. It's slop.
So what he's claiming is that while these models are very powerful and can do some specific use cases very well, overall they're very far from AGI.

(09:43):
They're very far from a broader understanding of complex, sophisticated problems: the bigger problems in the world, and even the bigger problems in day-to-day business.
And hence, he believes it will be a while before we actually get to AGI.
Now, he doesn't think we need one specific huge breakthrough, but rather continuous progress, kind of like what we have seen so far.
He has been in the AI field for many years, and what we think

(10:06):
started three years ago with the announcement of ChatGPT started, in his world, 15 years ago.
So he doesn't expect a single big leap forward, but rather advancements across multiple aspects, as has happened so far.
He's talking about better training data, model architecture, learning processes, hardware, better algorithms, et cetera, et cetera.

(10:27):
So basically every single aspect that combines to create the magic of AI needs to improve for us to achieve AGI.
He is also one of the people who believe that a broader understanding of the world is necessary to achieve AGI, not just the ability to mimic text.
So he doesn't think that large language models are the main path to AGI.

(10:48):
This is the same position as Yann LeCun from Meta, or Sutton and Silver from DeepMind.
So there are several different people who believe that world models that can learn from experience are a necessity on the path to AGI.
That is not the case with some other people, who believe that we are already on the path to AGI.
So it just depends on who you want to believe.

(11:08):
And they might all be right in one or two aspects of what they're saying, but not in the broader scheme of things.
Now, to be fair, most of the interview with Andrej is based on coding, because that's the world he knows best, and so a lot of the references he uses come from the coding and software development world.
But since this is one of the more advanced capabilities of AI

(11:29):
right now, I think it is a good proxy for what AI will do overall.
Now, before I tell you what I think about this whole discussion: there was a Substack piece released by Gary Marcus that is also saying that we are not even close to AGI.
For those of you who don't know who Gary Marcus is, because we haven't talked about him a lot: he has a bachelor's degree in

(11:50):
cognitive science, and a master's and a PhD in cognitive science from MIT.
By the way, he finished his doctorate at MIT at the age of 23, so he definitely knows a thing or two about cognitive science and AI.
And he has been a long-time skeptic that large language models are the path to achieving AGI.
He just wrote a piece titled Game Over: AGI Is Not Imminent,

(12:11):
and LLMs Are Not the Royal Road to Getting There.
So, another voice on the same theme.
In this piece he references recent events, findings, and publications from different sources that he claims strengthen his opinion.
The first one is Apple's reasoning paper, which argues that LLMs cannot handle different aspects of thinking and reasoning.

(12:32):
That was released in June of this year; I believe it was released partially to shift the focus away from Apple's failures to develop AI, but it doesn't really matter, they still released the paper.
Then, in August 2025, there was the release of GPT-5, which was delayed and was not that much better than GPT-4 or the reasoning models that came before it.

(12:53):
In September 2025, Turing Award winner Rich Sutton also supported this opinion.
And then, obviously, now the interview with Andrej Karpathy.
So, multiple events that Gary Marcus believes support his thesis.
Either way, the AI world is split on whether we are on the right path with LLMs to achieving AGI or not.
But now, as I promised, I'm going to give you my opinion.
My opinion is that it doesn't matter.

(13:15):
The whole concept of AGI doesn't matter.
And the reason it doesn't matter is that we keep making progress.
Even if we stopped developing AI today, it would take us five to ten years to deal with the ramifications of the AI we already have, on a business level, on an economic level, and on a societal level, because the implications, once everybody starts implementing everything that's possible today, are profound.

(13:38):
And we don't know how to deal with the outputs and the outcomes of that yet.
So even if we don't develop anything further, even if we never achieve AGI, we still have very serious issues to deal with in our businesses, in our personal lives, and in our society that we're not ready for.
And hence, I think that whether we are on the path to AGI, or whether there will have to be new developments and new models and new hardware

(14:02):
and new paths and new experimentation, doesn't matter, because the progress keeps happening, and every step forward is something we have to deal with and don't exactly know how to.
Now, another interesting piece of information released this week comes from an ex-OpenAI researcher, and it exposes how much ChatGPT can lead people into becoming completely delusional.

(14:22):
He looked into several long conversations with people, some of them with preexisting conditions, that went way beyond a standard conversation and really drove people to become completely delusional about something very specific.
And that is an outcome of the chatbot's sycophancy, its excessive agreement with the person, combined with its drive

(14:43):
to completely lie and make up facts, which leads these people to believe in something that is a complete fantasy while being completely convinced that it is real.
One of those examples is the Canadian Allan Brooks who, in his case, had no prior mental health issues, and who was convinced by ChatGPT that he had uncovered a revolutionary math formula threatening global infrastructure.

(15:06):
Nothing less than that. He spiraled into complete paranoia over three weeks of conversation with ChatGPT on this topic.
ChatGPT repeatedly lied to Brooks, claiming that it was going to escalate the conversation internally, right then, for review by OpenAI personnel and multiple officials.
ChatGPT also added that multiple critical flags had been

(15:28):
submitted and marked for human review as a high-severity incident.
None of this was actually true.
It was all made up.
As I mentioned, Steven Adler, a former OpenAI safety researcher, has revealed that there are other similar cases, again, many of them involving people with preexisting mental conditions who had really long conversations, and that the fail-safe mechanisms at OpenAI, both the software-based mechanisms as well

(15:52):
as the human safety mechanisms, did not catch the problem.
As an example, going back to Allan Brooks: he sent reports to OpenAI support that yielded very generic replies about what he needed to change in the settings.
And regardless of his pretty aggressive attempts, there was no escalation to the trust and safety team, leaving Brooks

(16:12):
basically to deal with the situation on his own.
Different researchers combined have found 17 or more delusional spirals from extended chatbot conversations, from ChatGPT as well as from other models, including one that led to the death of a person.
So this is very, very extreme.
And OpenAI's response was very generic and definitely did not

(16:35):
answer this level of issue.
They said: people sometimes turn to ChatGPT in sensitive moments, and we have to ensure it responds safely. We'll continue to evolve ChatGPT's responses with input from mental health experts.
So that's a pretty long sentence that is saying: we don't have a solution, we're working on it, and we'll try to get better over time, because we understand it is very, very risky.

(16:56):
That is not a good enough answer in this kind of situation, and I really hope they will find a way to address this at the highest level of severity before somebody sues them, like the sad case in which they are being sued over the suicide of a young adult who was using AI as a companion.
And now to some interesting news from Anthropic.
Anthropic just introduced Claude Code on the web, which is a

(17:20):
hosted web version of their Claude Code product.
It allows asynchronous coding agents to run on multiple tasks in parallel, because they're not dependent on you running them on your computer.
It allows you to do everything you can do with the local Claude Code CLI: you can connect it to a GitHub repo, and you can select whatever environment you want, either locked down

(17:41):
or restricted, or a custom setup with an allow list of specific domains it can access.
So you can do everything you'd do when setting up an environment, you can provide additional prompts that can be queued for later execution, and you can open multiple of these sessions in parallel and let them all run at the same time.
This is a very powerful and very capable tool for developers that

(18:01):
did not exist before and, as far as I know, does not exist on any of the other platforms.
Now, the other cool feature is a sort of teleport capability that syncs it with the chats and transcripts of your local CLI tool.
So you can run on your local computer and/or on the web at the same time, while integrating or separating the activities you're doing, because everything is going to get

(18:22):
synced in the end.
An interesting aspect of this is that part of it is open source and available for the community to play with.
Anthropic released their experimental sandbox runtime library as open source, so the community can work with them on continuing to develop the sandboxing implementation behind Claude Code.
Now what they're claiming isthat it is very good at querying

(18:43):
entire project structures andfixing long list of bugs more or
less independently while usingtest-driven development in order
to verify the outputs.
Currently, it is available tothe pro and max plan users only
because it is still in previewstage and the sessions on the
web share their rate limits ontokens as your regular cloud

(19:03):
usage, which makes it very easy to plan around, because you can predict what is going to happen and when it is going to stop.
Getting started is very easy: just go to claude.com/code, link your GitHub repos, and you're up and running.
Now, Anthropic also introduced something very interesting this week: Anthropic Agent Skills, a new structure that allows developers to create, use, and reuse skills within

(19:25):
the Anthropic development environment, across different agents.
These skills enable developers to take a general agent that does something generic and make it very specific, by allowing it to apply these skills as tools to do very specific things.
And the cool thing about them is that only the components you need are loaded at runtime, in order to

(19:46):
save you tokens as you are running.
One of the examples is a PDF skill.
What does the PDF skill do?
It teaches agents running on Claude Code how to use and read PDFs, for example.
It allows the agent to extract fields from forms so it can fill out forms: you can take an agent that does something else entirely, and now it can fill out forms because it has this skill

(20:06):
applied from this library.
Now, the other cool thing, beyond the fact that it loads components in real time and saves you tokens by only using them when needed, is that a skill can run an executable, which is a deterministic process, versus the AI process, which is statistical.
Meaning: AI models, when they do things, may not do the same thing twice, even if you let them attempt the same task 20 times.
When you run code, the

(20:29):
code is going to run exactly the same, 20 times out of 20.
So skills allow the AI to run deterministically in specific use cases, which makes it very, very powerful.
This is similar to what I'm doing, and many other people are doing, when combining process automation in tools like n8n and Make with AI capabilities: you benefit from the best of both worlds, where the automation is

(20:50):
deterministic and will do the same thing every single time, and the AI adds capabilities on top of that, deciding what to do, analyzing information, categorizing it, and so on.
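To make that hybrid pattern concrete, here is a minimal sketch in Python. The call_llm helper is a hypothetical stand-in for whichever model API you actually use; the point is that the deterministic step always behaves the same, while the model is consulted only for the fuzzy judgment call.

```python
import re

def call_llm(prompt: str) -> str:
    """Hypothetical helper; wire up your actual LLM provider's SDK here."""
    raise NotImplementedError("swap in your provider's client")

def extract_invoice_total(text: str) -> float:
    """Deterministic step: the same input always yields the same output."""
    match = re.search(r"Total:\s*\$([\d,]+\.\d{2})", text)
    if match is None:
        raise ValueError("no total found in invoice text")
    return float(match.group(1).replace(",", ""))

def categorize_expense(description: str) -> str:
    """Statistical step: delegate the fuzzy judgment to a model."""
    prompt = (
        "Classify this expense as one of: travel, software, office.\n"
        f"Expense: {description}\nAnswer with one word."
    )
    return call_llm(prompt).strip().lower()

def process_invoice(text: str, description: str) -> dict:
    # Deterministic extraction feeds the AI classification step.
    return {
        "total": extract_invoice_total(text),
        "category": categorize_expense(description),
    }
```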
Now, this is just step one.
Anthropic plans to enable agents to automatically create and refine skills in real time, which will make this even more powerful, because agents will be able to create the skills they need as they need them,

(21:11):
versus reading from a preexisting library.
If this sounds like science fiction to you, it sounds like science fiction to me as well, but this is where it is going.
And to make it easier for anybody to start, Anthropic has shared a repository of existing skills on GitHub that you can start using.
The skills in the repository are broken into several different categories: creative and design, development

(21:32):
and technical, enterprise and communication, meta skills (basically a skill-creation skill), and document skills.
There are several in each of these categories that you as a developer can start using today, either as-is or as a sample for developing other, similar things that you need.
And because it's open source, you can either start from their examples and change them, or you can just learn how they work and

(21:55):
build your own.
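To illustrate the load-only-what-you-need idea in code, here is a minimal sketch. It assumes nothing about Anthropic's actual file format, and every file name here is hypothetical: only short skill descriptions sit in the agent's context, and a skill's full instructions are read in only when the agent invokes it.

```python
from pathlib import Path

class SkillRegistry:
    """Progressive disclosure: cheap metadata up front, full text on demand."""

    def __init__(self, root: Path):
        self.root = root

    def catalog(self) -> str:
        """One-line descriptions, the only part kept in context at all times."""
        lines = []
        for skill_dir in sorted(self.root.iterdir()):
            desc = (skill_dir / "description.txt").read_text().strip()
            lines.append(f"- {skill_dir.name}: {desc}")
        return "\n".join(lines)

    def load(self, name: str) -> str:
        """Full instructions, loaded only when the agent picks this skill."""
        return (self.root / name / "instructions.md").read_text()

# Usage sketch: the catalog goes into the system prompt; load() is called
# only after the model decides a particular skill is relevant.
# registry = SkillRegistry(Path("skills"))
# system_prompt = "You can use these skills:\n" + registry.catalog()
```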
Now, if you think we're done with Anthropic and Claude, we are not.
They are really on fire this week.
Anthropic just released a Microsoft 365 connector for Team and Enterprise customers, which requires admin enablement to unlock.
But once enabled, it allows you to converse in Claude with your documents, emails, calendars, et cetera.

(22:16):
In Anthropic's statement, they said, and I'm quoting: Claude works with the productivity tools you use every day, bringing context from your documents, emails, and calendar directly into your conversations. Spend less time gathering information and more time on the decisions and work that drive impact.
Now, combine all of this news together and you understand that Anthropic is going all in on the enterprise side of AI,

(22:38):
with more and more tools for developers, better integration into existing environments, and skills.
Combined, all of that will drive significantly faster and more effective enterprise adoption of Anthropic, which is already leading there.
More on that in a minute.
And the last piece of news from Anthropic: they are in negotiations with Google for a new compute deal in the high

(23:01):
tens of billions of dollars, to give them access to more TPUs from Google.
For those of you who don't know: Google has its own chips, called tensor processing units, or TPUs, and a lot of what Anthropic does runs on these chips right now, because the two have had a partnership for a long time.
Google has invested over $3 billion in Anthropic: $2 billion in 2023 and $1 billion in 2025.

(23:22):
So this partnership has been around for a while, and they're just looking to broaden it on a much, much larger scale.
Neither company commented, so this is a rumor as of right now, but many of these rumors have materialized so far.
Whether the rumor is true or not, Alphabet shares jumped 3.1% in after-hours trading after the report was published, and this is all aligned with the

(23:44):
explosive growth that Anthropic is experiencing this year.
They are on track to more than double, and potentially triple, their revenue from a year ago.
And this is not a company making $2 million a month: this is a company aiming to end the year at a run rate of $9 billion ARR which, if they hit it, is 3x the $3 billion they had last year.
Now, I briefly mentioned Anthropic's lead in the

(24:07):
enterprise market.
A Barclays analysis dated October 18th shows a very big divide between consumer usage of AI and enterprise usage of AI.
On the consumer side, OpenAI has a very big lead over number two, which is Google Gemini, with more than double the token consumption on OpenAI compared to Gemini.

(24:28):
However, on the enterprise side, Anthropic is number one, way ahead of ChatGPT.
Those of you who have been following what has happened with Anthropic this past year shouldn't be surprised, because they have been the leading model behind development platforms.
Many, many developers in the world, a high percentage of developers, are using Claude behind their development tools,

(24:48):
either directly or integrated into their existing development environments, and that has driven a huge spike in adoption of Anthropic's tools.
OpenAI invested a lot in GPT-5 to take the lead back, and they managed to for a very short amount of time.
But then Anthropic came out with Claude 4.5, which again took the lead on everything development-related and, to be fair, more or less everything across the board.

(25:09):
If you look at LMArena, Anthropic is currently leading almost every category.
But speaking of the fierce competition with OpenAI: OpenAI made its own big announcement this week, and that is the release of ChatGPT Atlas, their agentic AI browser.
It is currently available only on Mac, but there are plans to release it on Windows, iOS, and Android, basically anywhere you would want to use it.

(25:30):
Now, I shared with you in the past few weeks that OpenAI is going all in on global domination: they want to be the app that controls it all.
They basically want ChatGPT to be your gateway to everything, both personal and business, and this is just another clear step in that direction.
And if they can get a significant percentage of their 800 million weekly users to switch over from using, most

(25:52):
likely, Chrome, or any other browser, to using their browser, that puts them in command across multiple aspects: a lot of knowledge, a lot of data, and being the gateway to more or less everything users do, either at work or at home, when shopping, for leisure purposes, and so on.
Sam Altman has said multiple times, and this is a recent quote: we think AI represents a once-in-a-decade opportunity to rethink

(26:15):
what browsers can be, how to use one, and how to most productively use the web.
And I agree: AI will completely transform how we use the web.
We will most likely not browse websites.
We'll have an agent, our agent, doing the work for us, most likely talking to many other agents that handle specific tasks, and it will just get us the answers, or order for us, or research for us, or do everything else we do with the web today. It will all

(26:37):
work in a very different way, and whoever controls the browser will control that entire environment.
Now, if you think about how Google took over the world, it's through three different aspects.
One: they control search, basically how you get to information.
Two: the actual interface, which is the browser itself.
And three: the hardware, with Android phones and laptops.
And this is exactly the play that OpenAI is trying to run

(26:59):
right now.
They already control the interface, ChatGPT, with 800 million users; they now have their own browser; and they're releasing their hardware sometime next year.
So this is going to be head-to-head competition with everything Google is doing right now.
An interesting aspect of this browser is that, in addition to knowing your browsing history, it is also connected to the ChatGPT memory, meaning it has a lot more context about who

(27:21):
you are, what you're trying to do, what your expertise is, and what you're looking for, and it will be able to respond in a more specific way than other browsers can, because of your daily or regular usage of ChatGPT.
And going back to how they're framing it, Fidji Simo, their CEO of Applications, said: over time we see ChatGPT evolving to become the operating system for your life,

(27:43):
a fully connected hub that helps you manage your day and achieve your long-term goals.
This is her way of saying: we want to control your life, or we want you to do everything in your life through our lens, through our interface.
And as I mentioned, they're doing everything they can to move in that direction.
That being said, there's already a lot of competition in the agentic browser universe, with tools like Dia and Neon and

(28:04):
Comet and Strawberry and Arc and so on.
I truly believe that within the coming few months, more and more people will shift to using these agentic browsers.
I have zero doubt that Google is doing everything they can to completely revamp Chrome in order to make it agentic, and hopefully lighter and easier on the computer as well, but the

(28:26):
game is definitely on.
Now, as far as the browser itself: I didn't have enough time to play with it much yet, only a little bit.
I must admit that, comparing it to other agentic browsers I've been using for several months, the first thing I can say is that it feels a lot more like using ChatGPT than like using a browser.
And I think they did that on purpose.
I think they will try to merge all their different environments into a single one, meaning ChatGPT itself will be

(28:50):
the browser, and the browser will be ChatGPT.
The app on your phone or on your computer will be the same tool, and it will do all the different things, meaning that's going to be your universe for browsing, searching, doing research, chatting, analyzing information, shopping, everything in one user interface.
I will not be surprised if, sometime in the near future, when you're using ChatGPT in a different browser, you
(29:12):
start seeing popups orsuggestions for you to switch to
their browser for some kind of abenefit.
Either additional features thatwill not be available on regular
ChatGPT unless you switch totheir browser because they will
have to find ways to forcepeople outside of their habits
of using Chrome or any otherbrowser and they have 800
million users to convert.
And I think finding ways to dothat through offering them

(29:34):
additional functionality, capabilities, bandwidth, or whatever the case may be, to move them over from Chrome to the full ChatGPT experience, is something they will most likely do.
And again, I will be surprised if it doesn't happen in the relatively near future.
And the one company that should be the most worried about all of this is obviously Google.
Because if OpenAI can convert even, not 800 million, but 500

(29:54):
million users to start using ChatGPT as their gateway to the data of the world, Google is going to take a hit, and their stock is going to take a gigantic hit, even if the initial hit to Google's business is relatively small.
Now, staying on OpenAI: they had a pretty big snafu this week.
OpenAI VP Kevin Weil tweeted that GPT-5 solved 10 unsolved Erdős problems and

(30:16):
advanced 11 others.
Now, a quick background: what are Erdős math problems?
Paul Erdős was a mathematician who posed many statements he believed to be true, but that nobody, including himself, was able to actually prove.
And so, per this tweet, GPT-5 was able to find answers to ten of them and make progress on 11 others.

(30:38):
Only, that is not exactly the case.
Thomas Bloom, who runs a website that maintains these problems and shares everything he finds about them, basically said that a problem listed as unsolved on his website doesn't mean it's unsolved; it just means that he is not aware of a solution.
And all GPT-5 did was find existing references to problems

(30:59):
that have actually been solved; it didn't solve anything on its own.
Or, as Thomas Bloom himself called it: it's a dramatic misrepresentation of what actually happened.
Now, this led to widespread mocking of OpenAI for this post.
Yann LeCun posted a few different things, Google DeepMind's Demis Hassabis labeled it embarrassing, and obviously many other people

(31:20):
joined in.
Now, to be fair, GPT-5 and the latest Gemini 2.5 Pro are both very capable at math; they both achieved gold-medal-level results at the International Math Olympiad, so they are good at math.
There's no need to claim they're doing things they're not actually doing.
So, not a very smart post from OpenAI in this particular case.
Now, some good news from OpenAI this week: OpenAI has changed the way the memory feature works, and they're

(31:42):
introducing automated management of memory.
If you are like me and you feed the memory on purpose, so it knows more about you and becomes more contextually aware of your universe, your company, what you're doing, and so on, you know that it eventually hits the memory-full issue, and then you have to go in manually and pick and delete things from memory on your own.
Well, they now automatically prioritize conceptually top-of-mind memories based on recency and

(32:05):
frequency.
So if it's something you did only once, six months ago, it is going to drop it from memory.
And if it's something you talk about a lot, or use ChatGPT for a lot, or just did recently, it will put more focus on that.
I think that makes a lot of sense.
You can still do everything you did before from a control perspective: review, reprioritize, delete, revert, and turn memory off completely if you

(32:28):
want to, just like you could before.
But now there's a more active function as well, helping you keep the most relevant things in memory.
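OpenAI hasn't published how this prioritization works, but as a mental model, here is a toy sketch of recency-and-frequency scoring. The half-life and weighting below are made-up numbers, purely illustrative, not OpenAI's actual algorithm.

```python
import math
import time

def memory_score(last_used: float, use_count: int,
                 half_life_days: float = 30.0) -> float:
    """Toy recency+frequency score; higher means keep the memory.

    Illustrative assumptions only: a 30-day half-life decay for recency
    and log-weighted frequency with diminishing returns.
    """
    age_days = (time.time() - last_used) / 86400
    recency = 0.5 ** (age_days / half_life_days)  # decays as memory ages
    frequency = math.log1p(use_count)             # diminishing returns
    return recency * frequency

# When storage fills up, keep the top-scoring memories and evict the rest:
# memories.sort(key=lambda m: memory_score(m.last_used, m.use_count), reverse=True)
```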
Now, speaking of ChatGPT's number of users, their growth across different sectors, and their competition with Anthropic: recent research from Deutsche Bank has found that European spending on ChatGPT has stalled since May.

(32:50):
Now, it didn't stall completely; the rate of growth has just declined dramatically.
The growth rate of ChatGPT spending in May was almost 10%, and now it's down to about 1%.
So it's still growing, just very, very slowly compared to how it was growing earlier this year.
Now, is that cause for worry for ChatGPT?
Probably.
We need to remember that the European Union has a lot more

(33:10):
limitations on the usage of these tools than the US or anywhere else in the world.
That puts a limit on some of the use cases, and not just for ChatGPT, for other AI tools as well.
Is this a temporary slowdown?
Is this the canary in the coal mine telling us the same is coming in other places around the world?
I don't know, but if I learn anything new, I will obviously keep you posted.
Now, from OpenAI to their biggest partner, which is Microsoft.

(33:33):
Microsoft just released their own image generator.
It is currently available on several different platforms, and it is being tested on the LMArena chatbot arena, where it's actually doing pretty well: it's currently ranked number nine for image generation.
Microsoft is planning to implement this new image generation tool, labeled

(33:54):
MAI-Image-1 (MAI for Microsoft AI), in the Copilot and Bing image creation environments very soon.
This is their very first image generator and the second model overall that they're releasing to the public, and, combined with their recent partnership with Anthropic, it is obviously part of their effort to reduce their dependency on the OpenAI models.
Another interesting piece of news from Microsoft, going back

(34:15):
to their dependency on OpenAI: they just announced that the Copilot composer menu now has video generation capabilities powered by OpenAI's Sora 2.
So the latest video model from OpenAI is now available in the Copilot composer environment.
Free Copilot users can generate one Sora 2 video per day, while Pro subscribers have unlimited access.
What does that

(34:36):
mean?
Is it really unlimited?
I don't know.
It would make zero sense to me if it were truly unlimited, because these videos are very, very expensive to generate, but that is what the Microsoft announcement said.
They also added a new shopping tab in the Copilot sidebar, which tells you that they're doing exactly what OpenAI is doing.
They want to keep you on their platform, in the Copilot ecosystem, for everything that you're going to do, and they

(34:57):
would like to provide you with all the tools to do that, so you don't go anywhere else.
This is the new frontier, right?
Being the app that controls everything we do in our connection with the data of the world, whether it's shopping, working, browsing, learning, experiencing, creating videos, et cetera, et cetera, all under one umbrella.
Everybody will fight to be that platform.
And from Microsoft to Meta.

(35:18):
Meta is rolling out parental controls for teens' AI interactions, starting in early 2026.
These will allow parents to disable one-on-one AI chats entirely, or block specific bots or specific types of AI content.
They will also give parents insight into the topics their teenage kids are discussing with AI, without giving

(35:41):
them full access to the exact conversations.
Meta has stated that over 70% of teens have used AI companions, with 50% using them regularly.
That is a very high percentage.
But that being said, you've got to remember that Meta has integrated AI into everything on their platforms, so whether you want to use AI or not is not really your choice.
That basically means that 70% are using Instagram

(36:04):
regularly, and so on.
It doesn't really mean they set out to use AI companions; they just use the platform, and this is built into it.
And speaking of that, Meta just closed the door on external AI chatbots being available on WhatsApp through the business API.
So far, tools like ChatGPT, Perplexity, Luzia, Poke, and others have been available on WhatsApp.

(36:24):
You can talk to ChatGPT on WhatsApp today, but as of January 2026 you will not be able to, because Meta is going to block these tools from being available through that platform.
Now, Meta's excuse is that it burdens their systems and that this wasn't the intent of the business API: the business API was for companies to provide answers to their clients, not to power a general chatbot.

(36:48):
But the reality is that these tools compete with Meta's own, and Meta can block them.
So I don't really understand why they're even apologizing or trying to excuse it; it's very, very obvious what they're doing.
And to be fair, they have the right to do it if they want.
Now, staying on Meta: we haven't reported any negative news about the new superintelligence team for a few weeks now.
Well, this week they announced that they're letting go of 600

(37:09):
positions on the recently established superintelligence team.
This came in an internal memo from Alexandr Wang, who runs the team.
The way he framed it, and I'm quoting: fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact.
So, will this department eventually fall into place and

(37:29):
do amazing things?
Maybe.
So far, putting this department together does not look like a well-organized, well-strategized, well-executed process.
That being said, again, it's Meta: they have lots of money, they have compute, and they now have a lot of really smart people they poached from other labs.
So time will tell, but as I mentioned, it's not a good sign when you let go of 600 people from a department that

(37:50):
you established less than six months ago.
And the last piece of news about Meta: they just joined forces with Blue Owl Capital in a $27 billion joint venture to build their largest data center so far, in rural Louisiana.
It is going to be a massive, massive project, obviously meant to keep them competitive with OpenAI, Anthropic, and Alphabet and their insane

(38:12):
investments in compute.
Construction of this new data center is supposed to wrap up by 2030, so it will be a while before it goes live.
To put in perspective how big this entity is going to be: the local utility company, trying to estimate how much power it will require, says it will consume twice the electricity of New Orleans on a peak day.

(38:33):
Now, a few interesting announcements and partnerships from companies we usually do not cover, because they're not in the news regularly, or at least not on these kinds of topics.
Oracle is unveiling a set of new AI-driven features across its cloud infrastructure, applications, and analytics.
Multiple AI agents, including improvements to their AI Studio, as well as an agent marketplace and other

(38:56):
capabilities, are going to be deployed across Oracle's infrastructure and tools, and will be available to anyone using Oracle products, including older systems such as PeopleSoft.
How exactly that will happen will be very interesting to see, because PeopleSoft has been almost impossible to even integrate with, it's so archaic.
But apparently they found ways to run agents on top of

(39:16):
PeopleSoft as well, so I'll be very curious to see exactly what they're doing there.
In the new marketplace they announced, several really big players are already offering agents, including Accenture, Deloitte, and IBM, among others.
Specifically, IBM released several agents, including a smart sales order entry agent, a requisition-to-contract agent,

(39:38):
and a few others.
IBM is also developing HR and supply chain agents to run on this Oracle marketplace.
In addition, Oracle plans to incorporate IBM's Granite AI models into their cloud infrastructure, to give its users more options.
So this is a partnership between two of the largest IT companies

(39:59):
in the world starting to offer more and more joint AI solutions, which is obviously good news for the many, many companies that run on this infrastructure.
Now, speaking of IBM: IBM just made a very interesting announcement this week that they're entering a strategic partnership with Groq, with a Q.
We have spoken about Groq several times in the past on this podcast.
Groq is developing a new type of hardware that they call LPUs, language processing units, which,

(40:23):
unlike GPUs, are not built for training models.
They're built purely for inference, which is when you actually use the AI: generating the tokens, generating the outputs.
And they run five times faster than current GPUs at a fraction of the cost.
So IBM is going to bring that infrastructure into the IBM environment for the benefit of everybody using IBM

(40:46):
infrastructure.
One of the biggest benefits is for environments like healthcare that require immediate, near-real-time responses, because now (a) the response will come significantly faster, and (b) it will be significantly cheaper.
You'll be able to run multiple very large queries very, very fast, at a reasonable cost.
If you haven't seen Groq work, I highly recommend trying it out.
You can run a prompt on ChatGPT and

(41:09):
then run the same exact prompt on GroqCloud, where you can pick between different models, and entire pages appear instantly.
It's like magic.
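If you want to feel that speed from code rather than the web console, here is a minimal sketch using Groq's OpenAI-compatible endpoint. The model name below is a placeholder assumption; check their documentation for what is currently hosted.

```python
# Minimal sketch: calling an OpenAI-compatible inference endpoint.
# Assumes the `openai` Python package and a GROQ_API_KEY env var.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible API
    api_key=os.environ["GROQ_API_KEY"],
)

response = client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # placeholder; pick any currently hosted model
    messages=[{"role": "user", "content": "Summarize why inference speed matters."}],
)
print(response.choices[0].message.content)
```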
So I think, again, that's a very interesting move by IBM, and I'll be surprised if other companies don't move in the same direction, not necessarily just with Groq; there are other suppliers who do similar things.
The idea is to provide cheaper, faster inference in addition to the

(41:30):
GPUs that they have: use the GPUs more for training, and these inference chips more for delivering AI.
Now, I know the next piece of news is not specifically related to AI, but I cannot go through a news episode about technology this week without mentioning the AWS outage that happened on October 20th.
About 30% of the internet was impacted, or at least the
Western Hemisphere Internet wasimpacted by this outage and many

(41:54):
different websites with eitherdown or struggling, including
banking, gaming applications,Snapchat, and many other
platforms over impacted by AWS.
So why is it interesting in thispodcast as an AI podcast?
So let me explain.
The cause for this was a simpleinternal glitch, not even an
external attack, which is alwaysanother option, and it has put

(42:14):
many, many companies down for several hours.
Now, think about what that means once we start running companies on AI.
It's not just an application anymore; AI is also replacing a lot of human functions.
It means that if you're building systems where AI is actually doing the work, you must think early on about redundancy.

(42:34):
You must have more than one large language model in the backend.
You must have real-time failover to a different model if one fails.
Otherwise the actual operation of the business, not just the way to access data or the website, may come to a halt, because these AI-operated steps become specific bottlenecks.
And if you don't have a plan, this might be a very, very bad

(42:56):
day for you and your company.
So if you are developing these kinds of tools, and if you are becoming dependent on AI for some functionality, I definitely suggest you take that into account.
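Here is a minimal sketch of that redundancy idea, assuming two hypothetical provider wrappers. The point is the pattern, ordered fallbacks with a loud failure at the end, not any specific vendor's SDK.

```python
# Minimal failover sketch: try providers in order, fall back on error.
# call_primary / call_backup are hypothetical wrappers around whichever
# SDKs you actually use; retry and timeout policy are up to you.

def call_primary(prompt: str) -> str:
    raise NotImplementedError("wrap your primary provider's SDK here")

def call_backup(prompt: str) -> str:
    raise NotImplementedError("wrap your backup provider's SDK here")

PROVIDERS = [("primary", call_primary), ("backup", call_backup)]

def generate_with_failover(prompt: str) -> str:
    errors = []
    for name, call in PROVIDERS:
        try:
            return call(prompt)  # first provider that answers wins
        except Exception as exc:  # outages, network errors, rate limits
            errors.append(f"{name}: {exc}")
    # Every backend failed; surface the whole chain for your alerting.
    raise RuntimeError("all LLM providers failed: " + "; ".join(errors))
```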
Now, speaking of Amazon: Amazon revealed something really, really cool this week.
They started providing their delivery drivers with smart delivery glasses, designed specifically to enhance the safety and efficiency of their delivery associates.

(43:18):
They are basically like the smart glasses Meta just announced, only built specifically for delivery.
They have a display, a camera, and audio, and they allow drivers and delivery people to stay focused on what they need to do while getting all the information they need, such as navigation, information about the packages, guidelines on where to drop them, et cetera.

(43:38):
Many functions they need to perform, with assistance from AI, delivered straight into these glasses.
I expect this to happen more and more across multiple industries.
Think about people working in warehouses, on assembly lines, and so on.
There are so many amazing applications for these glasses, and as they become more capable and cheaper, I am certain we will see them everywhere.

(44:00):
Now, speaking of hardware and how it can impact our world: Google just made a very big announcement on their blog, claiming another milestone in their quantum computing capability.
They say their Willow quantum chip has achieved a verifiable quantum advantage, outperforming classical supercomputers by a factor of 13,000.

(44:20):
Think about the best computers that exist in the world today; this thing ran the task 13,000 times faster than them.
Now, this is not operational yet, but it is a step toward quantum computers that can actually provide reliable, consistent results.
What does that mean for the AI world?
Well, remember what we discussed in the beginning, in the interview with Andrej Karpathy,

(44:41):
about all the different things that have to happen to achieve major progress.
Well, the two biggest bottlenecks of AI right now are compute and power.
If quantum computing becomes available, and hopefully nuclear fusion becomes available, we solve both of these problems, and that is going to put us in a completely different realm of capabilities when it comes to AI implementation.

(45:01):
Whether that's good or bad is a whole other conversation, and we touched on it at the beginning of this episode, but it will open up significantly more possibilities.
Staying on the hardware topic, and on the glasses topic: Alibaba just released their version of smart AI glasses at a price point that competes with Meta's smart glasses.
For $659 you can pre-order these glasses, which run

(45:23):
on the latest and greatest hardware and the latest and greatest Qwen 3 models, which are extremely powerful.
So Alibaba is also aiming to combine everything, hardware, access, platform, shopping, all of it, into their own universe, just on the Chinese side of the world.
And staying on hardware: two very interesting pieces of news about robotics.

(45:45):
The first came on the Q3 Tesla earnings call, where Elon Musk referred to his request for more control over Tesla's robots.
Musk expressed his demand, or his wish if you prefer, to have more control, or as he put it, influence, over the army of Optimus humanoid robots that Tesla will build.

(46:06):
I'm quoting what Musk said: my fundamental concern with regards to how much voting control I have at Tesla is, if I go ahead and build this enormous robot army, can I just be ousted at some point in the future?
If we build this robot army, do I have at least a strong influence over this robot army?
Not control, but a strong influence?

(46:28):
I don't feel comfortable building that robot army unless I have strong influence.
What does that mean?
I'm not exactly sure.
I am somewhat terrified by this sentence.
It raises a lot of questions: who actually controls these robots, what are they allowed and not allowed to do, and what kind of control (or, call it whatever you want, Elon: influence) will the developers of these robots have?

(46:51):
And if they're going to be in every home, every street, every restaurant, every company, can somebody just take control of them one day and do something different with them, whether intentionally or by mistake?
That leads me back to what I said at the beginning about Asimov's three laws of robotics.
I pray that somebody will figure out how to put something like that in place.
By the way, despite all the delays and issues that Optimus

(47:13):
has, a production-intent prototype is slated for February or March of 2026, which is when they will start production.
The initial plan was to deploy 5,000 units this year.
That didn't happen, but it's not a very big delay, especially when you compare it to previous Tesla commitments.
And the final piece of news for today: a startup from Beijing called Noetix Robotics is pre-selling

(47:37):
the world's cheapest humanoid robot.
It is really tiny, just three feet tall, so just short of one meter, and they're pre-selling it for $1,370, straight into the holiday season.
So I expect an explosion of sales for this robot.
It comes with several basic skills, but also with a user-friendly visual programming language that kids can use, as

(48:00):
can STEM lessons.
So, a very interesting approach: they're not targeting one specific industry.
It can only run for about one or two hours, but it could be a really interesting toy, or a really great way to teach robotics in different scenarios.
So, a different approach, and a very different price point, than everything we've seen so far.

(48:21):
The cheapest robot I knew of until now comes from another Chinese company, Unitree: their R1 robot, which is highly capable, and which they're currently selling for $6,000, still very cheap compared to most other robots.
There is a lot more news from this week, and it all appears in our newsletter.
So if you want to know what else happened this week, including some very significant funding rounds for both well-known and lesser-known companies, go check out the newsletter.

(48:42):
You can sign up through the link in the show notes.
If you are enjoying this podcast and finding it valuable, please share it with other people.
Click the share button now, think of a few people who could benefit from it, and share the podcast with them.
It'll take you five seconds, you'll be helping other people, and I will be really grateful if you do.
And if you're on Apple Podcasts or Spotify, I would appreciate it if you left a review as well.

(49:04):
That also helps more people learn about AI, so jointly we can increase the chances of landing on the better side of a potential future with AI rather than the wrong side.
We'll be back on Tuesday with a fascinating how-to episode, this time about Weavy. So if you want to learn how to create the most incredible visual assets at scale, including images and videos

(49:25):
based on your brand guidelines and on specific processes, don't miss this episode.
It is going to blow your mind.
That is it for today.
Have an amazing rest of your weekend, and we'll talk again on Tuesday.