
Could your job—or your entire company—be replaced by a million AI-powered robots by 2027?

This week’s episode of Leveraging AI is packed with breakthroughs that business leaders can’t afford to ignore. From Google's jaw-dropping AI showcase at Cloud Next to sobering predictions of superintelligence spinning out of control, we cover the AI headlines shaking up the business world.

Plus, we unpack the rise of autonomous agents that can not only execute tasks—but build other agents on the fly. Sound like sci-fi? It’s not. It’s this week’s news.

Whether you're planning your next AI strategy or simply trying to stay ahead, this episode is your edge.

In this session, you'll discover:

  • Why Google’s Gemini ecosystem may quietly be overtaking OpenAI and Anthropic
  • The chilling prediction that AI will become uncontrollable by mid-2027
  • What "Super Agents" are—and why they're rewriting how work gets done
  • Why top business leaders are requiring AI use before approving headcount (yes, really)
  • How AI agents like Ava and Aria are coming for sales jobs… and outperforming humans
  • The truth about alignment risks, and why AI may soon be thinking 50x faster than you
  • The latest on ChatGPT memory, Grok 3, and Meta's mind-blowing 10M token context window
  • Why Microsoft is stepping away from bleeding edge models—and why that may be genius
  • How companies like Block are saving 8–10 hours per engineer per week using AI agents
  • The bold new moves from Canva, Amazon, and Shopify that signal what’s coming next

💡 Ready to go deeper? Check out the AI Business Transformation Course starting May 12 — and use code LEVERAGINGAI100 for $100 off.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hello and welcome to a weekend news edition of the Leveraging

(00:03):
AI Podcast, a podcast that shares practical, ethical ways
to leverage AI to improve efficiency, grow your business,
and advance your career.
This is Isar Meitis, your host, and every week I'm hoping
there's gonna be some slowdown, a week when we won't
have a lot to report. But this is not going to be that week: this week
there is an explosion of stuff for me to share with you.
Our main topics are going to be everything that Google shared in

(00:25):
Cloud Next 2025, the big event they just held this week. We're
going to talk about a doom and gloom projection of where AI is
going by 2027.
We're going to talk about major progress with AI agents, and
then we're going to dive into multiple rapid-fire pieces of
news, including a lot of new and interesting releases from the
biggest players and small ones as well.

(00:46):
So let's get started.
Google held its Cloud Next event this past week and, as expected,
it was huge, packed with AI-related announcements.
If you go back a year and a half ago and a year ago on this
podcast, when Google's response to the ChatGPT moment was, I'll

(01:09):
be gentle, embarrassing.
I said that it's not a good idea to bet against Google in the AI
space, because they have everything they need in order to
win this race.
They have the capital, they have the data, they have the human
capacity, they have and can attract the best minds on the
planet.
They have the compute capabilities, the distribution,
the tools, everything you need in order to be successful in

(01:31):
this space.
And they're the ones that invented the transformer that
actually led this entire revolution.
So they have years and years of research from before even the ChatGPT
moment.
So it was very obvious to me back then that they would potentially take
the leading position, and it seems that this
is the direction it's going.
If you've been watching the Chatbot Arena chart in the past

(01:53):
few months, you see that Google is on top at number one, and
sometimes at both number one and number two, most of the time. You know
that they now have the best video generation tool with Veo,
and so on and so forth.
And I think slowly they will take over more and more
components and more and more capabilities in which they're
going to be the leading force in the world, and they will have the
ability to combine them all together into unified

(02:15):
environments.
And this is more or less whatwe've seen at Cloud next.
It was a spectacularpresentation of their ability to
combine multiple capabilitiestogether, all under the Gemini
umbrella.
And I'm gonna go very quickly onsome of the things that they've
shared during the conference.
First of all, they introducedIronwood, which is their seventh
generation TPU, which is theirAI chip that is now built

(02:37):
specifically for inference, which will allow test-time
compute, these reasoning models, to run more effectively.
And it is going to be available on Google Cloud later this year.
Compared to the prior generation, it offers five times more peak
compute capacity and six times the high-bandwidth memory
capacity.
And overall, they're saying it's producing 10x faster results on

(02:58):
every process that it's doing.
Which is obviously very impressive, and potentially will
present an alternative to Nvidia GPUs.
But they've also introduced the whole creative side of this,
which was really interesting and impressive.
So they introduced Lyria, which is a new addition to Vertex AI.
That's their new music generation model.
So now, in combination with Imagen and Veo 2, which by

(03:19):
the way is getting a new version, and the ability to generate
music now allows creators in Vertex to create anything they
want.
Voice, images, videos, speech, music, basically any creative
thing you wanna produce, you can now produce in one environment
within Vertex.
This is nothing like anybody else has, and that is obviously
very powerful.

(03:41):
They're claiming that some of the introduction videos that
they had, in the transitions to each of the speakers, were done
with the new version of Veo 2, and it's absolutely mind
blowing.
It's worth watching the first few seconds of every
presentation that was done there, just to see what was done.
I'm sure that required a lot of editing and post-production and
stuff like that, but even if just components of this were
produced with Veo 2, it's incredible.

(04:02):
They're introducing a lot of new tools to Gemini in Workspace,
bringing more and more capabilities into Docs and
Sheets, and Meet and Chat, and all the stuff that is available
in Workspace.
I must admit I'm using more and more of Gemini within the
different tools.
I'm just praying for the day that it's all gonna come
together, and it's not gonna be Gemini for Docs and Gemini for
Sheets and Gemini for Drive, but actually just Gemini that will

(04:24):
know everything that I'm doing, including my emails and
everything.
I guess we'll have to wait a little longer for that.
I really thought we'd be there by now, but I guess that's
more complex than I thought.
They also introduced some additions and improvements to
Agentspace, which is their platform that allows companies
and enterprises to develop agents.
They announced that they have a growing AI agent marketplace,
where you can go in and see what partners of Google have

(04:46):
developed and are making available, and you can buy
agents from different people.
This is a new vector of the economy, if you want: companies
that will develop these agents, and you'll be able
to go and buy them as an enterprise and use them within
the Google ecosystem.
They also unveiled what they call the Agent Development Kit, or
ADK (versus an SDK in traditional software), which is
an open source framework for building agents while

(05:08):
maintaining control over their behavior.
And they also introduced A2A, or Agent2Agent, which is a
new open source protocol that gives agents the ability to talk
and collaborate with other agents, regardless of the vendor.
So if you think about the huge success of MCP in the past few
months, and we talked about this multiple times, it became a huge
success because it is open source and it allows agents to

(05:31):
connect to multiple data sources.
MCP, by the way, is from Anthropic; this new framework is
from Google.
And this new framework is targeting the ability of agents
to talk to other agents.
With the amount of hunger there is for this kind of capability
right now, I assume this will catch on at the same speed as MCP
did for connecting to data and tools.
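The episode doesn't go into the protocol's actual message format, so purely as an illustration, here is a minimal sketch of the idea: one agent handing a task to another vendor's agent over a shared, vendor-neutral message shape. Every field name below is hypothetical and does not come from the real A2A specification.

```python
import json

# Hypothetical agent-to-agent task handoff (illustrative only, NOT the A2A spec).
# The sending agent describes a task; the receiving agent acknowledges it.
task_request = {
    "task_id": "task-001",
    "from_agent": "research-agent",
    "to_agent": "report-writer-agent",
    "instruction": "Summarize the findings into a two-page report",
}

def handle_task(message: dict) -> dict:
    # A receiving agent from any vendor can parse the shared shape
    # and return a status update keyed to the same task_id.
    return {"task_id": message["task_id"], "status": "accepted"}

# Serialize/deserialize to mimic the message crossing a vendor boundary.
reply = handle_task(json.loads(json.dumps(task_request)))
print(reply["status"])
```

The point of a shared protocol is exactly this: neither agent needs to know how the other is implemented, only the agreed message shape.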

(05:52):
They also shared a new unified security solution called Google
Unified Security, which brings together all their security
capabilities across products: threat intelligence, security
operations, cloud security, and secure enterprise browsing.
So they're building a whole suite of security tools around
their AI infrastructure, which makes perfect sense if they want
to see this successfully deployed across multiple
enterprises. And they're expanding the access to their WAN

(06:16):
solution, their wide area network, from being just for
internal use across Google facilities to being available to
enterprises, for significantly faster capabilities, services,
and search across everything Google.
So bottom line, Google is showcasing why they are the
leading developer of AI solutions in the world, and how
their AI cloud solutions are going to be at the forefront of

(06:39):
AI capabilities.
And that's obviously in fierce competition with AWS and Azure.
So it'll be interesting to follow, but definitely this was
a very clear statement by Google on how they see themselves in
this race.
Now, with all of this exciting news, every time I see one
of these events, I always think of Skynet from the Terminator
movies.

(07:00):
There seem to be people who agree with me, who actually know
a lot more than me, and who have taken the time to actually put
down how they see the trajectory of AI moving forward
and what its impact might be on the world.
So, two AI experts,
Scott Alexander, who's the author of Astral Codex Ten, and
Daniel Kokotajlo, who's a former OpenAI researcher, have created a

(07:20):
website called AI 2027, which shows a timeline from now till
2027 and what they think might happen during that timeframe.
Now, I found it through the Dwarkesh Podcast.
For those of you who don't know, Dwarkesh Patel is a brilliant interviewer, and
he has a podcast where he has access to basically anyone in
the world, and he does a lot of in-depth technical interviews
with a lot of interesting people.
So he brought these two people in for an interview on his podcast.
I'm warning you about two things.
One, it's a three-hour interview, so even at one and a
half speed you need quite a long drive. But also, it's
very scary and uneasy to listen to, and in some cases a little
too technical, but I will summarize most of it for you.
So, they're trying to portray how superintelligence will develop

(08:07):
and what might be its impact on the world.
The key concept that they're talking about is that once we
hit superintelligence, and AI agents can build AI agents, or
AI systems can build new AI systems, we get to an
intelligence explosion.
Basically, a self-sustained feedback loop that keeps
generating better and better AI, completely out of our control,

(08:30):
and they're claiming we're gonna get to that point around mid
2027.
Now, the way they've done this is they have identified several
milestones that are required in order to get to that point, and
they try to estimate the time it will take to reach each and every one
of these milestones.
Hence the website, which is called AI 2027.
And you can go and look at it; it's actually really, really cool.

(08:51):
Extremely well done, beautiful graphics, charts on the side,
and it has a timeline on the left.
And as you're scrolling, you can kind of see where you are in the
timeline and how we might hit these milestones.
In addition to the agentic world, they're also claiming
that about a year after hitting superintelligence, the
superintelligence itself will allow us to build production capacity

(09:12):
to generate a million robots per month.
That sounds completely sci-fi to me, but a lot of things in AI
sound completely sci-fi to me, which doesn't necessarily make
them wrong.
A million robots per month means that very, very quickly
you can take over, more or less, any task and any job in the
world, because you're generating 12 million robots every single

(09:34):
year.
Each and every one of these robots can do the job
of at least one person. That shows you how quickly it's going
to change the world as we know it.
Now, they're saying that once we hit the point where these
AI tools can run 50 times faster than humans at cognitive
tasks, what it basically means is that they can do in one week

(09:56):
the work a human does in an entire year.
That's basically what 50 times faster means, and you can spin
up as many of these as you want, as long as you have the compute
power.
So the combination of these two things shows you how quickly
things can get out of hand.
This basically means that even if it does not accelerate beyond
that point, and it just gets to the 50x speed,

(10:18):
it means that in one year it can generate the innovation humans
would generate in 50 years.
That is obviously something we cannot even grasp, and they're
saying we're gonna get to that point in 2027, two years from
today.
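The arithmetic behind these claims is simple enough to sketch. The numbers here (a 50x speedup and a million robots per month) come straight from the episode's summary of the AI 2027 scenario, not from me:

```python
# Back-of-envelope math for the claims quoted above.
SPEEDUP = 50                      # AI runs cognitive tasks 50x human speed

# One week of AI work equals this many weeks of human work:
human_weeks_per_ai_week = 1 * SPEEDUP   # 50 weeks, roughly one working year

# One year of AI work equals this many years of human innovation:
human_years_per_ai_year = 1 * SPEEDUP   # 50 years

# Robot production claim:
robots_per_month = 1_000_000
robots_per_year = robots_per_month * 12  # 12 million robots per year

print(human_weeks_per_ai_week, human_years_per_ai_year, robots_per_year)
```

So both figures in the episode (one week equals a year, one year equals 50 years) are just the same 50x multiplier applied at different time scales.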
Now, one of the main concerns that they're raising is what
they're calling an alignment crisis: basically, that our

(10:40):
ability to align, or if you want, control and understand the
superintelligence is not going to work.
There's actually a really fascinating conversation about
this by Yuval Noah Harari, who is definitely a doomer when it comes to AI,
and he's saying that the relationship between us and AI
right now is that we're an adult and the AI is a three-year-old,
and we can control it and we can try to understand what it's doing,

(11:02):
and we can easily identify when it's trying to fool us.
But he's saying that in the very near future it will be the other way
around: the AI will be the adult and we will be the
three-year-old in our ability to think and understand, and it
will be able to manipulate us in multiple ways that we can't even
understand.
Just like an adult can easily manipulate a three-year-old.
And so this is where we're going, and this is why I think

(11:23):
that the research that we talked about several times on this
podcast, by Anthropic, to try to understand the inner workings of
the AI quote-unquote black box, is so critical for us to be safe
from AI in the future.
The question is, is that enough?
And is it moving ahead fast enough compared to
the speed at which AI is moving?
Now, they're talking a lot in this timeline about the AI arms

(11:44):
race between us and other nations, mostly China.
And they're saying that the incentive to get there first
incentivizes cutting corners and generates significant
safety risks, especially in alignment research.
The scenario also talks about China stealing some US
secrets to accelerate their side of the process, what they're

(12:04):
calling Agent 2.
So there are these different levels of agents that you can get to,
and stealing Agent 2 allows them to accelerate their
development.
I must admit that from everything that I'm seeing from
China recently, I don't necessarily think they need to
steal anything from the US.
They're very, very close behind, if behind at all, the US's
development of AI capabilities.
But either way, this is not painting a positive future of

(12:25):
where the AI race is going.
That being said, they themselves admit that they're not sure that
this is the scenario that's going to happen.
Even Scott Alexander himself is saying, and I'm quoting: I'm not
completely convinced that we don't get something like
alignment by default, meaning that the models would behave
well just because that's the way they behave.
But the thing is, we don't know.
And the truth is, nobody knows.

(12:46):
And that's the really scary thing about this whole thing.
They're also discussing the potential nationalization of AI
development, and that there's a very serious problem in the
decision whether it should be nationalized or run by private
companies.
And now I'm quoting Daniel Kokotajlo, who said: I would
summarize the situation as, the governments lack the expertise
and the companies lack the right incentives.

(13:07):
And so it is a terrible situation. And I tend to agree.
So what is the bottom line?
The bottom line is, this is one possible scenario, right?
I don't know.
And nobody knows if that's the scenario that's going to happen.
We might have full control.
Everything might move a lot slower.
There might be hurdles that will slow it down, whether from
government, or from people, or even from specific industries.
But the reality is, it's a possible scenario, and that

(13:31):
possible scenario is coming in two
years, meaning they're claiming that within two years from
today, mid 2027, AI will be able to spin up new AI that is so
powerful that we have no ability to control it.
And it will also be able to generate physical AI robots at a
speed that nobody thought would be possible at that

(13:51):
point in time.
And my goal here is obviously not to scare you, but just to
paint potential versions of this future. There's obviously the
flip side of that.
If you listen to Dario Amodei, he paints this future of abundance,
where everything is good and we solve world hunger and there are no
diseases, and so on and so forth.
Where is the future heading?
I'm not sure.
I must admit I am probably more on the doom side with everything

(14:14):
that I'm seeing, and the speed at which I'm seeing things
moving, and the lack of control.
Do I think it's as doom and gloom as AI 2027 presents?
I don't think so, but I really do think that we all need to
work together to educate each other and to find ways to
control this before it's too late.

Speaker (14:31):
We talk a lot in this podcast about the importance of
AI training.
Multiple research studies from leading companies have shown that this is
the number one factor in the success of AI deployment in businesses
large and small.
I'm excited to announce that we just opened the registration for
our spring cohort of the AI Business Transformation course.

(14:51):
I've been teaching this course for two years, starting in April
of 2023, and hundreds or maybe thousands of business leaders
have gone through the course.
We had people in the recent cohort, which ended in February,
from India, the Emirates, several different countries in
Europe, South Africa, many places in the US, Canada, and
even Hawaii.
So regardless of where you are in the world, this could be a
great opportunity for you.

(15:12):
In previous courses, we had people as far away as Australia and
New Zealand, so weird hours of the day, but still getting a lot
of value from this course.
The course is four sessions of two hours each, spread over four
weeks, on Mondays at noon Eastern time, starting on May 12th.
If you are looking for ways to accelerate your personal
knowledge and career, or to change the trajectory of your

(15:35):
team or your entire business, this is the right course for
you.
It is really a game changer, and within four weeks and only eight
hours, with some homework, you will dramatically change your
understanding of how to use AI in a business.
We give multiple hands-on examples and use cases, and teach
you the tools and the processes for how to use them.
And we end with a detailed blueprint for how to actually

(15:56):
implement AI successfully business-wide.
So if this is interesting to you, go and check the link in
the show notes.
You can open your phone and click on it right now, and go and
check all the information about the course.
And because you are a listener of this podcast, you
can get a hundred dollars off the price of the course with
promo code LEVERAGINGAI100.
I would love to see you join our course in May.

(16:17):
And now back to the episode.
Now, within those lines of doing it in a safer way, DeepMind
released a 140-page paper that discusses the safe development
of AGI.
They're predicting that AGI could arrive by 2030, and that
it could support drug discovery, personalized learning,

(16:37):
democratize innovation, enabling small organizations to tackle
big challenges, and address multiple big problems in the world.
Those of you who have been following Demis Hassabis, who's the
CEO and co-founder of DeepMind, know that his goal in life
is to solve really big problems for the human race and the
world. And them putting out this paper, which is called An Approach
to Technical AGI Safety and Security,

(16:57):
the goal of it is very, very clear: it's telling the world,
this is where we're going, but this is how it needs to be done
right in order to reduce the risks.
And the risks that they're highlighting are misuse,
basically people using AI to do bad things in the world, and
misalignment, basically what we just talked about, when the AI
decides to do its own thing and goes rogue versus listening to the
humans and what we want it to do.

(17:19):
And then accidents, basically situations where AI
generates negative outputs by mistake.
And they're pushing for more robust training on the safety
side, monitoring at a different level than we have right now,
and human-in-the-loop checks to prevent harm at every step along
the way. They're also calling for strong regulation, because,
and I'm quoting Shane Legg, who's the chief AGI scientist,

(17:42):
he's saying this will be a very powerful technology and it can
and should be regulated.
And they're pushing and urging for global cooperation on that
aspect.
The problem with this is that they're claiming that we have
time until 2030, and you have other experts who are claiming
this may be coming this year or next year, or, as we heard,
superintelligence by 2027.
So the timeline that they're predicting might not be

(18:04):
aggressive enough to stop whatever needs to be stopped, if
it needs to be stopped, in time.
To make things even more interesting, or complicated, or
scary, depending on your point of view, a new MIT study that
was published on April 9th finds that AI models lack a coherent
value system.
Now, that's good and bad, because on one hand, they lack a value
system.
That sounds really scary.
But on the other hand, the research found that they lack

(18:26):
human-like priorities, like self-preservation, which is something
it was actually assumed they do have.
That was the biggest thing that they found.
They researched multiple models from Meta, Google, Mistral,
OpenAI, and Anthropic, and they were testing them across multiple
views to see whether they are fully aligned or not compared to human
values and other guidelines, and here is what they found.

(18:48):
And I'm quoting Stephen Casper, who is an MIT doctorate
student and one of the co-authors.
He said: for me, my biggest takeaway is to now have an
understanding of models as not really being systems that have
some sort of stable, coherent set of beliefs and preferences.
In other words, despite all the training and the alignment and
everything that we're trying to do, they do not have the same

(19:11):
core values that drive humans to make decisions and take action.
And that is a very scary thing, especially with the level of
acceleration that's happening right now.
Now, let's combine this with everything that's happening in the
agent world.
There is a lot of agent news, and we've talked about this
many times: 2025 is the year of AI agents.
We talked last week about Emergence AI and their new

(19:32):
announcement that their system now allows AI agents to build AI
agents on the fly.
So basically you give it a task, it understands what the task
needs to be, and then it creates subagents that will do
the subtasks that need to be done in order to complete the main
task that you gave the agent. And their CEO, Satya Nitta, has
said the following: this is a step towards AI that evolves

(19:52):
itself.
This is literally what they're saying that they're doing.
They're not trying to hide it; from their perspective,
that is the goal.
Combine that with multiple new AI agents running in browsers
and controlling them.
And we'll give several different examples in this
episode.
The first one comes from Opera, the browser company.
They just introduced Opera Browser Operator.
It's another AI browser controller, but very different

(20:14):
from OpenAI's Operator and the Claude solution.
This is built into the actual browser. OpenAI and
Claude are actually taking screenshots every second and
trying to analyze a screenshot to see what's happening in order
to decide what to do next and where to move the mouse.
This is built into the actual browser itself, which allows it
to be significantly faster and more accurate with everything

(20:34):
that it's doing.
Now, there are goods and bads in the fact that it's not
image-based, but is actually looking at the browser. As I
mentioned, it will allow it to run faster, with significantly less
lag.
It also allows it to reduce vulnerabilities, because the
browser actually understands what's actually happening.
But it still opens the door to complete misuse of your
browser to do, well, anything the agent decides to do.
Staying on the topic of agents and their potential impact.

(20:57):
A new interesting startup called Artisan is developing an AI
sales agent.
That's what they're developing.
They just raised a $25 million Series A, but the big deal that
made them really known is that their ads literally say:
stop hiring humans.
That's what the ad says.
That's obviously very controversial, but the goal and
the direction is very, very clear.

(21:17):
They want to replace human salespeople.
They're claiming that their flagship product agent, Ava, now
only hallucinates in one email out of 10,000 emails that it
answers.
That's significantly less than humans, right?
So you're saying, oh, it still hallucinates.
It's one in every 10,000.
Well, people make mistakes when they write emails as well, and
I'm sure they make more mistakes than one in every 10,000.

(21:39):
And they're now coming up with new AI agents: one is called
Aaron, for inbound messages, and the other is called Aria, for
meeting management.
And the three are obviously going to work together in tandem
to do, well, everything salespeople need to do.
Let's keep going.
A new release was made by Palo Alto-based Genspark, which just
launched what they call Super Agent, which is an AI system

(21:59):
that autonomously handles tasks, like anything in the browser.
They are basically doing what Manus from China is doing, and
they were even able to score a higher score on the GAIA
benchmark, which tests these autonomous agents: Manus
scored 86% and they scored 87.8%.
For those of you who don't know what Manus is, we talked about
this, but these are fully open, do-everything agents that are

(22:23):
really general in their purpose.
It's not geared towards being a sales agent or something else.
And they can do everything from browsing, to writing reports, to
creating webpages, to writing software, and combining all of
it in a multi-step process.
I've seen two mind-blowing demos this week of things that Genspark
can do, and it literally blew my mind.

(22:44):
In one of them, it created a website that allows you to enter
specific information and get outputs for specific vendors of
a specific company.
And this tool doesn't work in seconds like a GPT does.
It actually takes hours to work sometimes, but it can spin up
these amazing solutions.
The other demo was requesting it to create an ebook that can be used
as a lead magnet for a specific target audience.

(23:05):
And it went and created this amazing ebook.
It's beautiful, it's designed well.
It has graphs and charts and the right color scheme, and it's
very visually appealing.
It looks completely professionally produced, which
means it looks like the data is real.
I don't think anybody will test it, which is part of the problem,
because these tools now are gonna generate these really
high-end products.

(23:26):
I really doubt anybody will go back and test whether the data
behind them is actually accurate.
Because what the tool did in 55 minutes may take you two days to
test, and you just won't do it.
You'll just believe the information that is in there.
And that is another very scary aspect of this whole thing.
But this is where we are going with agents.
This Super Agent connects to nine different large language

(23:47):
models and 80-plus tools that it can use to do the things that it
is doing.
And to tell you how far behind the government is, connecting the
dots to what was said by the authors of AI 2027: at the
ASU+GSV Summit this week, which is potentially the
biggest education summit in the world, the US Secretary of
Education repeatedly called artificial intelligence A1

(24:09):
instead of AI, meaning she doesn't even consistently say
the term correctly when she's speaking about this.
Now, she might have been excited and maybe making mistakes, but
the fact is that she made that mistake multiple times. I've
used the combination of the two letters AI probably thousands
of times in the past two years, and I did not say A1 even

(24:30):
once.
So what does that tell us, or hint at?
It tells us that the people who are driving some of the biggest
and most important decisions of our generation, including how AI
will be integrated into the education system, which is a
huge opportunity and a huge risk,
do not understand basic concepts of what AI is and what its
implications might be, while industry is running at full
steam ahead and keeps on accelerating every single day.
So where do I stand on this?
As I said, on the scale between complete optimism and complete
doomers, I'm probably 75% into the doomer side, but I still think
we have a chance to get this right if everybody understands
where we're going, and we work together to educate, push, and

(25:14):
sound our concerns to the right people, to potentially slow this
madness down and get to the positive results while reducing
the risks.
This is one of the main reasons why I'm doing this podcast: in
order to help educate you, so you can take your role and make your
own decisions on where you think this is going, so you can take
your own action to try to help get to the right outcome versus

(25:36):
the wrong one.
Okay, and now to rapid fire.
There's a lot of rapid fire to talk about.
First of all, Meta just released Llama 4. This could have
been a whole full item by itself, probably an entire
episode by itself.
They actually released two models, called Llama 4 Scout and
Llama 4 Maverick.
These models are fully multimodal, meaning they have

(25:58):
text and video and images and audio all in one seamless model,
meaning it's not calling submodels. It's actually all in
one, the same model, which gives it a lot of powerful
capabilities.
But maybe the most incredible thing they released is that
Llama 4 has a 10 million token context window.
So for those of you who don't know what a context window is, a
context window is the amount of data that can stay consistently

(26:19):
in one chat, meaning beyond the context window, the AI starts
getting confused and forgetting things that it's talking about,
and basically losing coherent thought about the thing that you
were talking about.
And the largest model we had so far was the experimental version
of Gemini 2, and then Gemini 2.5.
Not the ones that they're releasing in the regular Gemini
app, but the versions that you can get in Google AI Studio, which have 2

(26:40):
million tokens. And now Llama 4 has a 10 million token
context window.
This is 5x
the largest model out there, and that is 50x
the next best model, which is Claude. And then ChatGPT
only has a 128,000-token context window.
So that kind of gives you an idea of the amount of data; this

(27:01):
is roughly 15,000 pages.
Think about a big book.
A big book has about 400 pages.
This can fit 15,000 pages of text in a single chat without
losing the ability to retrieve and understand and reason
against that data.
That changes literally everything, because companies can
put their entire data in the context window, or their entire

(27:24):
software code in the context window, and still work with it in a coherent way.
Like all the Llama models, they are open source, giving developers cost-effective alternatives, as well as the ability to massage and edit the actual language model and how it works.
And Meta also previewed what they call Llama 4 Behemoth, which, as the name suggests, is a 2-trillion-parameter model

(27:45):
that they're currently working on.
The main goal of it is to allow it to train smaller models significantly faster.
As you probably remember, we've said several times that Meta committed $65 billion to AI infrastructure in 2025 alone.
And it is very, very clear that they're all in on their AI efforts, and Llama 4 is a very good testament to where that money is

(28:06):
going.
I personally cannot wait to test it out, and I think this long context window is going to be very appealing to many enterprises, especially since it's open source and you can run it on your own servers.
Another interesting announcement from this week: on April 9th, xAI, Elon Musk's AI company, launched the API for Grok 3.
I've been using Grok 3 for a while now.

(28:28):
I really like it.
I actually use it to produce this news episode every single week, but it did not have an API connection.
Now it does, and it's priced at $3 per million input tokens and $15 per million output tokens.
And they also released Grok 3 mini, at 30 cents for input and 50 cents for output.
At this price point they are aligned with Anthropic's Claude 3.7, and slightly more expensive than Google's Gemini 2.5 Pro. And

(28:52):
users on X noted that the API has a 131,000-token context window limit, versus xAI's claimed 1 million tokens.
There has been no comment from xAI on this so far.
I will keep you updated on where that stands, but that's another API that is now available that anybody can use to develop solutions.
We're going to talk about Gemini 2.5 Pro later in this
We're gonna talk about theGemini 2.5 Pro later in this

(29:13):
episode.
Now, in addition to all the big news from Google Cloud Next that we started with, Google also announced that they are going to support MCP in its Gemini universe.
I'm quoting Demis Hassabis, the CEO of DeepMind: MCP is a good protocol and it's rapidly becoming an open standard for the AI agentic era.
So MCP, I think, is more or less a settled fact.

(29:36):
Everybody's gonna use MCP.
It is taking over the AI world, and now Gemini will be able to connect to it as well.
As you remember, just in late March, OpenAI announced that they're going to support MCP.
So now you have the three leading development companies in the world, at least in the Western hemisphere, Anthropic, who released MCP, and then OpenAI and Google with Gemini, all

(29:57):
supporting this architecture.
Now, Google, who just recently released Gemini 2.5 Pro, now offers it with no rate limit, pricing it at $1.25 per million input tokens for prompts up to 200,000 tokens, and $10 per million output tokens.
That's cheaper than both Claude's and OpenAI's APIs.
And they have slightly higher pricing for over 200,000 tokens,

(30:20):
which is $2.50 for the input and $15 for the output.
But it's still competitive, and it allows you to use its 1-million-token context window, which until this week was the largest context window available through an API.
And now, as I mentioned, Llama 4 has a much bigger context window.
That being said, Google has said several times in the past that in their testing they're running it with 10 million tokens.
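As a sketch of how that tiered pricing works out in practice, here is the math on the rates quoted above; treating the 200,000-token threshold as flipping the whole request to the higher rate is my simplifying assumption, not something Google's docs are being quoted on here:

```python
# Rough cost estimate for the Gemini 2.5 Pro API rates quoted above:
# $1.25/M input and $10/M output for prompts up to 200K tokens,
# $2.50/M input and $15/M output beyond that. Treating the threshold
# as switching the whole request to the higher rate is a simplification.
def gemini_25_pro_cost(input_tokens: int, output_tokens: int) -> float:
    if input_tokens <= 200_000:
        in_rate, out_rate = 1.25, 10.0   # dollars per million tokens
    else:
        in_rate, out_rate = 2.50, 15.0
    return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * out_rate

# e.g. a 100K-token prompt with a 10K-token answer:
print(f"${gemini_25_pro_cost(100_000, 10_000):.3f}")  # $0.225
```

The point of the sketch is just how cheap long-context calls are at these rates: even a full 200,000-token prompt costs a quarter of a dollar on the input side.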

(30:42):
So it'll be interesting to see whether they open up that limitation as well, just to stay competitive with Llama.
They're pushing this very aggressively, mostly for developers, so I assume they will try to stay ahead in the context window race as well.
The other thing that Google announced this week that is really important and interesting: as of April 9th, Google Deep Research is running on Gemini 2.5 Pro Experimental, which is

(31:06):
their most advanced model.
And I must admit, in my recent tests it's actually showing better results.
I was not too impressed with Google's Deep Research before, and I think the new version gives it a big benefit, because now a reasoning model is running behind it, and not just any reasoning model, but the best-performing reasoning model in the world right now, based on the Chatbot Arena. And based on

(31:28):
human raters, who have preferred Deep Research results coming from Gemini over OpenAI's results by a 2:1 ratio.
What does that tell you?
It tells you that we're going to get better and better research capabilities for cheaper and cheaper from these companies, and I think we'll very quickly get to the point where it doesn't matter which one you choose.
Actually, it already doesn't matter which one you

(31:50):
choose.
It's going to do better research than most humans at a fraction of the time.
They've also added a cool feature where you can now create a podcast from the deep research output, similar to the feature that exists in NotebookLM.
So you can listen to the output of the research, which you could have done anyway by simply copying the output and dumping it into NotebookLM.
But it's cool to have it in one unified tool.

(32:12):
As we mentioned at the beginning, Google is all in on taking over the AI world, and they're taking a lot of the right steps in the right direction.
In an interesting development when it comes to releases of new models, OpenAI announced on April 4th that they are going to release o3 and o4-mini in a couple of weeks.
So if you remember, when they released o3-mini, they said it was going to be the last standalone reasoning model, and that the

(32:33):
next one was going to be GPT-5, which would merge all the different models into one universe, with the model itself knowing how to do and what to do and when to do it.
But that's apparently more complicated than they thought.
Sam Altman said the following on X: we're going to be able to make GPT-5 much better than originally thought.
He mentioned integration challenges and demand concerns, and hence he's saying it's only going to come in a few months.

(32:55):
So to stay in this crazy race, they're going to release o3 and o4-mini sometime in the next few weeks as their most advanced reasoning models.
Another interesting announcement from OpenAI: on April 10th, OpenAI rolled out a memory feature, so ChatGPT can now reference past conversations to tailor responses to your particular

(33:19):
individual needs.
This is already available for all Pro and Plus subscribers, and probably Enterprise as well.
It is a setting called "reference saved memories" that allows ChatGPT to use prior chats in order to give you better and more personalized answers across all the different aspects, so text, voice, and image generation, potentially reducing

(33:40):
the amount of instructions and information you need to give ChatGPT in order to get consistent results.
For some of us, like me, that might be very problematic, because I use the same ChatGPT account to support multiple clients and the different businesses that I'm running, so it probably thinks I'm schizophrenic.
But for most people, who use ChatGPT wearing the same hat most of the time, this is going to be extremely helpful.

(34:02):
This feature is not available in the UK, EU, Iceland, Liechtenstein, Norway, and Switzerland due to different regulatory issues.
So if you are in one of these countries, I apologize; you will have to wait, or maybe never get that capability.
Still on OpenAI: they have been in discussions to potentially acquire io Products.
For those of you who don't know io Products, io Products is a

(34:23):
company founded by Jony Ive, the legendary Apple designer who designed many of the favorite products that we all use and love from Apple.
And some of his financing came from Sam Altman himself.
So OpenAI is now potentially looking to acquire the startup for half a billion dollars, $500 million, in order to accelerate

(34:46):
their development of physical, wearable AI products, and not just stay in the software world.
This news also mentioned that they're considering a partnership instead of an acquisition.
And it will be very interesting to see where this goes, based on the fact that Sam Altman is a major investor in that company, and based on the fact that they just raised $40 billion that will allow them to invest in whatever they want.

(35:08):
I think it is very likely that this will move forward one way or the other.
And now to some interesting developments in the battle between OpenAI and Elon Musk.
Twelve former OpenAI employees filed a brief on April 11th supporting Elon Musk's lawsuit to block OpenAI's transition from a nonprofit to a for-profit, arguing that it actually betrays

(35:30):
its mission to prioritize humanity's safety over profits.
In their brief, those 12 ex-employees claim that OpenAI's nonprofit governance is vital to ensure that artificial general intelligence benefits everyone, and not just the shareholders.
They're also raising safety concerns, warning that the for-profit model could prioritize profits over safety.

(35:52):
Now, if you remember, I shared with you in the past that several nonprofit and labor groups also joined the claim against OpenAI and their transition to a for-profit entity.
And a significant portion of the money that OpenAI raised, including the very recent $40 billion raise, depends on their ability to convert to a for-profit entity, in the particular recent

(36:12):
raise, by the end of this year.
So this is a very short timeframe for them to do this, or they lose tens of billions of dollars, which I assume they don't want to lose.
So this battle continues.
On the flip side of that, probably because of the urgency, OpenAI just filed a countersuit against Elon Musk on April 9th, accusing him of a malicious campaign to sabotage

(36:34):
its business.
It was filed in a California federal court, and it seeks to block Musk from further, and I'm quoting, unlawful and unfair action to prevent them from doing the conversion and being competitive in the AI race.
The lawsuit cites emails showing Musk once pushed for OpenAI to become a for-profit organization, and for him to be

(36:54):
the CEO, contradicting his current stance against its commercialization.
Not to mention the fact that he's now running a lab that is competing with them and that is commercializing AI.
I don't know where this is going to go, but there's definitely a lot more gasoline being poured onto that fire.
And the last interesting piece of news from OpenAI: free-tier users can now generate images with the ChatGPT image

(37:14):
generator.
They are capped on how many they can generate every single day, but they at least have access to this very useful feature that took the world by storm and added millions of users to the platform in just one week.
From OpenAI to ex-OpenAI: Thinking Machines Lab, the company founded by ex-OpenAI CTO Mira Murati, is apparently aiming to

(37:35):
raise $2 billion, which is double its initial target of only $1 billion, and may make it the largest seed round in history.
They're going to do this despite the fact that they have no product or any revenue in the near future.
And they're raising this amount of money just because of the

(37:55):
elite team they were able to put together, including Mira Murati herself and researchers like Bob McGrew, the former OpenAI Chief Research Officer, and Alec Radford, one of the innovators behind some of OpenAI's breakthroughs.
So do I think this makes any sense? No.
This is similar to what Ilya Sutskever is doing: again, another huge name in the AI research field that already raised $2 billion.

(38:17):
So how much return do I think investors are going to see from a $2 billion investment in a company that has no clear product or roadmap to revenue?
I don't know.
I think it's complete insanity.
But this is the world we live in: you bring the right names into a company, and apparently you can raise any amount of money you want.
And from OpenAI to another current, and soon-to-be former, AI

(38:37):
partner: Microsoft.
In an interview with CNBC, Mustafa Suleyman, the CEO of Microsoft AI, shared that their focus is going to be developing what he calls off-frontier AI models.
Basically, he's saying that they're not going to try to chase OpenAI and develop better models than them, but will instead stay at the level of models that are three to six months behind,

(38:58):
but custom-tailoring them to specific business use cases.
That being said, he said, and I'm quoting: it's absolutely mission critical that long term we are able to do AI self-sufficiently at Microsoft.
But he also noted that the OpenAI partnership remains vital, at least until 2030.
What does that tell us?
It tells us that Microsoft cares more about actual business use

(39:20):
cases for the people who use Microsoft products than it cares about being at the tip of the spear.
I think that makes perfect sense.
This aligns nicely with the news that we heard recently of them stopping the development of many new data centers.
I think it's a very smart decision by Microsoft to focus on actual use cases.
This is what I do in my courses.
This is what I do with my clients.

(39:40):
It's never about having the best model; it's about having the best implementation of a model to solve an actual problem that you have in your business, and being able to accelerate or do better in specific things.
Hence, I think Microsoft is making a good decision in this particular aspect.
Connecting the dots back to the beginning of this episode and the ability to take actions within a browser:

(40:01):
Microsoft just unveiled Copilot Actions, which enables the AI copilot to perform web-based tasks like booking tickets and making reservations.
Surprisingly, their first partners are actually more in the B2C realm, including Booking.com, Expedia, Kayak, TripAdvisor, Skyscanner, Vrbo, OpenTable, and 1-800-Flowers.com.
So they're focusing on travel, dining, gifting, et cetera,

(40:24):
versus enterprise applications.
I assume that's where they're going to go next, and I assume this is the path of least resistance and least risk.
I don't know how many people will jump on that, but it will be interesting to follow.
But Copilot has this capability as of now.
Now, if you compare that to two of the tools we talked about in this episode already, Manus from China and Genspark from

(40:44):
California, this is a pretty lame capability, because it's limited to browsing and engaging with these specific websites, where those other two, more general, agents can do, well, everything that a human could do and a lot more, because they can write code and do stuff that most humans cannot do.
But it aligns well with what Microsoft said: they want to address specific tasks and specific needs rather

(41:05):
than be at the tip of the spear.
Staying on Microsoft: they released an interesting research study that found that AI models, including the top-of-the-line Claude 3.7 and OpenAI's o3-mini, are really bad at solving software bugs.
So while they're getting better and better at writing code, both these models solved less than half of the 300 debugging tasks

(41:27):
in the SWE-bench benchmark.
And the reason they're claiming this is happening is that there's no good way to train them.
It's relatively easy to train them to write code, because you can just show them code that works.
But when it comes to understanding what's not working, what they cannot replicate is the thinking process of top experienced developers, because that data doesn't exist.
It actually happens in the developer's brain, and there's

(41:49):
no step-by-step process to train them on.
And they're claiming that the development of such data is necessary in order to take that next step in debugging software.
Staying in this universe of debugging and writing code: GitHub announced on April 4th that Copilot code review is now available to all paid subscribers.
It was previously just a preview, and now any paid

(42:11):
user can get a code review on the fly as they're developing code, or on a scheduled cadence.
It currently covers C, C++, Kotlin, and Swift, with HTML and TypeScript coming soon.
So if you are a developer in the GitHub environment, you'll be able to do code reviews on your own with AI, without waiting for your team lead or somebody else to do your code review.

(42:32):
I assume this is maybe going to be just a preliminary step in most companies, and then there will still be an actual human code review, but it will definitely accelerate the code development process in every company that's using this tool.
And now to Amazon.
Amazon just launched Nova Sonic, which is a full suite of voice AI capabilities.
It includes speech-to-speech, text-to-speech, speech-to-text, literally anything you want.

(42:54):
All in one unified AI tool.
And they're claiming that this unified environment can model not just what humans say, but also how people say it.
So it understands the nuances of how people speak, and it can respond accordingly.
Now, it's priced 80% lower than GPT-4o's voice capabilities through the API, and it's already available on

(43:15):
the Amazon Bedrock API.
This is obviously a huge advancement that will enable anybody who wants to create voice agents in healthcare, customer service, et cetera, to build them through an API that, per its creators, is very, very powerful and cheaper.
And from Amazon to Anthropic: Anthropic just launched Claude Max, which is a premium subscription that has a hundred

(43:36):
dollars tier and a $200 tier per month, directly competing with OpenAI's ChatGPT Pro tier, which is $200 a month.
It offers higher usage limits compared to the $20 Claude plan: the $100 tier offers five times higher usage than the $20 tier, and the $200 tier offers 20x higher usage limits compared to

(44:00):
the $20 tier.
It is an interesting approach, because the OpenAI Pro tier claims unlimited access to the most advanced capabilities.
Also, they're promising that subscribers of these higher tiers are going to get first dibs on new models and features that get released, such as the upcoming voice mode.
And from Anthropic themselves to a very good testament to the code-writing capabilities of Anthropic's agents: Block, the

(44:24):
company behind Square and the very successful Cash App, is deploying more and more of its Goose coding agents, which run on Claude.
They're saying that 4,000 of its 10,000 employees are now going to be using it, double the number from just one month ago.
They're also claiming that every engineer using it is saving eight to 10 hours every single week.

(44:45):
If you do quick math on 4,000 people at even just eight hours a week, that means they're saving, or if you want, accelerating their development by, roughly 32,000 hours every single week.
That is a very, very big claim to make.
And again, it's showing you how fast this world is moving.
They doubled from 2,000 people to 4,000 people.
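For what it's worth, the arithmetic behind that savings claim can be checked directly; the only assumption I'm adding is a standard 40-hour work-week:

```python
# Quick sanity check on Block's claimed savings:
# 4,000 engineers each saving ~8 hours per week (per the episode).
engineers = 4_000
hours_saved_per_engineer = 8              # hours per week
total_hours = engineers * hours_saved_per_engineer
work_weeks = total_hours / 40             # assuming a 40-hour work-week

print(total_hours)   # 32000 hours saved per week
print(work_weeks)    # 800.0 -- the equivalent of 800 person-weeks, every week
```

In other words, on their own numbers Block is reclaiming the output of roughly 800 full-time engineer-weeks each calendar week.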

(45:07):
What Goose allows you to do, in addition to writing code if you're an engineer, is to spin up mini applications on your own without knowing how to write code.
So you tell it what you need to do in your work and what can help you do it faster, and it will create that mini application for you without the need to write code.
I must admit that this week I did a lot of work in Replit, and I'm going to record a separate episode about it, but I was

(45:27):
blown away with how capable Replit is at creating applications without me writing any code.
I don't know how to write code, but as I mentioned, I'm going to record a whole separate episode about this.
But I totally see how any employee in the company who can spin up whatever tools, capabilities, and applications they need in order to do their job faster can dramatically change the way companies work.

(45:47):
As long as it maintains the defined guardrails and the safety and security components that need to be there for an enterprise-level solution.
Now, back specifically to Block: they aim to save 30% of employee time by the end of 2025, shifting the focus from coding to innovating, per them, as they're trying to push more customer-facing AI products and

(46:08):
not just internal capabilities.
Now, if you can save 30% of the entire company's employee time, and you cannot grow by at least 35%, it usually means one thing: they are going to let people go.
Now, if they can grow faster because of that, that is fantastic, and I really hope that's the direction this is going to go.
Staying on the same topic and the impact of AI on the

(46:30):
workforce and jobs in the world.
Shopify CEO Tobi Lütke now mandates that employees prove tasks cannot be done by AI before requesting new hires.
In a memo that was shared on X, Lütke wrote the following: before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using

(46:50):
AI.
He also stated that using AI is a fundamental expectation for all 8,100 employees of the company in their daily work.
Now, Shopify's headcount dropped from 8,300 to 8,100 at the end of 2024, after a 14% cut in 2022 and a 20% cut in 2023.
So you can see a very linear trend line of reducing the

(47:13):
number of people who work in the company while pushing a more AI-focused approach.
I think this is going to happen in most companies around the world.
And as I've mentioned many times before, I don't think we are ready for that as a society or as an economy: when many, many people in high-paying jobs become unemployed, it means the economy as a whole stops.
And then the fact that specific companies can, in theory, make

(47:34):
more money becomes irrelevant.
Because if nobody's going to buy things, because they don't have money, there are going to be fewer Shopify stores, and then there's less of Shopify.
And then this whole effort is actually going to yield negative results for them and for everybody else.
But I don't see this stopping.
I literally see this as an inevitable future that we'll have to deal with and figure out as humans on this planet.

(47:55):
There is a lot more news that we couldn't get to this week; it will be available in our newsletter, and there's a link in the show notes for you to sign up and learn even more stuff that we couldn't get to in this episode.
But I will end with one last release that is very interesting to me, and that is from Canva.
Canva, the visual design tool, has now announced Visual Suite 2.0 at their Canva Create event.

(48:16):
And they're integrating things that were never a part of Canva before, including the ability to write code and create applications, and to create spreadsheets, analysis, and data presentations from those spreadsheets.
So they presented features called Magic Insights and Magic Formulas, which basically allow you to dump your information into Canva and generate amazing visualizations to present your

(48:37):
data properly.
It does not require any coding skills; you just need to speak your ideas, and it will generate them for you.
This is a very interesting move by Canva.
If you remember, I said that the move by OpenAI, with its ability to now generate images and designs and everything that you want, would dramatically hurt Canva.
So this is Canva growing in more directions than just design, knowing that design alone is at risk.

(48:58):
That's a very interesting and smart move by Canva.
I don't know how many people are going to use Canva as a spreadsheet or a coder, but maybe they will; they have 240 million active users.
So they have a huge user base that may actually shift, or at least some of them may shift, to doing everything in Canva, versus doing some of it in Canva, some of it in Google, and some of it in Microsoft.
A very interesting move, and it'll be very interesting to see.

(49:20):
That's it for this week.
Don't forget, our next AI Business Transformation course cohort starts on May 12th.
If you are looking for a structured way to accelerate your practical AI knowledge, in a way that can dramatically impact your career, your company, and your team, you should join this cohort.
We do the open cohorts only once a quarter, so the next one will probably happen in September, and you do not want to wait until

(49:43):
that timeframe.
Keep on exploring AI, keep on sharing what you learn, and if you're enjoying this podcast, share it with other people and give us a review on Apple Podcasts or on Spotify, and have an awesome rest of your weekend.