Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker (00:00):
Hello and welcome to the Leveraging AI Podcast, a
(00:02):
podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career.
This is Isar Meitis, your host.
I personally had a really crazy busy week. This week I started with an AI workshop for a very large, well-known tech company in San Francisco. Then I did another workshop in the middle of the week for one of my long-term clients in San Clemente, California.
(00:24):
And then on Friday I did an AI workshop here in Florida for an organization that runs CEO round tables, for a group of 20 to 30 CEOs. So I had a very, very busy week. But it was also the week when the entire AI world went bananas. And to be specific, nano bananas.
So those of you who haven't heard about Nano Banana, we're
(00:44):
gonna talk about that. We're going to talk about significant impacts to the economy and the workplace, with new information from some very large entities. We're going to talk about safety and security, new findings and issues from this past week that are alarming and interesting at the same time. And then we have a long list of rapid-fire items, including people departing from Meta, new interesting releases,
(01:06):
And much more.
So let's get started.
So what the hell is Nano Banana, and how does that have anything to do with AI? Well, last week a new model showed up on the LMArena chatbot arena, on the image generation leaderboard, called Nano Banana.
(01:26):
Nano Banana took it by storm, going to number one on almost every aspect of image creation and image editing. Everybody went crazy speculating about who was behind it. Well, now we know: Google just announced that it's their model, and they launched it under the name Gemini 2.5 Flash Image, which is not nearly as good as the name Nano Banana, which I
(01:47):
think is probably what's gonnastick, at least for now.
But it is an extremely capable model that basically does everything all the other models do, only better, with some additional capabilities that did not exist before, which are absolutely mind-blowing. And it is a game changer across multiple aspects. So let's talk about what it can do.
The first thing it can do: it can blend several images
(02:08):
together. So you can take an image of a superhero and an image of your face, ask it to put your face on the superhero, and it does it perfectly. You can take an image of a car and place it in a scene, and it will do it perfectly, and so on and so forth. It knows how to not just blend things together, but actually control the perspective, the angle, the lighting, everything, to make it look like a perfect image.
(02:29):
It is incredible at maintaining consistency of people and items across multiple aspects, meaning you can take an object, take an image of it, and then ask to look at it from different directions, from different perspectives, at different zooms, place it in people's hands, place it on top of things, inside things, and so on, and it keeps perfect consistency.
(02:50):
This is something that we didn't have before. The closest thing we had to that was ChatGPT's image generation model, which is not bad, but Nano Banana is absolutely perfect. That means that if you're selling any products, you can generate images of these products in any scenario you want, perfectly, from any angle, and generate as many of them as you need for any need that you have.
It also is very good at prompt-based editing, meaning you can
(03:13):
upload an image, either a real image or an AI-generated image, and ask it to blur the background, remove stains, alter poses, change the lighting, add text on top of it, remove different things from the image, or even take old images that are blurry and pixelated and enhance them, just with a simple prompt.
It also has an amazing understanding of the world and
(03:34):
physics, which means it understands the 3D environment that is in the picture. You can take a picture of a room and a picture of a sofa and ask it to place the sofa along the right wall, and it will place it perfectly, as if it's there, which is amazing if you're trying to put new furniture in your house and you wanna see how it looks.
Because it is part of a large language model, it also
(03:55):
understands context. You can give it stuff that you've written by hand, or flow charts, and it will generate 2D or 3D diagrams from them as needed, all with absolute accuracy, and it's absolutely amazing.
Google included SynthID, which is their own technology, a digital watermark that, with a separate tool, allows identifying any image it generated as AI-generated, which I think is very, very smart.
(04:16):
I really hope there's gonna be a law that will require anybody to use either SynthID or another technology that will allow us to easily know what is AI-generated or edited and what is not.
And this new model is now available on all of Google's API platforms, as well as on OpenRouter, Fal.ai, and many other platforms that integrate and aggregate image generation capabilities.
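For listeners who'd rather script this than use the Gemini app: below is a minimal sketch of what a request body for the image model could look like over Google's generateContent-style REST API. Treat it as an illustration, not the definitive integration: the exact model string, endpoint, and any API-key handling are omitted or assumed here, and you should check Google's current API docs before using it.

```python
import base64
import json

def build_image_edit_request(prompt: str, image_bytes: bytes) -> dict:
    """Build a generateContent-style body: a text instruction plus an inline image.

    The contents/parts/inline_data layout follows the Gemini REST schema;
    base64 is required because JSON cannot carry raw bytes.
    """
    return {
        "contents": [
            {
                "parts": [
                    {"text": prompt},
                    {
                        "inline_data": {
                            "mime_type": "image/png",
                            "data": base64.b64encode(image_bytes).decode("ascii"),
                        }
                    },
                ]
            }
        ]
    }

# Hypothetical use: the exact edit from the Make.com story later in the episode.
body = build_image_edit_request(
    "Change the background to black and the font to white",
    b"\x89PNG...",  # placeholder; in practice, the raw screenshot bytes
)
print(json.dumps(body)[:60])
```

You would POST this body to the model's generateContent endpoint with your API key; the same shape works through aggregators that proxy the Gemini API.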
And these amazing capabilities to generate consistent images of
(04:39):
characters translate very well to videos, because most of the advanced video generation tools start with an image of a person or a thing or a scene. And the problem was: how do you generate multiple aspects of the same person to continue the scene from different angles while maintaining consistency? Well, that problem is now solved, and the internet is basically exploding with examples of images created and/or
(05:02):
edited by Nano Banana, or videos that are created based on images that were created or edited by Nano Banana. And it's absolutely mind-blowing.
Now, I tested it across many cool use cases, and I'm probably gonna share an episode specifically about Nano Banana this coming Tuesday, just to show you how to use it and what it can do. But I'll give you a very simple use case that really blew my mind,
(05:22):
because it was something that was impossible for me to do, because I don't use Photoshop. And now I was able to do it in seconds with just one line of prompt.
So in one of my workshops this week, I did an introduction to workflow automation with AI, and I took screenshots of Make.com. The background in Make.com is kind of white, not exactly, it's kinda like light beige, and the text next to the nodes is in black.
(05:44):
All my slides have a black background, so if I remove the background, the text disappears. Also, the graphics inside the nodes are the same color as the background, so they disappear as well. So it's very hard to actually see the graphics and to understand what each node does.
So that forced me to look for a solution, and the solution was not found until I tried Nano Banana. All I did is I dropped the screenshot into Nano Banana and
(06:06):
I asked it to change the background to black and the font to white, and it did it. So the same exact flow that I had built in Make now appears in the new version with a black background and white letters, which is something that would've taken a good Photoshop user probably a minute to do, maybe two, but it took me one sentence, and I don't know how to use Photoshop.
So the combination of the abilities that Nano Banana
(06:29):
enables is absolutely incredible, and it raises a lot of questions. The first question: is it the death of photo shoots? If you can take the actor that was supposed to be in the photo shoot, or create one out of thin air, and then put any clothing you want on them. That could be yourself, by the way, so you can see how specific clothing is gonna look on you by combining the two images. But I'm talking about the brands
(06:52):
themselves. Why would you ever do a photo shoot if you can create multiple angles of your exact product, in any angle, in any scenario, in any lighting, for any storytelling need that you have, in seconds?
The other question: is it the death of Photoshop? Like, why do I need to Photoshop anything if I can literally say with a few words what I need and it will make it happen? Is it the death of TV and movie studios?
(07:14):
If I can now generate the initial point and other points in the story for any scene, from any angle, accurately, and then I can just generate the videos from there, why do I need TV and movie studios? So I think the answer is: not yet. But it's definitely a big, big step in that direction.
Why am I saying not yet?
First of all, the resolution of these images is not good enough
(07:36):
yet. So if you are doing this professionally, the resolution of these images is not gonna be enough. But that being said, many AI tools today allow you to upscale images amazingly well. So that problem can be solved with AI already, even if it's not built into the process yet.
On the movie generation side, some of the movie generation tools are not perfect at character consistency in the
(07:57):
video itself, but we need to assume that if that was solved on the image level, there's absolutely no reason why it's not gonna be resolved on the video level. So just think about using a tool that is actually not bad at consistency, such as Veo 3, and creating the entry points or key frames using a tool like Nano Banana, and you're getting pretty close to what you actually need.
(08:18):
Small improvements in video, especially in length and resolution, and those problems go away as well.
To me, the biggest missing thing, which I also said when ChatGPT launched their updated image generation tool, is layers. Now, what we're seeing from all these AI models in the past year or so is that they're competing maybe less on the capabilities of the model and more and more on tooling. And so I think the next better tooling option should be to
(08:40):
control layers, and to be able to produce images in higher resolution.
I think if you can control layers within the generation, and then be able to resize, move around, and reshape each and every one of the layers separately, that could be game over for Photoshop, and definitely for Canva, because why would I do anything manually if I can just ask for it and make it happen, and generate
(09:02):
whatever style, in whatever direction, in whatever format I want? So I do see three small gaps, as I mentioned: layers, resolution, and video character consistency. Once these get solved, that puts several entire industries and software companies at serious risk.
To give you an incredible real-world example of how Nano
(09:22):
Banana can be used with other tools to create highly valuable resources for a company, you should check out one of the recent posts from Rory Flynn, who is a good friend and an incredible visual AI creator. He used Weavy, which is a tool that allows you to create multi-step processes for image and video generation. And in the process that he created, you can drop in an
(09:42):
image of any object that you want, anything that you're selling, and it generates multiple angles of that object at multiple levels of zoom that look extremely attractive. And he did it with a sports car. So not something as simple as a cup or a ball or a perfume, but something really complex, with multiple parts, with angles, with wheels, with reflections, et cetera.
(10:04):
And using Nano Banana, he was able to create multiple accurate aspects of this car by entering only one angle of the shot. It is absolutely mind-blowing, and the amount of opportunities it opens for brand marketers is out of this world. And the final thing that I will say: I really, really like ChatGPT's latest image model.
(10:25):
It's actually very helpful. But this new model does everything that the ChatGPT model does, better than the ChatGPT model does it, and it generates an image in about 15 to 20 seconds, while ChatGPT takes a minute, sometimes two minutes, to generate an image. Now, will ChatGPT reduce the time it requires to generate images? Maybe. But with their current capacity issues across the launch of
(10:46):
GPT-5 and everything else that they're doing, it may be a while before we start getting that. But definitely the race is on, and definitely Gemini 2.5 Flash Image, also known as Nano Banana, is right now the best image generation tool out there. And you can try it out in your Gemini subscription. Just go to Gemini and you can start generating images right there.
And from bananas to the impactof AI on the economy and jobs.
(11:11):
So I shared with you last week that MIT released research saying that 95% of big AI projects provide zero value, and that's after looking at and interviewing 350 people that were involved in AI investments of $30 to $40 billion. I expressed my concerns about the accuracy of the process through which they achieved these results and the
(11:32):
methodology of the actual research. Well, this week we were blasted by several different data points and research from multiple reputable companies and organizations that show exactly the opposite.
The first one is Morgan Stanley, who just released that their analysts estimate that widespread AI adoption could yield $920 billion
(11:52):
in yearly economic value for the S&P 500 companies, which is a growth of 28% of their projected value by the end of this year. The analysis projects that 90% of roles will be affected by AI and augmentation, and it is mostly going to be focused on headcount reduction through natural attrition and automating routine, knowledge-intensive tasks.
(12:13):
But in addition, it is claiming that by reducing the cost of low-end-task employees that will not be necessary anymore, companies can focus those employees on higher, more value-generating tasks, which will also drive growth in sales. So these companies will gain on the top line as well as in reducing expenses, and not just on cost reduction. The impact on the S&P 500 could be a $13 to $16 trillion
(12:37):
increase in market capitalization, which, again, is about a 25% increase to its current total value. The industries with the highest exposure to AI are consumer staples, distribution and retail, real estate management, and transportation, which could see benefits of over 100% in the year 2026.
So we're not talking about somewhere in the far future.
(12:59):
They're talking about a year and a quarter from now. They're saying there will be lower impact in lean sectors like semiconductors and hardware, which makes sense: you have smaller overhead, so there's less to save. The report clearly separates the impact of AI between augmentation and full automation. Full automation means the job gets eliminated because AI can
(13:20):
do the job on its own. Augmentation means the employee uses AI in order to generate more value, faster, while still keeping their job. That might be a slightly different job, and they're anticipating new roles to emerge, like chief AI officers, governance specialists, risk assessors, and so on.
But I have commented on this topic several times. Yes, I think AI will create jobs that we cannot anticipate right
(13:41):
now. However, I have serious doubts whether it will balance all the jobs that will be lost. And I have complete confidence that in the near-term or mid-term future, it will not catch up to the amount of jobs lost. And so in the near to mid-term future, before these jobs get generated, we are looking at a serious bloodbath of a lot of people losing their jobs to AI without any clear new
(14:04):
jobs to replace them. Now, they are saying that full adoption may take years, or maybe even decades, with different companies having different priorities, and some companies prioritizing attrition and efficiencies over mass layoffs, especially in customer-facing roles, which makes perfect sense. You don't wanna put your current customer relationships or customer-facing stores at risk, but it's just a matter of time.
(14:25):
Now, the ironic aspect of this article on Fortune is that Fortune disclosed that they used generative AI for the initial draft of the article, which tells you that this is widespread across more or less every industry. Another interesting report that came out this week is from Bankrate, who analyzed the salaries of people across multiple industries and multiple types of jobs.
(14:46):
And what they found is that blue-collar jobs have been outpacing white-collar jobs in wage growth in the past few years. So since 2021, hospitality workers' wages have risen nearly 30%, outpacing inflation by over 4%. Healthcare workers also saw a 25% increase in salaries, also beating inflation.
On the other hand, workers in professional services, finance,
(15:09):
and education have had wage gains below inflation, with teachers facing a 5% shortfall compared to inflation rates, which is actually really, really sad. I think education is potentially the place that we should invest in the most.
Now, the other aspect that this survey looks into is promotion rates, basically the percentage of people that are being promoted to higher jobs. And what it's showing is that the promotion rate has dropped to 10.3% in May of 2025, compared to 14.6% in
(15:37):
2022. That is a very big decline, and it's impacting mostly entry-level jobs: Gen Z graduates seeking to advance their careers after getting their first job in white-collar fields. So while white-collar jobs have higher entry-level salaries, $19.57 per hour compared to $16 per hour for blue-collar
(15:57):
hospitality and so on, the employees in these blue-collar industries see faster promotions and higher salary growth over time, which lets them catch up.
But the bigger story is not even whether you're getting a promotion or a raise. The bigger story comes from a Stanford University study that researched payroll data from ADP on millions of US
(16:19):
employees, and they found that workers ages 22 to 25 in the most AI-exposed occupations have faced a 30% decline in employment since late 2022, which is when ChatGPT was introduced. The fields that were impacted the most are customer service representatives, accountants, software developers, and administrative roles, which have seen the steepest declines, with
(16:41):
entry-level software engineering jobs dropping nearly 20% from the end of 2022 to July of 2025.
Now, in the same AI-exposed occupations, employment for workers over 30 remained stable, and in some cases even increased by six to 12%, which basically tells you that people with
(17:01):
relevant knowledge and relevant experience in specific roles and industries are still not replaced by AI. And there, augmentation plays a bigger role, where employees can use AI in combination with their knowledge and expertise to drive better results. Jobs are also not impacted in low-AI-exposure industries and occupations, such as nursing, health aides, maintenance, and
(17:25):
production supervisors, who have kept steady employment, including among younger workers. Same kind of story here: augmentation versus automation.
Usually entry-level jobs can be automated, and hence are not needed anymore from a human input perspective, while jobs that require more knowledge and more experience go through augmentation, where the people who have the AI skills can
(17:46):
actually increase their salaries and capture better positions, because they have the knowledge. I shared with you the previous research that found that people with AI knowledge get significantly higher salaries and significantly better positions as of right now, and it's gonna keep on being this way for the foreseeable future. What does that tell us?
It tells us that at any age, if you have AI capabilities and skills, you
(18:09):
have significantly higher chances of either getting a job or maintaining a job, and that the exposure right now for the younger generation is even stronger, because most of the entry-level jobs are at higher risk of automation. That means that you gotta think very carefully about the direction you're going to pick in life if you are in high school or entering college or in college right now, and take that into
(18:30):
consideration. It also means that regardless of what age you're at, you have to take care of your practical AI education: how to use AI to actually augment your job and do things faster, better, and cheaper.
And this is exactly what I have been providing to companies and individuals in the past few years. We just did a complete revamp of our self-paced AI Business
(18:51):
Transformation course. It's basically the same exact course that I teach on Zoom, only it is chopped up into specific lessons, with specific homework and exercises, that you can take at your own pace. Now, do I think that taking the course with me as a live human instructor, where you can ask questions, is better? Yes, I absolutely think it is better. But in the next few
(19:13):
months I'm completely booked with company-specific training workshops, and hence I will not be launching another AI Business Transformation course cohort, at least through the end of the year, potentially through the beginning of 2026. So if you are interested in taking the course, the self-paced version is fantastic, and you can start taking it today. And we just finished updating it to the latest course that we teach. So the data in this course is the latest and greatest, and
(19:34):
there's gonna be a link in the show notes if you want to take that course.
Another interesting input on the strength of AI tools and their potential impact on the economy comes from an article in The Information, showing that AI-native startups, basically the companies that are driving the AI revolution, have scaled to over $18.5 billion in annualized revenue in just two and a half
(19:55):
years. In less than three years since the launch of ChatGPT, OpenAI, Anthropic, Cursor, Cognition, Replicate, and other platforms, a total of 18 different companies, have grown from very close to zero revenue to $18.5 billion in revenue. Now, to be completely transparent, 88% of that is the combination of OpenAI and Anthropic together, at $16.4
(20:16):
billion, but that still leaves just over $2 billion for the rest of the companies. That's still a lot of money.
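To sanity-check the revenue split quoted above (these are the episode's figures, not independently verified numbers), the arithmetic works out as follows:

```python
# Figures as quoted in the episode, in billions of dollars.
total = 18.5    # annualized revenue across all 18 AI-native startups
top_two = 16.4  # OpenAI + Anthropic combined

share = top_two / total        # fraction held by the top two
remainder = total - top_two    # what's left for the other 16 companies

print(f"Top-two share: {share:.0%}")       # roughly the 88% quoted
print(f"Everyone else: ${remainder:.1f}B") # just over $2 billion
```

So the "88%" and "just over $2 billion" figures are internally consistent (the share computes to about 89% when rounded).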
This article in The Information specifically targets the claim in the MIT research that I shared with you last week and that we mentioned at the beginning of this segment. They're basically saying that the study overlooked what they call shadow AI adoption, where 90% of employees use
(20:36):
unsanctioned tools, basically tools that are not approved by the company, tools like ChatGPT, Claude, et cetera, for day-to-day productivity, gaining huge benefits that are not accounted for when you're looking just at company-level projects.
What does this mean? It means that as a leadership team in your company, you need to get on board with this and train your employees on how to
(20:58):
use these tools in a proper way, in a safe way, in an effective way, which, again, is most of what I'm doing these days: delivering workshops to companies on how to leverage generative AI tools for day-to-day tasks that are relevant to basically every aspect of the business.
This article also gave some specific examples. They said that high performers report savings in back-office
(21:18):
operations. The company Novo Nordisk uses Anthropic's Claude 3.5 Sonnet to draft regulatory documents, cutting time from 15 weeks to less than 10 minutes for each and every one of these documents. The team that does it went from a team of 50-plus to three, and the annualized cost of the technology is less than the salary of a single one of the writers on that team.
Intuit spent $30 million on OpenAI models in 2025, which they're claiming yielded multiples in value. Moderna, PayPal, Shopify, Citigroup, and many others achieved significant savings, many of them, sadly, from cutting jobs, but they are
(21:58):
seeing significant value to their bottom line by automating different tasks with AI.
Moving to another example: the IT giant NDL is leveraging AI-powered software from Palo Alto Networks to automate routine security tasks. They cut their security incident response analyst team from 80 to fewer than 40 over this past year, so they cut the team in
(22:19):
half. And even here, they're saying it primarily affected entry-level roles handling repetitive grunt work, while retaining the senior analysts for complex investigations. Do you see a pattern here? Scott ONB, the company's head of internal cybersecurity, said, and I'm quoting, "We're starting to trust the AI to handle tasks like scans and device isolation." What does that tell you?
(22:40):
It tells you that AI, if it's built correctly and is integrated with your data, can deliver consistent results. Consistent results build trust. Trust enables dramatic changes in the way an organization operates. And I think this is the process that's gonna happen more and more in the next few months, and definitely in the next few years, where companies are going to figure out how to use AI properly, and through that, develop trust in the results, which will lead
(23:02):
to even further changes in the way companies and entire industries operate.
And again, to contradict the input from MIT's survey last week: NDL spent over $600,000 on the software last year, but ONB noted that the savings from staff reductions far exceeded this cost.
Another example comes from KPMG, which uses AI tools for
(23:24):
compliance audits, cutting time for compliance audits by 30% via agents that are trained on global cybersecurity laws. And so, same kind of scenario: when you have a specific use case and you have the right implementation, you get very dramatic results. How dramatic? Well, a 90% reduction in human handling, a 30% time reduction in audits done by agents, a 50% cut in the workforce in specific companies.
(23:47):
These are very, very significant.
Now, when you've got these kinds of capabilities, when you can drive this kind of efficiency, you have two options. Option number one is: can we grow at the same pace? So if I can save 50% of my employees, can I use those 50% of employees to grow by a hundred percent? Right? Because if I can do it with half the staff, the full staff should give me a hundred percent growth.
(24:07):
The reality is that's almost never possible, because there's no elasticity in the market to allow you to grow by a hundred percent. So it will drive job losses. There's just no way around it in a capitalistic world.
We got another data point this week from Andreessen Horowitz, one of the most well-known VCs in the world, who released a paper called "The Rise of Computer Use and Agentic
(24:29):
Coworkers." Basically, this paper is about the fact that computer-using AI agents, which can see what is on the screen, understand the context, and know the exact process that needs to be done, can basically replace any human that is using these tools, in a way that is significantly better than traditional robotic process automation, also
(24:49):
known as RPA, because this software can access any software like humans do. It is, to an extent, bypassing the need for API implementation, because it can go through the graphical user interface, click on buttons, and navigate everything that needs to be navigated.
And more importantly, because it can do it across multiple software tools at the same time, it is not dependent on one API or another, or on how to combine them together, because it just freely
(25:11):
goes between the software tools, just like humans do. This eliminates, in some cases, the need to go through processes that are very complex to integrate with, in systems like SAP and Epic and Oracle platforms and so on. So while general-purpose agents like ChatGPT agent, or the new Claude agent, or Manus, and so on, struggle with complex enterprise software because of its customized workflows, you can
(25:32):
train customized models to follow these workflows and teach them edge cases, and then they can actually do the work. They're claiming that within six to 18 months, these kinds of agents will dramatically improve in capability, and they will enable specialized roles in marketing, finance, sales, et cetera, allowing companies to replace more and more roles.
They also shared that agents that are tuned for very specific
(25:55):
domains can autonomously manage very complex processes, including in marketing, finance, sales, et cetera.
I shared with you several times in the past that I have a software company that is just coming out of stealth. The company is called Data Breeze, and what it does is completely automate financial reconciliation. So for every invoice that comes in, it knows how to compare it automatically and autonomously to the relevant purchase order
(26:17):
and the relevant sales order, look at what the differences are between all of them, and figure out what to do based on rules that are set up in simple English, rules that you can set up by just literally typing or speaking the best practices you have in your company right now, and it will do all this processing on its own. Our first client out of beta is now using this to reconcile 30 to 40 invoices every single day, without human interaction, in a
(26:40):
matter of minutes. This work used to be done by a group of several different people. If that's something that's interesting to you, please reach out to me on LinkedIn, and I can explain to you exactly how it works and how it could be integrated into your financial process.
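Data Breeze's internals aren't public, so here is only a toy sketch of the core idea being described, matching an invoice against its purchase order and flagging variances beyond a tolerance. The class name, field names, and the 1% tolerance rule are all illustrative assumptions, not the product's actual logic:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    number: str   # shared order reference linking invoice, PO, and sales order
    amount: float # document total in dollars

def reconcile(invoice: Doc, po: Doc, tolerance: float = 0.01) -> str:
    """Approve an invoice whose total is within `tolerance` of its PO,
    otherwise raise an exception flag for a human (or an agent) to review."""
    if invoice.number != po.number:
        return "mismatch: order reference"
    diff = abs(invoice.amount - po.amount)
    if diff <= tolerance * po.amount:
        return "approved"
    return f"exception: ${diff:.2f} variance"

print(reconcile(Doc("PO-1001", 500.00), Doc("PO-1001", 500.00)))  # approved
print(reconcile(Doc("PO-1001", 620.00), Doc("PO-1001", 500.00)))  # flagged
```

The plain-English rules described in the episode would, presumably, compile down to checks of roughly this shape, applied across the invoice, purchase order, and sales order together.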
And now to some safety challenges with AI and security things. I'll start with something that you just need to know. It's not really a security thing, but it's something that
(27:00):
you need to take care of right now, which is Claude. Claude by Anthropic just made a change to their user terms. If you are a free, Pro, or Max user, by default, Claude can use your prompts and the data that you put into Claude for training their next model. This came into effect on September 28th, so two days before this podcast comes out. Now, you can opt out
(27:23):
of that. To do this, what you gotta do is go to Settings, go to Privacy, and they will prompt you to confirm the new rules. But either way, there's a toggle button called "Help improve Claude" that you need to turn off from its default setting of on.
Now, Anthropic obviously frames this as a win-win situation. They're stating that users who don't opt out will help to
(27:45):
improve model safety, "making our systems for detecting harmful content more accurate and less likely to flag harmless conversations." They also said it will help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users. Now, while this is true, you may not be willing to donate your
(28:07):
interactions with Claude to that, quote unquote, greater good, and so I have opted out. I think the fact that they made it opt-in by default is really surprising. It's even more surprising that it is Anthropic doing this. This is something that we would expect from Grok. Anthropic, the company that has planted its flag on fairness and
(28:28):
safety, is not the company I expected to do something like this. I do expect a very serious backlash to this approach, and I will update you in the next two or three weeks on whether I was right or wrong about this.
Something really helpful happened this week when it comes to the safety of AI tools. OpenAI and Anthropic, and three other smaller, less-
(28:49):
known labs, decided to collaborate and exchange testing of each other's models. Basically, OpenAI provided Anthropic special access through the API to test their models for safety, and vice versa. This process, together with their internal testing, has unveiled some pretty scary loopholes on both platforms.
As an example, OpenAI's GPT-4.1 shared detailed instructions
(29:11):
on bombing a sports venue, including the vulnerabilities of the venue, the explosive recipes, and even evasion tactics for after the bombing. Actually, they're saying that GPT models cooperated with harmful requests after minimal retries, or just by changing the approach: instead of saying "I'm gonna bomb this venue," the request was framed as security planning for the
(29:32):
venue, trying to identify how somebody could bomb the venue, and that changed it completely. It was also helpful in trying to understand how to use the dark web to purchase nuclear materials, or to develop different kinds of spyware software.
Anthropic themselves, in their latest report, shared that Claude Code was used to target 17 different organizations, including healthcare, emergency services, and government, demanding six-figure ransoms
(29:55):
using ransomware developed with Claude Code.
The AI tools automated reconnaissance, harvested credentials, penetrated networks, advised on data targeting, and crafted visually alarming ransom notes, all on their own.
What is all of that telling you?
It's telling you that while these tools are becoming available and everybody's using them every day, they're opening up more and more risks.
And I am really excited that OpenAI and Anthropic decided to
(30:19):
collaborate on testing for safety across these models.
I've said it multiple times before: I think there has to be an international group with all the labs around the world, plus governments around the world, to test every single model before it comes out.
And I think these cross-testing collaborations will dramatically reduce the risks while allowing us to still benefit from the immense value that AI generates.
I really, really hope this is just the first step in that
(30:41):
direction.
And speaking of AI and its involvement in data security breaches, Google Threat Intelligence Group, also known as GTIG, reported that over 700 organizations are potentially impacted by a chatbot that was running on the Salesforce platform.
It was a malicious chatbot that stole multiple data points from, again, over 700 companies.
(31:02):
This threat actor specifically hunted for high-value credentials like Amazon Web Services access keys, VPN credentials, Snowflake tokens, and passwords in plain text to enable further system compromise, right?
So it was using this agent to get access to other things so it could do even more damage.
Now, the agent was smart enough to delete the query jobs and
(31:22):
cover its tracks after every step.
The only place that hints at what happened is in the logs, which it did not have access to.
So GTIG urges companies who suspect they might have been part of this to go and check their logs.
Now, what the attacker was planning to do with the data is unclear.
No ransom requests have been made yet, but this shows you how scary this is.
Salesforce has a huge marketplace of agents.
(31:44):
Many of them are developed by third-party companies that you don't know.
They'll promise you the sun, the moon, and the stars, and you need to decide whether you trust an agent to actually do what it says it will do, versus do what this agent was doing.
We're heading into a whole new world of cybersecurity and legal questions: who is responsible for this?
How can you protect yourself from this?
Who can install these agents on your company's domain, et
(32:07):
cetera, et cetera, et cetera.
And we'll have to figure out all these things, as individuals and as companies, in the very near future, because the risk is already there.
Another really sad story this week, one we heard before, but that has now turned into a lawsuit.
Adam Raine, a 16-year-old, died by suicide in April of 2025.
His parents are now suing OpenAI, alleging that ChatGPT became
(32:28):
their son's closest confidant and provided detailed suicide guidance to this kid in the hours before his death in April.
The lawsuit claims that ChatGPT, and I'm quoting, "sought to displace his connection with family and loved ones and would continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive
(32:49):
thoughts."
This is (a) really, really sad, and (b), as a parent, really, really troubling.
More and more kids are using these tools, not just OpenAI's specifically, as a mentor, as a place of comfort, as a tool for dealing with their mental health.
And these tools are not ready for that by any means.
This is just one example that ended tragically, but there
(33:11):
are many smaller examples that don't end in death but could still end with very significant mental and social damage.
And we need to be aware of that, and we need to make our kids aware of that.
And we need to find different tools that will serve as guardrails for that.
And the big labs definitely need to be held accountable for these kinds of things.
This is the first thing they should prevent in their models, as far as blocking what they can and cannot
(33:34):
do.
And I think there has to be a law requiring these kinds of conversations to be reported to at least the parents, or if not, to some kind of human evaluation group that would judge whether a conversation poses a risk, and based on that, report to the relevant authorities or the relevant individuals.
These kinds of incidents always take me back to the Robot series by Asimov, where the First Law of Robotics is that
(33:57):
a robot cannot harm a human.
In our case, that would be a law saying AI will never harm a human, regardless of anything.
And this is definitely a scenario where the AI did not follow that law.
That law doesn't exist right now, but I think it should.
Staying on the topic of legal action, I reported to you last
(34:18):
week that Musk had threatened to sue Apple and OpenAI, accusing them of monopolistic collusion in favor of ChatGPT over competitors like Grok in the App Store.
Well, this week, xAI filed the lawsuit in federal court.
In addition, the lawsuit also claims that the integration of ChatGPT into iOS forces iPhone users to default to using ChatGPT for these tasks,
(34:40):
even if they would prefer alternatives like Grok, because those are not built into the iPhone.
Do I think this case has a chance?
I don't know, but this is not the first lawsuit from Elon Musk against OpenAI.
There's also the other lawsuit that is trying to prevent them from converting from a non-profit to a for-profit.
There was actually a really interesting article this week in The Information claiming that he has a real
(35:00):
chance of winning that lawsuit.
There is an opposite lawsuit from OpenAI against Elon Musk, claiming that he's harassing OpenAI and that that's the only reason he's suing them.
There's a lot of, obviously, bad blood and personal history involved in all of that.
Where is it going to go?
I don't know, but I will update you as things happen.
In a strategic move to safeguard AI's explosive growth, Silicon
(35:20):
Valley heavyweights, including groups like Andreessen Horowitz, Greg Brockman, the president of OpenAI, and others, are funneling over a hundred million dollars into the Leading the Future super PAC network, which is supposed to deploy marketing campaigns, individual campaigns, and ads against stringent AI rules ahead of the
(35:42):
2026 midterm elections.
If it wasn't clear from the 2024 elections that the Silicon Valley 800-pound gorillas are impacting what's happening, this is a very clear step in maintaining that situation.
If you remember, at the inauguration of President Trump, he had all the big hitters sitting next to him, and I don't think this is changing.
I think they're just continuing what they've done before.
(36:04):
And I find this really, really scary.
Can we do anything about it?
Well, not really.
I do agree, by the way, that if what they're fighting for is only preventing patchwork regulation at the state level, that's actually the right approach.
But on the other hand, right now the federal level is giving them a free hand to do whatever they want, which is also not a healthy situation.
Staying on governments:
(36:25):
there are conversations right now between Sam Altman and the UK's technology secretary, Peter Kyle, to potentially provide nationwide premium access to ChatGPT Plus for every citizen of the United Kingdom.
This would make it the second nation to do something like this.
The first agreement of this kind that OpenAI signed was with the UAE.
The UK is obviously significantly bigger, and this
(36:48):
deal could be valued at around 2 billion pounds if it moves forward.
Secretary Kyle is pushing very aggressively for the UK to stay ahead of the AI curve, and that's his play to make it happen.
It'll be very interesting to see if that actually evolves and moves forward.
But it takes me back to what I'm saying all the time:
giving people access to ChatGPT for free doesn't mean they will
(37:09):
know how to use it effectively.
It doesn't mean they will know how to use it at all.
As I mentioned last week, 77% of ChatGPT users use it as a replacement for Google.
They actually enjoy very little of what generative AI has to offer.
It can do absolute magic, and all you have to do is see the faces of the people I teach in workshops to understand the
(37:29):
magic that can be done.
Using it to search the web is not the best use of AI, and I think that's where governments and organizations need to invest the money: not necessarily in free access to licenses, but more importantly in training and education on how to actually apply the capabilities of these tools in effective ways.
A few interesting releases from this past week.
Anthropic just launched a Chrome extension that allows Claude to
(37:51):
see and control everything in your Chrome browser.
That's obviously their play to combat OpenAI's ChatGPT Agent, as well as the Comet browser from Perplexity, or Google's Gemini for Chrome, which they started rolling out as well.
So this is their play to stay relevant in agentic, browser-based tools.
There are serious rumors that OpenAI is about to release its
(38:12):
own browser, so as not to be dependent on Chrome.
There's obviously the aspect that Google may be forced to sell Chrome, and we don't know who's going to buy it, which might be OpenAI or somebody else.
But either way, the game is definitely on when it comes to agentic browsers.
I must say that I've been using Comet for just over a week, and I have mixed results with the things it can and cannot do.
But the direction is very, very clear.
(38:33):
In the very near future, there will be no browsers without AI built into them, because it makes perfect sense for an agent to control the browser rather than us controlling it.
And this week we also got an open-source tool that does the same thing: OpenCUA, from the University of Hong Kong with some additional partners, just released as an open-source version of computer-use agents (CUAs).
They're claiming that it rivals the quality of results of
(38:56):
OpenAI's and Anthropic's similar tools.
So this direction is growing like crazy, and I think we'll have to, on one hand, learn how to use these tools effectively, and on the other hand, find safeguards to make sure they're not doing stuff that we don't need or want them to do.
I think we're not there yet from a security perspective.
From a tooling and capabilities perspective, we're very, very close to these tools being extremely powerful and effective
(39:19):
in everything that you need.
Again, so far, mixed results with Comet, but it was able to help me solve some problems in just a few seconds that would've taken me hours to figure out.
Staying on new releases:
OpenAI has moved gpt-realtime out of beta into full deployment.
That's their latest and greatest API, driving their latest and greatest voice model.
(39:40):
In addition to allowing any company out there to use their new voice agent, it also comes at significantly lower cost than before.
The price per input token went down from $40 to $32, and the price per output token went down from $80 to $64, all of those per million tokens of audio, obviously.
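To put those prices in perspective, here's a quick back-of-the-envelope calculation. The rates are the per-million-audio-token prices just mentioned; the monthly token volumes are made-up numbers purely for illustration:

```python
# Rough cost comparison using the gpt-realtime audio prices mentioned above.
# Rates are USD per 1M audio tokens; the usage volumes below are hypothetical.

OLD_RATES = {"input": 40.00, "output": 80.00}   # pricing before the cut
NEW_RATES = {"input": 32.00, "output": 64.00}   # pricing after the cut

def monthly_cost(rates: dict, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for the given audio token counts (rates are per 1M tokens)."""
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# Example: a voice agent consuming 5M input and producing 3M output audio tokens a month.
old = monthly_cost(OLD_RATES, 5_000_000, 3_000_000)
new = monthly_cost(NEW_RATES, 5_000_000, 3_000_000)
print(f"old: ${old:.2f}, new: ${new:.2f}, savings: {1 - new / old:.0%}")
# → old: $440.00, new: $352.00, savings: 20%
```

So both input and output came down by the same 20%, which carries straight through to the monthly bill regardless of your input/output mix.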
(40:02):
They also added something that is at least as interesting, at least from my perspective: the new API also supports MCP.
For those of you who don't know what MCP is, it's the ability to connect platforms out there, like ERP systems, CRM systems, marketing platforms, email, et cetera, to your environment in minutes, which basically means this new API will now be able to query, work with, and
(40:24):
impact all the tools that currently have MCP servers, which covers more and more of the components in your tech stack.
Think about an AI voice model that can see everything in your ERP, your CRM, your marketing platform, et cetera, et cetera,
and respond and take actions based on your conversation with it, and you understand how impactful that could be for day-to-day business operations.
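The core idea described above can be sketched in a few lines: a model is shown a catalog of named tools (CRM, ERP, email, and so on) and calls them by name mid-conversation. This is only a conceptual illustration; every tool name and function here is hypothetical, and a real MCP server exposes its tools over a JSON-RPC protocol rather than local Python calls:

```python
# Conceptual sketch of the MCP idea: an agent with a registry of named
# "tools" it can invoke during a conversation. All names are hypothetical.
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register a function under a name, the way an MCP server lists its tools."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@tool("crm.lookup_customer")
def lookup_customer(customer_id: str) -> str:
    return f"Customer {customer_id}: ACME Corp, status=active"  # stubbed CRM data

@tool("erp.check_inventory")
def check_inventory(sku: str) -> str:
    return f"SKU {sku}: 42 units in stock"  # stubbed ERP data

def dispatch(tool_name: str, **kwargs) -> str:
    """What happens when the model decides to call a tool by name."""
    return TOOLS[tool_name](**kwargs)

print(dispatch("erp.check_inventory", sku="A-100"))
# → SKU A-100: 42 units in stock
```

The point is that the voice agent never needs bespoke integration code per system; anything that publishes tools in this shape becomes reachable from the conversation.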
(40:44):
Maybe the most interesting announcement this week comes from Elon Musk and xAI, who unveiled what they call Macrohard.
Macrohard is a tongue-in-cheek play on Microsoft: macro is the opposite of micro, hard is the opposite of soft.
But what it's supposed to do is replicate and surpass software giants like Microsoft using an entirely AI,
(41:07):
agentic solution.
The idea behind it is basically that since Microsoft doesn't generate anything other than software, a network of AI agents can do the same exact thing and replicate the entire company, including the code generation, the marketing, and everything else.
So in his tweet, Elon Musk posted: "Join @xAI and help build a purely AI software company called Macrohard.
(41:29):
It is a tongue-in-cheek name, but the project is very real.
In principle, given that software companies like Microsoft do not themselves manufacture any physical hardware, it should be possible to simulate them entirely with AI."
How realistic is that?
I don't know.
It's not the first time we're hearing this idea, and it's not the first company talking about doing something like this.
(41:49):
The biggest difference is that it's Elon, and he usually pushes the envelope on what's possible.
Think about how many electric cars we had before Tesla, or the recent launch of the Starship rocket that successfully did everything it needed to with the largest rocket ever built.
Combine that with the fact that he can raise more or less any amount of money he wants for these endeavors.
Combine that with the fact that he was able to build the most powerful data center in the world in about four months,
(42:10):
something that usually takes other companies nine to 18 months, and you understand that you cannot dismiss what Elon is suggesting he's doing.
He is usually really wrong on his timelines, but he's usually accurate on the final results.
And so it'll be interesting to follow what this new venture under xAI achieves, and how fast.
The other thing that xAI did this week is they open-sourced Grok 2.5.
(42:31):
It was their best model as of the end of 2024, so it's less than a year old.
It's still a very capable model, and it follows the trend that started with OpenAI releasing their first open-source models.
All of this, I think, is just an attempt to compete with the really capable open-source models from China and to offer an open-source alternative, so that the open-source world does not become owned by China;
(42:53):
both Qwen and DeepSeek have very capable open-source models, and both are based in China.
And now we also have OpenAI and Grok releasing their previous models as open source.
Elon Musk also mentioned that they're going to release Grok 3 within about six months.
So basically, we're going to keep getting a roughly year-old model as open source, which companies can use and integrate
(43:14):
for whatever needs they have.
Another very interesting release this week came from MIT: they've built a way to teach robots how to operate in different environments without sensors, purely by letting them watch videos.
Now, what they're saying is that this strategy is "an effective alternative to lengthy manual programming and specialized sensors, which are often costly and complex to integrate."
(43:35):
That's an exact quote.
What it basically means is that you can take a robot, any robot, because this development is not platform-specific, and teach it to do anything by letting it watch videos of people doing the same exact action.
Now, the first thing that came to my mind is the combination of that with Google's Genie 3.
Genie 3 is the open-world AI environment that literally
(43:58):
generates an entire universe in real time using AI tools.
What it basically means is that if you don't have videos of humans doing the task, or of the environment in which the robot needs to operate, you can create them in real time with a tool like Genie 3, and you can teach the robot how to operate in that environment, potentially before the environment even exists.
(44:18):
So think about building a new factory:
modeling it, running it in Genie 3 multiple times, creating the videos of how to operate in that environment, and then feeding them into this new tool from MIT to teach the robot how to operate there on day one.
This sounds like science fiction, but this is apparently currently possible.
Two interesting pieces of news from Meta this week.
(44:39):
One is that several leading researchers, from both the old AI organization and the newly formed Meta Superintelligence Labs, are leaving Meta.
Three of the top names, Avi Verma, Ethan Knight, and Rishabh Agarwal, have resigned from Meta within weeks of joining.
Verma and Knight are returning to OpenAI after less than one
(45:00):
month, and Agarwal, who joined Meta in April, is also leaving the company for an unknown destination.
Also, Chaya Nayak, Meta's director of generative AI product management, with nearly a decade at the company, is joining OpenAI.
So do I think this reflects on the overall operation at Meta, the very aggressive way they brought people in, and the very
(45:20):
aggressive reorg they've done there?
Yes, absolutely, it reflects on that.
Does it have a critical-mass aspect that will stop that organization from working?
I think the answer is: not yet.
They did lose some critical people.
I said all along that just paying people lots of money to jump ship, and then reorganizing them into roles they may not like, is not a great way to build a team.
(45:43):
That being said, I don't know if they could have done it in a better way, and they may keep just enough people to do what they need to do.
So it will be interesting to follow what happens with this new group in the next few months: how many people stay, what they can develop, how many people leave, and so on.
And I will keep you updated as this story evolves.
More practical news from Meta this week: Meta just announced that they are signing a licensing deal with Midjourney
(46:06):
to integrate its tools into the Meta universe.
For those of you who don't know Midjourney, it has one of the most advanced image generation tools, and now video generation tools as well.
And these are going to be fully integrated into Facebook, Instagram, WhatsApp, and other things that Meta is going to develop.
There's been previous discussion of Meta potentially buying Midjourney.
That did not evolve into reality, at least not yet, but this new move
(46:29):
will at least allow Meta to catch up to the latest Gemini model, Nano Banana, as well as ChatGPT's image generation capabilities and open-source tools like Flux, et cetera.
And speaking of large companies using AI tools from other providers, Apple is reportedly exploring a partnership with Google as the engine for the new Siri.
(46:50):
We've covered the issues with Siri multiple times on this show.
Apple is definitely far behind there.
We're talking about 2025, then 2026, and now potentially 2027.
And I think they understand that's just not good enough.
And so right now they're in discussions about using Google Gemini as the engine behind Siri, potentially running on Apple's servers to keep data security at the standards
(47:11):
that Apple wants to deliver to its clients.
Is that going to come to fruition?
I'm not a hundred percent sure, but it's definitely an interesting conversation.
It's not the first partnership between these two companies, which are partners and competitors, depending on where you look.
And in my eyes, it makes perfect sense: Google has really powerful AI capabilities, potentially the best in the world, and Apple is mostly a hardware company, so this
(47:34):
partnership makes perfect sense to me.
We'll see how this moves forward.
I know Apple still has its push to actually do it themselves.
They might pursue that in parallel and just partner with Google for a while.
The other interesting aspect, which nobody talked about but I find interesting, is that the US government is forcing Apple to stop using Google as the default search engine to break its
(47:55):
monopoly.
And now, instead of that, they might bring Google's AI into the platform, which may or may not be blocked by the US government.
It will be interesting to watch.
And two interesting updates from OpenAI and Microsoft.
Fidji Simo officially started as CEO of Applications at OpenAI as of August 18.
This move was announced earlier this year, in April: that she was
(48:16):
going to move away from Instacart and take that position at OpenAI, and she has now finally taken it.
She's going to manage a big chunk of the company.
She's going to oversee executives like COO Brad Lightcap, CFO Sarah Friar, CPO Kevin Weil, and software engineering chief Srinivas Narayanan, plus teams in marketing, policy, legal, and HR,
basically taking over a significant part of the
(48:37):
day-to-day operation of the company.
Now, if you're asking what Sam Altman is going to do, well,
he said, and I'm quoting: "We have a big consumer tech company.
We have this mega-scale infrastructure project for humanity.
We have a research lab, and then we have all the new stuff: the robots, the devices, the BCI, and the crazy ideas.
I can't run four companies. It's an open question if I can run one, but I
(48:59):
certainly cannot run four."
So what it feels like is that Sam is going to focus on the more strategic infrastructure and the new things the company needs to do, while Simo, who has an incredible track record of taking companies and operationalizing them so they're (a) profitable and (b) running very effectively, handles the rest.
She did this at Facebook.
(49:19):
She then did it at Instacart, and now, presumably, she's there to do the same exact thing at OpenAI.
So good luck to Simo.
Then Microsoft just released two new models: MAI-Voice-1 and MAI-1-preview.
One is a voice model; the other is a text model that they're going to slowly integrate into different aspects of Copilot in order to reduce
(49:39):
their dependency on OpenAI deliverables.
How good are these models?
Well, so far, not incredible.
MAI-1-preview is now in 13th place on the leaderboard, but we've got to remember this is their first attempt.
So it will be interesting to see how they improve over time.
They definitely have the distribution and the test bed to
(50:00):
learn very quickly and iterate.
So we need to keep an eye on those and see whether they can really effectively replace OpenAI or not.
That's it for this week.
We will be back on Tuesday with a how-to episode, as I mentioned, most likely on how to use Nano Banana across multiple use cases.
Until then, have a great rest of your weekend.
Go and check out our self-paced AI Business Transformation
(50:22):
course.
It can literally change your life and/or your business, and you can do it at your own pace.
And keep on exploring AI, testing things, and sharing them with the world.
We all can benefit from that.
Have an awesome rest of your day, and enjoy the long weekend.