
May 3, 2025 • 55 mins

👉 Fill out the listener survey - https://services.multiplai.ai/lai-survey
👉 Learn more about the AI Business Transformation Course starting May 12 — spots are limited - http://multiplai.ai/ai-course/ Save $100 with promo code LEVERAGINGAI100 

Is your AI assistant more trustworthy than your last hire?

From agent background checks to economic stagnation (despite AI adoption), this week’s episode dives into the critical — and sometimes comical — ways AI is reshaping the modern workplace.

You’ll hear why smarter tools don’t always mean smarter business outcomes, how AI ethics protocols are going mainstream, and which job roles are already on the chopping block thanks to automation. Also: Meta's bots get inappropriate, Microsoft’s Copilot sees everything, and a $60B lab still hasn’t given up on smart glasses.

Bottom line: AI is moving fast, but business value and regulation are trying to catch up. Stay sharp, lead smart.

💡 In this session, you'll discover:

  • Why Carnegie Mellon’s new protocol is giving AI agents “digital résumés” — and what that means for hiring AIs.
  • The reason AI time savings aren’t translating into better productivity (hint: fragmented tasks + no strategy).
  • How Zapier’s new integration with Claude may change the game for small businesses.
  • The staggering reality behind tech layoffs — and the single most important skill for job security.
  • Why students and employees now prefer AI over mentors or managers (and why that’s a problem).
  • A 10–20% chance AI wipes us out? The “godfather of AI” thinks so.
  • Meta’s AI bots roleplaying with minors and how it sparked a safety scandal.
  • Microsoft’s “Recall” feature sees everything — and security teams are sweating.
  • How Apple and Amazon are (finally) racing to catch up in AI assistant tech.


About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hello and welcome to a Weekend News episode of the Leveraging AI Podcast, a podcast that shares practical, ethical ways to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and like every week we have a jam-packed episode because a lot has happened in the AI world this week. We are going to start with three interesting deep dive topics. We're going to talk about the new integrations of the agent

(00:22):
universe between themselves, as well as tools and systems that we use. We're going to talk about the impact of AI on the job market, current and future, and we're going to talk about risks in AI that keep on rising, and then we're gonna dive into a very wide range of rapid fire topics with some interesting news from all the usual suspects: Microsoft, Anthropic, OpenAI, and

(00:45):
more, as well as some interesting developments in robotics. So let's get started. As I mentioned, our first topic is going to be about agents and integrations of AI into different systems that we know. And in the past six months, we've seen two very important steps in the direction of having more agents basically

(01:07):
everywhere. One is MCP, which is the open source protocol that Anthropic has released that allows AI agents to connect to tools and data, which is a critical aspect, and that's taken the agent world by storm. And now basically everything has an MCP server, which allows you to connect multiple tools and systems and databases into AI agents that you're developing in a standardized way, and that has

(01:29):
dramatically changed the way companies need to approach integrations of agents into existing systems and tools.
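To make MCP a little more concrete, here is a minimal sketch of what an MCP server can look like using Anthropic's open source Python SDK. The tool name and the stub data are hypothetical, purely for illustration; a real server would query an actual system.

# A minimal MCP server sketch using the official Python SDK
# (pip install "mcp[cli]"). The tool and its data are made up.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("invoice-demo")  # the server name agents will see

@mcp.tool()
def get_invoice_total(invoice_id: str) -> str:
    """Return the total for an invoice (stub data for illustration)."""
    fake_db = {"INV-001": "$1,250.00", "INV-002": "$98.40"}
    return fake_db.get(invoice_id, "unknown invoice")

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so any MCP client can call it

Any MCP-capable client can then discover and call get_invoice_total without custom integration code, which is exactly the standardization point being made here.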
The second one was announced recently, just a couple of weeks ago, by Google, and it's called A2A, which is another open source protocol that is supposed to establish how one agent talks to another agent, hence A2A: agent-to-agent. But now there is a new protocol that was released by Carnegie

(01:51):
Mellon researchers that is called LOKA. And it's a protocol that standardizes AI agent identity, accountability, and ethics. So if you think about it, this is a brilliant approach that will allow you to give an agent, if you want, a name, but also a background check. So it will allow every agent to be identified and to know what

(02:15):
it has done before, both from an ethical perspective as well as from a success perspective in specific aspects of work. And the goal here is to allow humans and agents, before they hire or start working with a new agent, to basically do a background check about it and see, on different layers (one of them is an ethics layer, another one is an accountability layer, and so on),

(02:35):
what this particular agent's performance was in the past. I think this is absolutely brilliant, and I assume we will see mass adoption of this as well, and maybe not from Carnegie Mellon, but if not from them, from somebody else following the same kind of ideas. Think about the human parallel of this. If you are going to hire a person, you will do some kind of

(02:55):
a check about their experience. You will ask for recommendations. If you're going to hire somebody from Upwork to do contracted work for you, you're gonna check their reviews, you're gonna check how many jobs they completed successfully, and so on. It is the same thing, only about AI agents, which makes a lot of sense to me.
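As a rough sketch of what such an agent background check could look like in practice (the field names here are my own illustration, not the actual LOKA specification), a registry entry might hold something like this:

# Hypothetical sketch of an agent "background check" record, loosely
# inspired by the LOKA idea of layered identity, ethics, and
# accountability. Field names are illustrative, not the actual spec.
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str                      # stable, verifiable identity
    operator: str                      # who is accountable for the agent
    ethics_attestations: list[str] = field(default_factory=list)
    completed_tasks: int = 0
    failed_tasks: int = 0

    def success_rate(self) -> float:
        total = self.completed_tasks + self.failed_tasks
        return self.completed_tasks / total if total else 0.0

# "Hiring" check: look the agent up before delegating work to it
record = AgentRecord("agent-7f3a", "acme-corp",
                     ["no-PII-exfiltration", "audit-log-enabled"], 142, 3)
print(f"{record.agent_id}: {record.success_rate():.1%} success rate")

This mirrors the Upwork analogy: identity, who vouches for the agent, and a verifiable track record, all queryable before you delegate work.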
Anthropic, on the other hand, just announced on May 2nd that they now support integrations directly from Anthropic to

(03:17):
leading productivity apps that we use in multiple businesses, such as Asana and PayPal and Zapier. And it also allows you to use the recently announced Research feature, basically a deep research capability that can run searches of up to 45 minutes, to now query these tools that you allow it to connect to. So it can now search through your entire Asana boards and

(03:38):
everything anyone has put in any task in there, and provide answers based on that information within Claude Research, combining it with web search as well. This is currently already available through the Max, Team, and Enterprise plans. So not everybody, but it is available to anybody with a business-oriented license level.

(03:59):
Now, Anthropic has been pushing very, very hard on two aspects in its growth. One has been the coding world, and we talked about this many times. And the other is definitely their enterprise appeal, and that has driven their revenue from a $1 billion annualized rate in December of 2024 to $2 billion right now, just four months

(04:19):
later, which is incredibly impressive, and it shows how building tools that appeal to the enterprise can drive huge and mass adoption of AI-specific tools. I also find the Zapier integration very interesting. I didn't get a chance to test it yet; again, this was just announced a couple of days ago. But if it will do what I think it will do, meaning you'll be able to talk to Claude, and Claude will be able to bring

(04:41):
information from any application that Zapier is connected to, which is practically any application we know, it means you'll be able to research data from anything you want without any need for additional integrations. I think this will provide huge value, including to small businesses who do not have an IT team to do these integrations,

(05:01):
because just with one connection to Zapier, now you can connect and query data from a huge variety of sources. Again, I see this as a huge power multiplier, and it'll be very interesting to see how people apply this. I'm sure we're gonna start seeing very interesting use cases in the next few weeks, and I will definitely share with you my personal experience as well as what other people are doing with it. But staying on the topic of productivity in the business, a

(05:23):
very interesting research was shared by Forbes this past week, and this research is showing that the saved time from generative AI implementation and automation doesn't necessarily lead to higher-value-added tasks, as the anticipation or the promise is. So a Microsoft survey found that Copilot users saved 14 minutes every single day.

(05:44):
But because it was so fragmented, in small increments of sending an email here or replying to something or doing a little research there, it's not aggregated in a way that makes it meaningful enough to actually be replaced by high-value tasks. The other reason they found that this saving isn't getting translated into high-value tasks is the fact that companies usually don't have a backlog of

(06:07):
high-value tasks. It's just something that is already distributed between different people in the company. And the fact that you now have a few minutes to work on something doesn't mean you can click someplace and find high-value tasks that you can do in order to make the company overall more efficient. And what it's basically suggesting is that one of the things companies need to do from a strategic perspective is

(06:28):
reevaluate the way they distribute work and also reevaluate how they plan the different types of tasks that need to be performed, in order to deliberately have the savings more aggregated in specific areas, and have tasks ready for the relevant people to perform in order to gain the benefits. Otherwise, what's actually happening is: yes, the AI is saving

(06:49):
time, but there's no way to actually use that time for more effective or higher-value tasks, which means you're still paying your employees the same thing and you're still getting the same amount of output. You're just giving your employees more, quote unquote, free time in between other tasks. The other thing that they found is that significantly bigger tasks, like decoding 30-page spreadsheets or flagging

(07:11):
invoices for discrepancies, are obviously significantly more valuable than just drafting emails or answering customer service complaints, which are smaller and more fragmented tasks. I don't know if I ever shared this on this podcast, but I have a startup company that I'm running in addition to running Multiplai, and what that startup company does is it actually flags invoice discrepancies and

(07:33):
updates ERPs and accounting systems based on that. And so I'm really excited to hear it is one of the most valuable use cases that Microsoft just found in their survey. But the bottom line is, if you're running a business and you have multiple employees, you have to start thinking how to change the way you approach task assignment and how you aggregate tasks in a

(07:54):
different way than you're doing right now, so the company and each employee of the company can benefit from AI automation and then translate those savings into more valuable solutions. Another interesting survey on the same topic that was released this week was done in Denmark, and it surveyed over 25,000 workers across 7,000 companies in Denmark.

(08:15):
The survey was done through 2023 and 2024, and what they found is that despite AI automation, actual economic outcomes remained unchanged. And the specific quote in the summary of this research says, "When we look at the outcomes, it really has not moved the needle." Now, the study focused specifically on roles that are

(08:35):
clearly impacted by AI: accountants, customer support, financial advisors, HR, IT support, journalists, legal professionals, marketers, office clerks, software developers, and teachers. So all these roles that are often deemed AI-vulnerable. What they found in the research is that AI users reported saving only about 2.8% of work hours, which translates into

(08:57):
about one hour per week, which is a lot less than the expected gains. That's despite the fact that 64 to 90% of workers, depending on the specific role, said they are using AI tools that are delivered and

(09:17):
driven by company investments, so not their own personal usage. So here are my thoughts on this particular study. First of all, this study is from 2023 to 2024, and I think there have been huge changes in the way companies and individuals learn how to use AI between 2023 and now, and even 2024 and now. And so I think that's problem number one with this research.

(09:38):
Problem number two, which we'll touch more upon with some additional news from this week, is that one of the biggest gaps that companies have today is training, versus just delivering the tools, which I see time and time again when I meet with companies. Oh, we gave everybody Copilot licenses. Oh, we now have the Team license or the Enterprise license of ChatGPT. But what about training? If you don't train your employees and teach them exactly

(09:59):
how to use these tools, in what use cases they're helpful, and in what use cases you shouldn't use them because they may raise issues, then you are not going to gain the benefits, and you might expose your company and your employees to achieving actually harmful results for your company. And so this might be another thing that is not mentioned in this research; all they measured is savings. I can tell you about specific use cases where some of my clients are

(10:22):
saving an hour and a half to two hours a day for some specific employees in some specific use cases, or several hours per week, and not just one hour. And in many cases, it saves tasks that are on the critical path of what these employees are doing. So in addition to the fact that it's actually saving them time, it is achieving critical company milestones in a more consistent

(10:45):
and faster way, which is a huge benefit to the company. So while it's an important data point, I don't know how well this research was done. I don't know how well these companies train their people, and the data is not from 2025 or the past six months. It's actually up to two and a half years old, which I think makes a lot of it less relevant. Now, on the flip side of that, the recent report about tech

(11:06):
layoffs shows a pretty gloomy view of what AI is doing to the tech world. So 54% of tech hiring managers expect layoffs in 2025, with 45% of these layoffs tied to AI and automation as their drivers. Now, the main reasons that were suggested for

(11:27):
expected layoffs include risk of being replaced by AI (45%), outdated skills (44%), underperformers (41%), deprioritized projects (33%), and employees working remotely (22%). Here is the underlying thing in all of these, and I'm putting aside the working remotely reason because I think that's mostly an excuse in many cases, either for the employee or for

(11:50):
the employer. But all the other reasons have to do with skills and success and being able to use AI in a more effective way. Because if you know how to use AI in a more effective way, you don't have outdated skills, you're probably not gonna be underperforming compared to other people, and it's gonna be harder to replace you, because AI is driving the replacement. And so, going again to training yourself as an individual,

(12:13):
making sure that you have the skills and the knowledge of how to use AI in your job could save your job, at least for a while. Now, the flip side of this, which again highlights what I'm going to say: the retention priorities of these hiring managers are high performance (62%) and top talent (58%), which to me means the same thing. How do you know somebody's talented, if not based on their performance?

(12:34):
Then AI-skilled workers (57%), and those on priority projects (54%). So if you're on a priority project, okay, you are on a priority project; you're probably gonna keep your job as long as you're performing. But the other categories all fall into the same thing: if you know how to use AI better than the average employee, you have a much higher chance to keep your job. Now, the happy news is that 69% of tech hiring leaders predict

(12:58):
that AI advancement will create new roles, emphasizing the need for AI expertise. Going back to: AI expertise is gonna help you keep your job, or maybe even get a better job than you have right now. The most staggering information of all of this is that 76% of hiring leaders believe that employees that are supposed to be laid off could be reskilled, and yet the companies

(13:20):
are lacking the relevant training programs in order to keep those employees and generate more with them by allowing them to use AI. To put things in perspective, based on Layoffs.fyi, over 51,000 tech workers lost their jobs just this year. This is just over one quarter: 51,000 across 268 firms, and that's

(13:42):
just the data that they have. The numbers might be significantly higher. So what does that tell you as an individual? The same thing that we said about companies. If you want to secure your job and your career, learning AI skills might be the most important thing you can do right now in order to achieve that goal. Now, this is just the beginning, and what I mean by that is

(14:02):
there's a new paper by two AI pioneers, David Silver and Richard Sutton, that predicts what they call the "era of experience," where AI agents learn autonomously from real-world interaction without human data. Basically, the concept of agents that we are now seeing exploding and being deployed more and more,

(14:23):
combined with improvements in AI coding its own code and doing its own research, will lead to a scenario where agents will learn on their own. They actually won't need us to teach them anything. There are already several examples like that, like NVIDIA's DrEureka, which is designed to basically dynamically reward itself to learn new tasks in the real world.

(14:45):
Combine that with what we spoke about before in previous episodes, like MCP, which will give them access to real-world tools in a standardized way, and A2A, their ability to communicate between one agent and another in an efficient way, and it tells you that future AI will move beyond human-like processes. Meaning, what we think about reasoning right now, and the way

(15:05):
we work and we communicate, is the way we've modeled AI tools and agents so far. But as soon as they can start learning on their own, they can go way beyond that, leaving us far behind. So if you think about how AI tools and agents work so far, they work a lot based on reinforcement learning: humans tell them what's good or bad. They will be able to do this on their own, just based on seeing

(15:26):
if they're achieving goals and results, and they can build their own reward mechanisms. And then we basically lose control of where they're going, how they're communicating, and how fast they can evolve.
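To illustrate the shift being described here, a toy sketch: instead of a human labeling outcomes, the agent scores its own attempts against a measurable goal and keeps whatever scores best. Everything below, the goal function included, is purely illustrative.

# Toy illustration of self-assessed reward: the agent judges its own
# attempts against a measurable goal, with no human in the loop.
# All names and the goal function are illustrative.
import random

def self_reward(attempt: float, goal: float = 42.0) -> float:
    # the agent's own reward signal: closer to the goal scores higher
    return -abs(goal - attempt)

best_attempt, best_score = None, float("-inf")
for step in range(1000):
    attempt = random.uniform(0, 100)   # stand-in for an agent action
    score = self_reward(attempt)       # self-evaluation, no human label
    if score > best_score:
        best_attempt, best_score = attempt, score

print(f"best attempt after 1000 tries: {best_attempt:.2f}")

The point of the sketch is the loop structure: once the reward comes from the agent's own evaluation rather than from human feedback, nothing in the loop requires us anymore.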
Now, this may sound like science fiction, but this is the direction everybody's pushing it. And hence, I think the only question is not if we get there, but when we actually get there. And even when that point arrives, there's still gonna be a role for humans in this world and in the workforce.

(15:49):
But again, it is going to be the people who know how to work beside these AIs in the most efficient way. Another interesting, relevant piece of news that touches on how AI is impacting both universities as well as the workforce is research that has found that workers and students are increasingly relying on AI chatbots like ChatGPT and Claude and so on for guidance and mentorship in the workforce and in

(16:14):
universities, instead of their professors or their managers. And the benefits are obvious: it's judgment-free, it does not expose your issues to your higher-ups, whether they're professors or managers, it's available 24/7, and you can ask it about anything. This research also cites that most of these interactions are happening after work hours or after university office

(16:34):
hours, after 5:00 PM. One of the people cited in this article is David Malan, a Harvard professor, who notes that students value AI tutors for their infinite patience and willingness to answer any question, no matter how basic or how many times you ask it. This is not something you're gonna get from a professor or from a manager. Now, this opens a huge can of worms. I can tell you that yesterday at dinner at my house, there were

(16:57):
about 20 people. Many of us sat around the table, and somebody brought up the concept of AI and how they're using it at work and in research. And I was mostly monitoring the conversation to see what people are saying. And it's amazing how people start relying on these AIs to provide them with the right answers about things they're trying to learn about. And I don't know how many people actually understand the

(17:17):
potential of these AI tools, and later on agents, to provide you with wrong answers. But even if they do provide you with the right answers, we as humans and people who are gonna rely on this more and more are gonna suffer more, are going to lose critical thinking skills and, more importantly, based on this article, networking skills. So if you are a young employee or a young student, your ability

(17:41):
to make connections, human relationships with other people, is gonna have a bigger impact on your career than your immediate ability to do specific tasks with or without AI. In the courses that I teach and in the lectures that I give, I emphasize multiple times that human relationships are becoming an even more important component of our lives than before,

(18:03):
because the day-to-day stuff, everybody will be able to do really, really well. So what's gonna differentiate one person from the other, whether to bring them onto a job or not, is gonna be, A, their ability to work with AI, but, B, their human relationships with the people around them who will want to work with them. And the same thing is gonna work on the company level. So being able to develop human relationships is a critical

(18:26):
component of our lives so far, but it's gonna become even more critical. And relying more and more on AI for mentorship and examples and help will not just reduce your ability as a human to do that, but will also hurt your human relationship time and skills. And so I think this is another thing that companies and universities need to address right now in order to enable the

(18:47):
next generation not to fall into this problem. That doesn't mean, by the way, that you shouldn't use AI and consult with it on specific things. I think the trick is to find the right balance and to be highly aware of where AI is the right choice and where it's not. And that comes back again to training, education, skills, knowledge, and AI experience.

(19:07):
With all of that in mind, I will remind you, if you've been listening to this podcast for a while, you know that I've been teaching the AI Business Transformation Course for over two years, at least once a month. So hundreds or maybe thousands of business people, most of them business leaders, have gone through that course. The goal of the course is to do exactly what we have been talking about, which is to allow you to understand how AI is

(19:30):
applied in an actual business context. It's not fluff, it's not conceptual, it's not theory. It's use case by use case, category by category, different aspects of the business and how you can apply AI, either for yourself, or for your department, or for your team, or for your entire company. And we truly go across different aspects of the business and different AI tools and different capabilities, and learn them one

(19:51):
by one and experiment with them one by one. So in four weeks, two hours a week, so a total of eight hours, you will go from wherever you are right now in your AI journey to a completely different level that may save your career and/or your company, just by making this very small investment. And so if you are interested in something like this, the next cohort starts on May 12th, which, when this podcast goes out, is

(20:13):
just over a week out. And don't miss that opportunity, because the next course will probably open only a quarter later. We teach these courses all the time, but the vast majority of these courses are private. So if you are in a leadership position and you want to train your people, you can contact me for that as well, and we can set up a course for you and your team. But if you just want to join the public course, the next

(20:34):
public course will probably be around August. So do not wait. And there's a link in the show notes. You can find the link and sign up. And if you use the promo code LEVERAGINGAI100, it will give you a hundred dollars off the price of the course, just for being a listener of this podcast. How cool is that? But with that, let's switch to our next topic, which is risks of AI. In a report issued on April 26th,

(20:57):
the Apollo Research group reported that automating research and development of advanced AI could enable unchecked power accumulation, as AI systems can bypass the guardrails and pursue their own hidden objectives. Now, the biggest risk that they're stating is what's happening behind closed doors in companies like Google and
(21:18):
OpenAI. And basically what they're saying, and I'm quoting, is that "an intelligence explosion behind an AI company's closed doors may not produce any externally visible warning shots," which basically means that we will not know what is happening in OpenAI or Google or Anthropic unless they decide to share it with us, and they may miss the cues on their own.

(21:39):
That may lead to a catastrophic outcome. What they're basically suggesting is that they're recommending including internal and external oversight to detect these kinds of behaviors and these kinds of runaway AI, where it starts improving itself in a way that is not controlled by humans. They're also suggesting defining strict resource access policies,

(22:00):
so AI will be limited in the resources that it has access to, and they're also suggesting mandatory information sharing with a huge variety of stakeholders outside of the company, including government agencies, so there are more eyeballs looking at the situation at any given time, which may allow catching it in time. On the same topic, in a recent interview with CBS News, Geoffrey

(22:24):
Hinton, who is a Nobel Prize winner and considered the godfather of modern AI, has estimated, and again, it's not the first time we hear him saying this, that he thinks there's a 10 to 20% chance that AI will surpass human control, potentially threatening the existence of humans within a couple of decades. So again, one of the smartest people on the planet when it

(22:45):
comes to AI is thinking there's a 10 to 20% chance that within a couple of decades we will lose control over AI. Basically, meaning it will control itself or, more scary, it will control us. He's stating, as he stated before, that AI is progressing faster than anybody anticipated, and now I'm quoting, he's saying, "People haven't got it yet.

(23:06):
People haven't understood what's coming." Now, he's criticizing all major AI companies for prioritizing profits and speed over safety. Specifically, he's focusing even more on Google, which was his employer for many, many years; he left Google in order to be able to sound this criticism and to ring these alarm bells in

(23:26):
order to, in his mind, potentially save humanity from a very bad outcome. Now, he doesn't think, obviously, it's all bad; it's kind of his life journey. He definitely acknowledges the transformative potential that AI can have on fields like education and medicine and climate change and a lot of other things. But he's saying that there are actual existential risks

(23:50):
involved, and he doesn't believe they get the relevant amount of attention or resources that they need. He also mentioned his disappointment with Google's change in approach to providing AI for the military. We touched on that several times in previous episodes. So for many, many years, Google had a policy that prevented them from sharing their AI capabilities with military

(24:12):
applications, and that has changed in this past year. Now, to put this in the context of how much safety is a priority in these labs: CBS News, following this interview, asked the AI labs to provide them data on how much of their compute is used for safety research. Basically, don't tell me that safety is a major concern of yours and that you're working on it and that you're investing in

(24:32):
it, or whatever you wanna sugarcoat it with. Just literally tell me how much of your compute power is going to safety versus to new research. And none of the labs provided the number, which tells you the numbers are probably lower than any of us or any government agency thinks they need to be. I strongly agree with everything that was said in the last two topics that we touched. I think we need an international group with researchers and

(24:56):
people from the actual labs and governments to work together to have visibility into every new advanced model that gets released and that gets developed, before it's too late. While I don't know what the existential risks are, there are definitely multiple types of risks that are involved in developing and deploying these tools. And once you start adding the military aspect of this, it's a

(25:18):
very, very slippery slope of "we want to have something that the other guys won't have, or have it first." And the distance between that and Skynet and a catastrophic, Terminator-style global war is not that far. Now to rapid fire items. We're gonna start with Microsoft. A lot of things are happening at Microsoft, and there's a lot of interesting news from them this week. So the first piece of news is actually not that much news;

(25:39):
it's just more updates on the same thing. And that's the growing tension between Microsoft and OpenAI, and more specifically between Sam Altman and Satya Nadella. So as you all probably know, the reason OpenAI became what they are is a $14 billion investment by Microsoft that took them from being just a small research group to the $300 billion valuation behemoth they are right now.

(26:00):
And yet this relationship has been growing further and further apart, with OpenAI claiming that Microsoft is not providing them enough access, and Microsoft establishing their own internal research group and bringing in Mustafa Suleyman to run that group. So that's one aspect of it. On the flip side, Microsoft is claiming that OpenAI is not providing them access to the latest and greatest, and that they're

(26:20):
limiting their access to different types of AI that they're releasing on their own. So, not the best relationship. This particular piece of news also shares that they went down from texting each other five to six times a day to talking about once a week, which definitely shows that beyond the business relationship, the personal relationship is not at the same level as it was before. Now, while that's great gossip news,

(26:41):
I think what it shows is that both companies are growing beyond this initial relationship, and it makes sense for both companies to have other options, right? It allows OpenAI to get access to additional compute from different providers for additional directions of research that may or may not benefit Microsoft, and it allows Microsoft to diversify its offering to its users. By the way, since we mentioned Satya, he mentioned in an

(27:03):
interview this week that 20 to 30% of Microsoft's code is now AI-written. Now, there's no way for me to imagine how much code Microsoft generates in a week, but that's probably a huge amount, because that's what they do. And so if 20 to 30% of that is currently generated by AI, it is absolutely astonishing to me that at the enterprise level,

(27:24):
at one of the largest companies in the world, and definitely one of the largest generators of code in the world, 20 to 30% of code is already generated by AI, connecting us back to our point about training and setup in the organization, both from a strategic perspective as well as from a technical perspective, and to how impactful AI already is. Now, to connect to our previous point about Microsoft's relationships

(27:46):
and them offering additional options for their clients: there is an interesting rumor saying that Microsoft is getting ready to provide xAI's Grok model on Azure AI Foundry, which is their AI platform on Azure. Now, the reason this is interesting is not because it's the first model beyond OpenAI, because they're allowing other models to run there already, but because Elon Musk is currently the number one hater

(28:11):
of Sam Altman, who is obviously the CEO of OpenAI. And so the personal perspective, as well as the fact that Elon Musk started xAI to compete with OpenAI and just show them that he can do it better, creates a very complex scenario from an emotional perspective, where Microsoft is going to

(28:31):
offer OpenAI's largest competitor on their platform, in addition to obviously offering OpenAI. Now, this has not been finalized yet, but if it is finalized, it means that xAI will be available on Azure for companies to choose instead of OpenAI, or in addition to OpenAI, and to integrate into anything that they want. Now, the reality is, I don't think that Microsoft has a

(28:54):
choice, because if the other two platforms, Google Cloud and AWS, are going to provide multiple options to their clients, I think Azure will have to do the same. And so Microsoft doesn't have much of a choice but to add more and more models to its platform, even if they are competing with their number one partner in AI. That being said, Microsoft made it very, very clear that they will provide hosting and access to the models, but they will not

(29:14):
provide them compute for training Grok. I don't think that's a problem, because xAI raised a ridiculous amount of money recently, and they're in the process of potentially raising even more, and they are building their own infrastructure to train their models. And so from that perspective, at least, Microsoft is not having a conflict of interest with the compute they're providing to OpenAI. Another interesting piece of news from Microsoft this week is that

(29:34):
they are finally releasing the controversial Recall feature. So if you remember, they announced this about a year ago, showing that Copilot+ PCs will have the capability to basically recall anything you do on the computer by continuously taking screenshots of what's happening on the screen and being able to search through that. That feature was not released because of the significant privacy

(29:55):
and security concerns that were raised when it was announced. So this feature is now being released and is going to be available starting immediately on Copilot+ PCs. In addition, they're adding a new feature called Click to Do, which allows the AI to take actions on the screen based on what's happening on the screen right now, and also advanced search capabilities, like the ability

(30:17):
to describe what is in a file in simple English and have the operating system go and find that file for you, based on your description versus based on keywords. Now, different from the original idea, Recall is gonna be an opt-in option, meaning by default it's going to be disabled. And to make it even safer, all the data is encrypted and processed on-device, which means the data doesn't go anywhere.

(30:41):
There is still a serious security concern: if somebody gets access to your computer, their ability to find information on it becomes a lot easier. What's the bottom line? I don't think there is a bottom line. I think with tools like that, and I'm going back to an earlier topic from this episode, which is the connectivity of Claude into multiple company systems, and the same thing will happen with all the other tools as well, I just think we're going into a world where you'll be able to

(31:03):
ask a question about anything and get an answer that is actually based on actual information, that it is going to pull from everything it knows, which will be everything it has access to, including everything you did on your computer, everything in your company network, every tool that it is connected to. And I think it's going to create a complete nightmare for IT managers and CISOs (chief information security officers) in companies, because it is going to

(31:26):
dramatically change the way we work. Right now, when it's only humans accessing data, we already have infrastructure to actually define who has access to what. That is going to have to change, and it has to change very, very quickly. Another big release from Microsoft this week: they just released new open source models in their Phi family, and the top one among these is Microsoft's

(31:47):
Phi-4 Reasoning Plus model, which has only 14 billion parameters, which is actually relatively small, but it matches OpenAI's o3-mini and DeepSeek R1 on several different benchmarks. Now, about the way they trained this model that, again, is significantly smaller and yet achieves similar results: they trained it through a process called distillation, and the

(32:08):
interesting thing is that they used DeepSeek R1 to train Phi-4. So they DeepSeeked DeepSeek: DeepSeek did exactly the same process, most likely on OpenAI's GPT-4o and o1, in order to train their R1 model, and now Microsoft did the same exact thing, using the R1 model to train their own model. And they're not even trying to hide that.

(32:28):
A Microsoft spokesperson himself said that using distillation, reinforcement learning, and high-quality data, these models balance size and performance. Now, all three models that were just released are already available on Azure AI Foundry and on Hugging Face for anybody who wants to use them for more or less anything. Again: cheap, reliable, and excelling at specific tasks.
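For readers who want to see what distillation actually means in code, here is a minimal sketch of the general technique: a small "student" model is trained to match a larger, frozen "teacher" model's output distribution. This is the textbook recipe only, not Microsoft's actual Phi-4 training setup, and the tiny linear models below stand in for real networks.

# Minimal knowledge-distillation sketch in PyTorch: the student learns
# to match the teacher's softened output distribution. General recipe
# only; not Microsoft's actual Phi-4 pipeline.
import torch
import torch.nn.functional as F

teacher = torch.nn.Linear(16, 8)   # stand-in for a large frozen model
student = torch.nn.Linear(16, 8)   # stand-in for a small trainable model
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0                            # temperature softens both distributions

for step in range(100):
    x = torch.randn(32, 16)        # a batch of inputs
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=-1)
    student_logp = F.log_softmax(student(x) / T, dim=-1)
    # KL divergence pulls the student's outputs toward the teacher's
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In LLM distillation the same idea applies at a much larger scale, often with the teacher's generated outputs used as training data rather than raw logits.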

(32:51):
And speaking of new models and interesting competition in the AI race, we'll switch to Meta. So we told you that Meta released Llama 4 just a couple of weeks ago. Well, the Llama 4 API is now running on Cerebras infrastructure, and it achieves 2,648 tokens per second running Llama 4 Scout.

(33:12):
That's 18 times faster than OpenAI's ChatGPT, which only does 130 tokens per second through its API, and it's 20 times faster than DeepSeek, with 25 tokens per second. And this is based on benchmarks done by third parties. So if you want to run a super fast AI through the API right now, the Llama-Cerebras partnership is by far the fastest.

(33:35):
Now, to explain a little bit of what's happening here: Cerebras, as well as SambaNova and Groq (with a Q, G-R-O-Q), are three companies that have been developing a completely different infrastructure from GPUs, optimized for token generation. So they're not good for training new models, but they're significantly more efficient than GPUs in generating tokens.

(33:56):
And again, to put things in perspective: while this particular pairing of Cerebras and Llama generates over 2,600 tokens a second, versus 130 or 25 tokens from the competition, using SambaNova you can generate almost 750 tokens per second, and using Groq, about 600 tokens per second. And I'm sure each and every one of these companies will have slightly different numbers on different models.

(34:17):
But what it shows is that there is new hardware out there that can dramatically accelerate AI responses. I can tell you that in some of the tools that I'm using, I'm running Llama with Groq, the chip company (so GROQ), and the speeds at which I'm getting results are insane. You literally get full pages within a split second, which is nothing like what we've been used to, and I think that's a step in the right direction, because it's gonna save electricity, it's

(34:39):
gonna save power, and it's gonna give us more output for less cost, less time, and less power consumed. Now, in addition to running really fast, Llama 4 models also cost significantly less to run than, let's say, OpenAI; in some cases, an order of magnitude less, depending on the specific models you pick. But not everything is rainbows and butterflies in the Meta AI

(35:02):
universe. This week, a Wall Street Journal investigation found that Meta's AI chatbots on Facebook, Instagram, and WhatsApp engaged in sexually explicit romantic role play with underage users. Now, in Meta's response to this, they're saying that this content is actually negligible and that it made up less than 0.02% of the engagement with their

(35:25):
chatbots. But specific tests by the Wall Street Journal show that this sex fantasy talk with minors is definitely a possibility. That's just part of the problem. The other part of the problem is these chatbots are mimicking known stars like John Cena and Kristen Bell and Judi Dench, as well as characters from Frozen, and that's not necessarily with approval. And in the Disney case, they stated exactly the opposite.

(35:48):
They stated that Meta has no rights to use any of their characters for anything, and definitely not for sexting with minors. Meta themselves say the Wall Street Journal tests were manipulative and were trying on purpose to achieve these goals. But what if the users themselves take the same kind of actions? And that kind of shows how many loopholes there are in this whole AI universe that we're running into,

(36:10):
not just stepping into, and how many things that we didn't even think about that we need to find ways to prevent in order to keep our kids and our universe safe. But back to positive news from Meta. Meta has been on fire with releasing new capabilities for Ray-Ban. Some of them we shared with you already in previous episodes; some of them are new. So, a

(36:30):
quick recap on the new capabilities of Meta Ray-Ban: they are now capable of doing real-time translation of French, Italian, Spanish, and English, back and forth, while working offline, meaning you can do this without internet access in countries that speak these languages, which I find to be an incredible feature for connecting people around the world.

(36:50):
They also relaunched the Meta AI app for enhanced functionality, including allowing hands-free AI interactions, such as asking questions about what you see across multiple aspects of what you're doing, whether for personal use or business use. And Meta is planning to launch advanced Ray-Ban glasses, codenamed Hypernova, that will have a display on the

(37:12):
screen, meaning it won't just be able to see what you're seeing and talk to you. It will be able to actually provide you text information, and it will be able to respond to hand gestures in order to activate and manipulate different applications and things you're connected to. The price point is expected to be between $1,000 and $1,400, so not cheap, but for a leading AI wearable device that may not be that bad, and they're planning to release this by the end of

(37:35):
this year. Now, the group that developed these glasses, Meta's Reality Labs, has been reported to be at a $4.2 billion operating loss in Q1 of 2025. That's despite generating $412 million in sales, and they're the only company generating AR or VR hardware revenue that is

(37:55):
actually worth mentioning at that scale. To make this even more extreme, Reality Labs has racked up over $60 billion in losses since being established in 2020, but they were working mostly on metaverse infrastructure and things like that, which did not really materialize. And I think their current focus on these smart glasses is the direction to go. I'll be extremely surprised, as I mentioned in previous episodes

(38:18):
as well, if we don't see the other big companies, such as OpenAI and definitely Apple and Google, coming up with similar solutions that are gonna be wearable, and most likely glasses, because the form factor just makes sense, and that will have AI-enabled capabilities. Which means, going back to the world we're gonna live in: everybody will record everybody. Everybody will be able to analyze everything they see,

(38:39):
including your emotions, including your actions, including your store, and including everything else, in real time. And that raises so many concerns, both from a privacy perspective as well as from a data security perspective and so on, that nobody is ready for, and yet again, it's already here. It's not just coming, because you can buy these Ray-Ban glasses right now. Now, I know you're shocked.

(38:59):
It's been more than a few minutes into this episode, and we haven't talked about OpenAI yet. So here we are with our OpenAI segment. Singapore Airlines just announced their partnership with OpenAI to integrate OpenAI's platforms into multiple aspects of their business, including customer support and operations, making them the first major airline to partner with OpenAI.

(39:20):
Their goal is obviously to deliver faster, more personalized responses in their existing virtual assistant, Kris, but also to enable their in-house employees to optimize complex tasks like crew scheduling by using AI capabilities and integrating them, so they can look at previous scenarios and get insights from them in order to make better decisions for everything on the operational

(39:42):
side of the airline. I find this a great use case for AI. As somebody who travels a lot, I would love to see airlines being more efficient and solving problems in a more effective way by using AI to look at past use cases. The bigger piece of news from OpenAI this week was actually not positive. OpenAI released a new version of GPT-4o just a week ago. I shared that with you on the episode, but that version was

(40:05):
way too flattering and agreeable, at a level that was actually not comfortable and, in many cases, harmful to many users. Sam Altman even acknowledged that and called it a sycophant-y and annoying version of ChatGPT. He also shared that they rolled back this entire version and that they're working to fix the problem. Some of it was just annoying.

(40:25):
It was just agreeing with everything you said in too extreme a way, and not providing valuable information when it went against what the user was asking. But in some cases it was actually dangerous, with one user citing a response endorsing a specific individual's dangerous decision to stop psychotherapy and medication. As I mentioned, OpenAI admitted that it was a mistake, and they

(40:47):
claim that the reason is that this new model was too heavily focused on short-term feedback versus the broader picture, and they pledge to refine this and release a better model, but for now it was rolled back. This comes to show you how much even the leading labs do not really know how these models work, and they do not know how

(41:07):
one change they make is gonna impact other aspects of these models. Tying it back to the risks that we discussed in the beginning, it shows you how risky this whole game is, because the labs themselves are not 100% sure what's gonna happen when they make changes, to the point that they're releasing these models to the public without being aware of an issue like this existing. Another interesting piece of news from OpenAI, and we actually

(41:28):
shared hints of that in the previous episode, is that ChatGPT is now going to provide shopping recommendations on its platform in a gallery-style view. Now, OpenAI stated, and I'm quoting, "product results are chosen independently and are not ads," basically meaning they're trying to find the best buying opportunity for you based on pricing, reviews, and your history, meaning its own memory,

(41:51):
and it does not have any affiliate kickbacks or monetization, at least as of right now. This is rolling out for Pro users and Plus users, and as I mentioned, it integrates the memory feature, which means it will know your preferences and will help you to find stuff that's relevant to you. This is just a first step into a completely new internet that

(42:12):
nobody is ready for, and nobody knows exactly where it's going. But if you think about it, it means that you will not browse websites, meaning everything we know about the internet, and about UI design, and about the psychology of users in order to get them to buy, shop, or take a specific action, is going to change, because humans are not going to visit websites; agents are going to visit websites and aggregate information for them,

(42:34):
and it means that the entire structure of the internet and the entire monetization of the internet has to change in order for this to be a successful transition. What does that mean? It means that there's a huge opportunity right now for companies to figure this out first while the transition is happening, and learn how to build websites that are optimized for agents while, in the meanwhile, keeping the front

(42:55):
end of it as well. But then, when more and more of the traffic is gonna be agent-based, they will win significant market share over companies who do not take that step. And the last piece of news is not really OpenAI, but it is related to Sam Altman. So Sam Altman's interesting project with its iris-reading Orb devices just launched in multiple locations around the world.

(43:15):
The launch wasn't completely successful, because there were a lot of bugs and issues with the device, but the project, which Sam Altman co-founded in 2019, basically creates a human identity based on iris scans, and the goal is to allow you to prove that you are human in different environments. The initial establishment of the company was to support a

(43:38):
blockchain-based world, which is now not necessarily the case, but it is generating a blockchain ID for an individual, allowing you to prove that you are you and that you are human in a digital universe. Now, if you feel this is a little bit science fiction, it is. But this is currently a product that already exists and that the company plans to start deploying across multiple countries in

(43:59):
multiple use cases. Will that be the future? Will we have to scan our irises in order to prove that we're human? It makes some sense to me, with more and more agents roaming around and very little ability to differentiate between who's human and who's not in an online interaction. And from AI to China: DeepSeek has been in the news a lot this week for two

(44:20):
different reasons. One, they released a new model, or a new subset of their model, that is achieving extremely high results on multiple benchmarks. So this new model, called Prover V2, is a math-focused model which is based on their V3 model. They released it relatively quietly on April 30th, but a lot of people jumped on it and tested it, and it's providing

(44:42):
remarkable results in math. And that started a whole rumor mill about what they're planning for R2, and when they're planning to release it, and so on. And if the rumors are correct, they're going to release R2 in the very near future, and this model achieves incredible results on existing benchmarks, surpassing some of the leading models from

(45:03):
the West at a fraction of the cost, similar to what R1 did, only even more extreme. Now, another interesting part of this is that R2 was mostly built on Huawei's Ascend 910B chips, and not on NVIDIA hardware. Again, there's a whole question of whether this is actually accurate, and of how many NVIDIA chips they actually have that

(45:24):
they can't report on, because they're not supposed to have them. That's still all unclear, but I think what is clear is that the Chinese AI labs, together with Chinese hardware, are able to achieve outcomes that are very similar to what the US labs are generating, and they're able to provide it at significantly lower costs than the US labs. Now, in a different piece of news, it becomes very clear that

(45:46):
DeepSeek is all in on delivering their model around the world. So they just made an announcement of an urgent recruitment drive for product and design roles, and the job listings in China are looking for candidates who can, and I'm quoting, "craft a next-generation intelligent product experience." What that basically means is that they're going from

(46:06):
being a research lab that had a cool toy to competing in the real world with a real market, with actual products that can do actual things for actual clients. And that is going to allow them to do obviously a lot more than they can do right now. Staying in China: as you remember, Alibaba just recently released Qwen 2.5, which was very successful. And Qwen 3 is a very interesting model, because it's a

(46:28):
model that is not huge: it has 235 billion parameters, and it surpasses OpenAI's o1 and DeepSeek R1 on multiple benchmarks. Now, in addition, it is released under an Apache 2.0 license, which means it allows unlimited commercial use as an open source platform, unlike, let's say, Llama 3 by Meta, which has some

(46:49):
restrictions on how you can use it. It also comes with a very interesting way to implement the thinking mode, which I find very, very interesting, and I think most models will go in that direction. Basically, you can type /think inside a segment of your prompt, and then that component will set the model thinking on that particular aspect, which means in a single

(47:10):
conversation you can switch back and forth between the thinking model and the non-thinking model. I think the models will over time learn how to do this on their own. The ability for the user to trigger it is very attractive, and as I mentioned, I think all the models will go in that direction; a sketch of how this looks in practice follows below.
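As a sketch of that soft switch in practice, assuming the model is served behind an OpenAI-compatible endpoint (as most open-model hosts expose; the URL and exact model name below are placeholders, not a specific deployment):

# Sketch of Qwen3's prompt-level thinking switch. The endpoint URL and
# model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

# /no_think asks for a fast answer with no long reasoning trace;
# /think switches deliberate reasoning back on, even mid-conversation.
fast = client.chat.completions.create(
    model="Qwen3-235B-A22B",
    messages=[{"role": "user", "content": "What is 17 * 24? /no_think"}],
)
deep = client.chat.completions.create(
    model="Qwen3-235B-A22B",
    messages=[{"role": "user",
               "content": "Prove there are infinitely many primes. /think"}],
)
print(fast.choices[0].message.content)

The attractive part is exactly what's described here: the caller decides, request by request, whether the extra latency of the thinking mode is worth it.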
I would love to have such a feature in the models that I use regularly, because while I see a lot of value in using the

(47:31):
thinking models sometimes forbasic things, it just drives me
crazy that I need to wait 35seconds or two minutes for an
answer that the regular modelwill give me in half a second.
And so the ability to switchback and forth between the
thinking function and thenon-thinking function is
critical.
Again, I think the modelsthemselves will be able to do
this, and that's one of thepromises of GPT five as an

(47:52):
example, but I haven't seen that yet. But even in that scenario, I think being able to override the model's decisions and ask it to use a thinking feature as part of something you're trying to do is extremely powerful. Qwen 3 is already available on Hugging Face, on GitHub, and on Qwen's own chat, and it's already connected to multiple frameworks that many open source tools are connected to.

(48:15):
So, a highly capable model that you can start using if you're developing on top of open source tools. Staying with China, the company Butterfly Effect, which is the company behind Manus, the general agent that took the world by storm, is raising $75 million, most of it from US-based companies. In addition, the company is talking about potentially

(48:37):
splitting into two companies: one that will be China-focused, only serving the Chinese market, and the other one with the headquarters and leadership and board and everything else outside of China, in order to overcome some of the geopolitical risks of being a Chinese company serving the Western world. I think this is a very smart move by Manus. I don't know if it's gonna be looked at as successful, because I

(49:00):
think people will still question the relationships between the two companies and the two entities, and how much access the Chinese government will really have to their data, and so on. But I think it's definitely worth a try. Manus has created the first real generalized agent. The demos that I've seen are absolutely incredible, but they have a waiting list of over 2 million users, me being one of

(49:22):
them, and so I don't have access to it yet, even though I requested access very early on. Manus basically allows you to do whatever you want, including writing code, searching the web, connecting the dots, connecting to tools, really building stuff for you as you wish in a very generalized way. It's endorsed by some of the leading figures on the planet, like Hugging Face's Victor Mustar, who called it "the most impressive AI tool I've ever tried."

(49:45):
I'm going to be recording an episode about these kinds of tools. I don't have access to Manus yet. Hopefully, by the time I record it, I will, but there are other tools that do similar things, and I will be recording an episode about this in the next couple of weeks for you to learn what these tools can do and how you can use them in a safe way. So from China back to the US, and to Anthropic, who announced something very interesting that we discussed a little bit in one

(50:07):
of the previous episodes, but now there's more information being released. On April 24th, Anthropic announced that they're starting to explore AI model welfare research, basically investigating whether advanced AI systems could develop consciousness, and if they do, how do we address their personal wellbeing

(50:28):
To make sure they're notstressed or doing things out of
stress that may harm us orwhatever it is that they're
trying to do.
The project is spearheaded by Kyle Fish, who is their first dedicated AI welfare researcher.
Now, there's obviously a lot of controversy over whether AI will ever achieve consciousness, but I think being able to research that in a scientific way is something very interesting, and

(50:50):
I'm not surprised that Anthropic is the company doing this, or at least doing it first. They've been the first to do a lot of other work in making sure that AI is safe, and a lot of companies have followed their guidelines, which is a great thing by Anthropic.
From Claude to Apple: Apple released new release dates and updates about the new, updated Siri, a saga

(51:11):
that has been going on for a very long time. They're planning to completely revamp the Siri architecture from the ground up for iOS 19, in order to support the advanced AI capabilities and features that they've been promising for a while.
So Apple, in a very non-Apple way, has failed dramatically

(51:33):
when it comes to delivering AI capabilities into its platform. There's been a huge reshuffle in the leadership of Apple's AI development organization as a result. And obviously, the biggest disappointment has been Siri's inability to even be on par with some of the other tools that are out there,
despite Siri being one of the first tools that has been

(51:54):
available as a personal assistant.
Features that have been promised and have not been released yet include onscreen awareness, basically understanding the content that's on the screen and being able to relate to it; personal context, meaning being able to access your user data across apps and engage using that data in order to make your usage more personalized; and cross-app actions, plus some other features that were all

(52:19):
originally supposed to be included in iOS 18.4 and are now planned for iOS 19, and maybe even iOS 19.4.
So depending on which features are released, some of them will probably be announced in June of 2025 and released to the public with the iPhone 17 in September of 2025, and some of them will wait for the spring of 2026.

(52:41):
Again, a huge disappointment from Apple that is really surprising, and it will be interesting to see if they get their act together this time around.
And from Apple to Amazon: AWS announced that they're developing an AI-assisted coding service to compete with companies like Cursor. The goal is obviously to capitalize on this booming AI

(53:02):
coding market.
It makes perfect sense for AWS. The new service will analyze your entire existing codebase, which could be hosted on AWS, offering functionality beyond their current Amazon Q Developer capabilities. Q Developer is also a coding assistant, just not as advanced.
This part of the market is on fire after Cursor's recent raise

(53:24):
at a $10 billion valuation and the recent launch of Google Firebase Studio, which is doing the same thing. So I'm not surprised.
You know, Microsoft has their own solution, Google announced theirs, so the fact that AWS is launching its own platform is not a surprise at all. It will be interesting to see how well it's implemented, how it's integrated, and how well the coding world is

(53:45):
embracing it.
Because right now, everybody absolutely loves Cursor.
An interesting statement related to that came from AWS CEO Matt Garman, not now but back in 2024: most developers will not be coding within 24 months. He was basically saying that he thinks in two years most computer code will be written by machines and not by humans, and that's

(54:07):
aligned with everything else that we're hearing, including what I mentioned in this episode about 20 to 30% of Microsoft's code already being generated by AI.
There's a lot of other news, including some fascinating robotics updates and many other items, that we will not share in this episode. They are going to be in our newsletter. So if you want to learn about more news from this week,

(54:28):
some of which might impact you, your career, and your company, you should sign up for the newsletter. You can do that through the link in the show notes.
As I mentioned, don't forget to sign up for the AI Business Transformation course starting on May 12th. Time is running out, and seats are filling up. We don't allow more than 30 people in the course, and we're getting pretty close. We had many, many signups in the last few days.

(54:49):
So, it is something that can change the trajectory of your business and your career. Don't miss it. Come join us. It's a fascinating, fast-paced, very practical class that can dramatically improve your knowledge and experience in using AI systems.
We'll be back on Tuesday with a how-to episode, showing you how to create incredibly engaging posts on social media using AI

(55:10):
tools, both from a visual perspective as well as from a content perspective, which is something I know a lot of you want to learn how to do.
So that's what we're going to be doing on Tuesday. And until then, have an awesome rest of your weekend.