
July 19, 2025 • 27 mins

👉 Fill out the listener survey - https://services.multiplai.ai/lai-survey
👉 Learn more about the AI Business Transformation Course starting August 11 — spots are limited - http://multiplai.ai/ai-course/

Is your business ready for an AI that can act — not just answer?

This week, OpenAI dropped a bombshell: a powerful new agentic AI that can think, browse, analyze, and execute — all without you lifting a finger. And they're not alone. China's Moonshot AI also launched Kimi K2, an open-source model that’s not just fast — it’s freakishly cheap and seriously capable.

So what does this agentic evolution mean for your company, your job, and your future?

Host Isar Meitis breaks his own vacation to deliver a special, can’t-miss solo episode where he dissects the monumental AI moves of the week — and how they might quietly rewrite the rules of modern business.

In this session, you’ll discover:

  • The game-changing capabilities of OpenAI’s new Agent and how it combines browsing, coding, analyzing, and executing.
  • How Kimi K2 is shaking up the market with 1/100th the cost of top-tier models — and still competing on performance.
  • Why knowing how to use AI tools properly is now the biggest competitive edge for businesses and individuals.
  • The real difference between agents and regular chatbots — and why it matters more than you think.
  • How agentic AI could disrupt e-commerce, office tools, and even coding teams.
  • The widening AI literacy gap — and why training your team isn’t optional anymore.
  • Real stats from studies showing that untrained AI users can be less productive than non-users.


About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hello and welcome to the Leveraging AI Podcast, the

(00:02):
podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and I'm actually on vacation right now, and I was not planning to record an episode this week. It is my dad's 80th birthday and the whole family got together, the nieces and nephews and cousins and my kids and everybody, and it's a lot of fun and it's really, really great

(00:22):
and I'm very excited for him and for everybody else for getting together. But because of that, I was not planning to record an episode today. And yet a few really big things have happened, and I could not leave you in the dark when really big things are happening. And so I decided to record at least a short episode. Today we're gonna focus on the really big things and dive into what they mean. We're going to touch on how the world is turning agentic,

(00:45):
and what does that mean for you and your business? What does it mean for you and your life? What does it mean for the world? And so there's a lot to talk about, even though we're gonna dive into only one or two topics. So let's do this. It all started early in the week when the Chinese startup Moonshot AI released Kimi K2, which is an open source model

(01:07):
with 1 trillion parameters in the backend and 32 billion active parameters, and it is a really good model. This is an open source model, so it's scoring higher than most or all open source models across multiple benchmarks, and it's also scoring better than some of the leading closed source Western hemisphere models. In addition, it is an agentic model, and on top of all of that,

(01:28):
it is really fast and really cheap. So the trick here is a few things. First of all, it's a mixture-of-experts style model, which most of the recent models are, which means it has different areas of the model that specialize in different things. You can train each and every one of those experts separately, and a router inside the model decides which of them to call for any given request.
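To make the mixture-of-experts idea a bit more concrete, here is a minimal, hypothetical sketch of the routing step in Python. Everything in it (the toy dimensions, the random weights, the top-k value) is invented for illustration and is not Kimi K2's actual configuration; the point is only that a router scores all experts and activates just a few per token, which is how a model can have 1 trillion total parameters but only 32 billion active ones.

```python
# A minimal, hypothetical sketch of mixture-of-experts routing (illustration only).
# The dimensions, weights, and top-k value are invented for this example and are
# NOT Kimi K2's real configuration.
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 16     # toy hidden size
N_EXPERTS = 8    # total experts (most of the parameters live here)
TOP_K = 2        # experts activated per token (the "active parameters")

# Each expert is just a small feed-forward matrix in this sketch.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.1 for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.1  # learned in real models

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router_w                    # one routing score per expert
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                     # softmax over experts
    top = np.argsort(probs)[-TOP_K:]         # only k experts actually run
    weights = probs[top] / probs[top].sum()  # renormalize over the chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
print(moe_layer(token).shape)  # (16,) -- output size unchanged, but only 2 of 8 experts ran
```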
In addition, they have developed a new way to train the model.

(01:49):
And they're calling it the MuonClip optimizer. Now, this may sound like Chinese, pun intended, but what they're claiming, and I'm quoting now, is that it enables stable training of a trillion-parameter model with zero training instability. What is training instability? Well, training instability has been maybe one of the biggest issues with training large models. What basically happens

(02:09):
is that it creates issues with really large training runs, forcing companies to either restart the run or implement very costly safety measures and accept, in many cases, suboptimal performance in order to avoid the model crashing while it's being trained. Either way, it's a very big tax on training large language models, and with this new methodology they're able to avoid that altogether, which allows them to

(02:31):
train the model much cheaper than any other model of its size and of its capabilities, which in return makes the model itself much cheaper. So what does much cheaper mean? Well, if you use it through the API, it's 15 cents per million input tokens and two and a half dollars per million output tokens. If you compare that to GPT-4.1: GPT-4.1 is $2 per million input tokens, so more than 10x, and $8 per million

(02:56):
output tokens, which is more than 3x. If you compare that to Claude 4 Opus, it's $15 per million input tokens, that's a hundred x more expensive, and $75 per million output tokens, which is 30x more expensive.
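To make the price gap concrete, here is a small worked comparison using the per-million-token prices quoted above. Treat the numbers as a snapshot from the episode, since API pricing changes frequently.

```python
# Back-of-the-envelope comparison of the per-million-token API prices quoted in
# the episode. Prices change often, so treat these numbers as a snapshot.
PRICES = {                    # (input $/1M tokens, output $/1M tokens)
    "Kimi K2":       (0.15, 2.50),
    "GPT-4.1":       (2.00, 8.00),
    "Claude 4 Opus": (15.00, 75.00),
}

kimi_in, kimi_out = PRICES["Kimi K2"]
for model, (p_in, p_out) in PRICES.items():
    print(f"{model:13s}  input {p_in / kimi_in:6.1f}x   output {p_out / kimi_out:5.1f}x   vs. Kimi K2")
# GPT-4.1:       ~13.3x on input, ~3.2x on output
# Claude 4 Opus: ~100x on input,  ~30x on output
```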
Now, is it as good as these models? Maybe. Maybe not. Either way, it's close, and it's much, much cheaper. Now, the way I always go to measure things is how much people are actually

(03:18):
using it, especially through the API. So I went and checked one of my favorite API tools, which is called OpenRouter. It allows you to connect to one API and, through that, get access to more or less every model on the planet. They take a little bit off the top, but you can do one implementation and then get access to all the different models.
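For listeners who want to try the "one API, many models" pattern described here, this is a hedged sketch using OpenRouter's OpenAI-compatible endpoint with the official openai Python SDK. The model slug "moonshotai/kimi-k2" is an assumption (check OpenRouter's model list for the exact identifier), and you would need your own OPENROUTER_API_KEY.

```python
# A hedged sketch of the "one API, many models" pattern, using OpenRouter's
# OpenAI-compatible endpoint with the official openai Python SDK.
# Assumptions: the model slug "moonshotai/kimi-k2" (check OpenRouter's model list
# for the exact identifier) and an OPENROUTER_API_KEY environment variable.
import os
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible API
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="moonshotai/kimi-k2",  # assumed slug; swap in any other model to compare
    messages=[{"role": "user", "content": "In two sentences, what is a mixture-of-experts model?"}],
)
print(response.choices[0].message.content)
```

The appeal of this setup is that switching to a different model is a one-line change to the model name, which is exactly why cost-effective models can climb usage rankings so quickly.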
I've been using it for a very long time, and it has a dashboard that shows you how much people are using

(03:40):
different models this week, and on the programming side, Kimi K2 is now number five, meaning more developers are using it through the API this past week than Claude 3.7 Sonnet, Claude 4 Opus, Gemini 2.5 Flash Preview, DeepSeek V3, and GPT-4.1. All are very capable models. The only models ahead of it are Claude Sonnet 4, Gemini

(04:01):
2.5 Pro, Gemini 2.5 Flash, and Grok 4. That's it. Now, does that mean it's better than all these other models? No, it means it's more cost effective than all these other models, which is what really matters. It means it's good enough for the tasks these developers are using it for, at a fraction of the cost, which is an important part of the game when you're building applications around APIs from different large language models.

(04:23):
Now, in addition to all of that, it is an agentic model, meaning it knows how to autonomously use different tools, define its own instructions, and pave a path to complete the goal that you set for it, whether it's browsing the internet, writing and executing code, et cetera, et cetera. It's a very capable agentic tool for a relatively small cost. So I was very excited to report about this, but I'm like, you

(04:44):
know what? This could wait maybe another week. Nothing will happen. I will report about it a week later. But then, towards the end of the week, something bigger happened: OpenAI introduced their version of the same thing. So every time I reported about these generic agents that can do a lot of stuff based on instructions you give them, like Genspark and Manus, I had several different episodes where I talked about them.

(05:06):
Specifically, if you wanna check out one episode where we talked a lot about these tools and how to run them safely, you should check out episode 196. It was labeled How to Safely Run Powerful AI Agents like Manus and Genspark with No Risk. But in that episode and in many other episodes, I said the same thing. I said, these tools are incredible. They are the future, or the present if you're a geek like me, and they're changing everything we

(05:26):
know, because they're significantly more powerful and capable, and knowing very little, you can generate a lot. But what I said is that a lot of these tools are (a) more geeky and (b) a little riskier, and all of that is going to change if OpenAI issues their own agentic model that does the same thing, because OpenAI has the trust of about 800 million weekly users right now. With all the respect to Manus and

(05:48):
Genspark and other tools like this, they probably all combined drive less than 10%, maybe less than 5%, of the traffic that OpenAI gets on a single day. And so that event happened on the 16th. On the 16th, OpenAI announced that they're releasing OpenAI Agent, which is their version of a generic agent that can do a

(06:09):
lot of things. And before we dive into what it can do, let's talk about what it is. What OpenAI did is they took several different capabilities that they've developed before and combined them together into a very powerful tool that on its own can decide which of those capabilities to use. It's a combination of Deep Research, Operator, data analysis, and a coder, all combined into one agent tool.

(06:31):
And what this tool knows how to do is research stuff online in its own little browser, which is actually really cool. So the approach they took is, instead of having it use your browser, which generates a lot of risks because, as an example, your browser usually has access to all your saved passwords and sometimes credit cards and so on, it has its own little browser within the OpenAI

(06:51):
interface that pops up, and then it runs things within that browser, so it can browse the web and research the web in similar ways to Deep Research. It can also operate webpages like Operator does, meaning it can click on things, fill out forms, et cetera, which Deep Research does not know how to do. So just this combination on its own is extremely powerful. But in addition to that, it knows how to analyze data. So with the data that it brings from these different sources,

(07:13):
it can write Python code, put it in spreadsheets, and analyze it in multiple ways. That gives it an even bigger benefit. And on top of that, it knows how to write code, and it has access to a terminal and even code execution. So the combination of all these things makes it an extremely powerful tool for completing more or less every task that you can imagine, because it can do the research,

(07:34):
it can figure out what it needs to do even deeper, it can define its own process, it can write code, it can analyze data, and it can do all these things very, very quickly. A few additional cool things that they added: one is the ability to interrupt the model in the middle of work. So you say something and then you watch the model doing its thing, and that gives you an idea, or you suddenly think about something you forgot to add. You can add it while the model is working on the thing that

(07:55):
it's working on, and it will take that into account. It's very conversational, meaning it will stop and ask you questions if it's not sure about stuff, just like an employee hopefully would. And it can connect to existing data sources that you currently have your ChatGPT account connected to, such as Gmail or SharePoint or Google Drive and so on, which makes it even more powerful.

(08:16):
Now, also, because it can write code and because it knows how to analyze data, it can create spreadsheets that can be exported to Excel or Google Sheets, and it knows how to create PowerPoint presentations, including generating the images for the presentations. And so what we are getting is an extremely powerful tool, similar to Manus and Genspark and these kinds of tools, only it's coming from ChatGPT.

(08:36):
ChatGPT comes with a lot more trust with the population. It definitely has a bigger footprint and distribution with the broader population, meaning we're gonna have more and more and more people using really advanced agentic tools. Now, the new agent capability is going to be rolled out to everybody, including Pro, Plus, and Teams users. The way it works is, just like you pick all the other modes,

(08:58):
like Deep Research, there's just an agent mode as part of that in the dropdown menu on the left of your prompt box. I still don't have access to it. I have the Plus license, but I assume it's just rolling out, and just like everything else with ChatGPT, it will take a few days and everybody will get access to it. There are very different limits: you get 400 runs of this tool on the Pro license, and you get 40

(09:18):
on the Plus and Teams licenses. But to be fair, 40 is more than one a day, which for the average user should be way more than enough. And if you need more, that means that paying the $200 a month for the Pro version makes perfect sense for you, because you are using this a lot more. Now, what can you do with this? You can do the examples they gave in the actual launch, and I highly recommend watching the video. We're gonna drop a link to that in the show notes.

(09:38):
They've shown stuff like doing research for shopping and planning for a wedding. They also showed an example of how to allow the tool to measure itself and bring information about how well it's doing compared to other models, and you can also do a lot of work-related stuff with it, obviously, like market research, scheduling, analyzing resources, deploying different things, preparing for presentations, et

(09:59):
cetera, et cetera. There are probably thousands of business use cases where this tool will be extremely helpful. Now, before I tell you what I think about it and where I think this is all going, or what the impact of that is, I wanna share one more aspect, which is rumors that are coming from several different reliable sources right now that OpenAI is planning to take this to the next step and really build around it

(10:19):
a tool that has a suite of workspace tools, just like Microsoft 365 or Google's G Suite, which means they are going straight after the main driver of business for two of the most successful software companies ever. And they're literally going after their bread and butter, the things we use every single day. Now, are they planning to replace the office suite, or are they

(10:40):
planning just to complement it? I'm not exactly sure. I think time will tell. But it is very clear to me that if I had the choice, if there were really solid tools within ChatGPT that are fully integrated with all the other things that ChatGPT does, I would reconsider my usage of Google's suite. Now, will it replace everything? Probably not in the beginning. Can it replace everything over time?

(11:01):
Absolutely.
Can it do it better than what these tools are doing right now? Right now, it seems that that's the case. Whether Microsoft and Google catch up is a very big question. From what it seems right now, Microsoft is doing a pretty poor job of implementing ChatGPT within its workspace as Copilot, and Google, while they're doing better than Copilot, they're still not doing great. And I've said that multiple times on this show.

(11:23):
I think that both these companies have an incredible opportunity. I thought they would capitalize on this opportunity before the end of 2024. I was obviously wrong, but they need to get their act together and bring together a model that actually looks into everything in their ecosystem. I don't want a Gemini for Slides and a Gemini for Sheets and a Gemini for Docs and a Gemini for Gmail.

(11:43):
And the same thing with Copilot. I want just one Gemini that connects to all these tools, that knows everything that I'm doing and has access to all the information within that universe, whether it's my G Suite or my Microsoft environment, including everything that comes with it, whether it's the Microsoft 365 Office suite, whether it's SharePoint, whether it's Dynamics 365, et cetera. Literally everything Microsoft. I want it to know, and I want it to understand, which of the tools to

(12:05):
use when in order to be most helpful to me. Because that is how they're going to win against OpenAI. And right now, it seems that OpenAI is doing it to them, because OpenAI now has connectors to many of these environments, and you can turn them on and off in order to prevent it from going to the internet and focus it on wherever it should get the data. And now they have these agentic tools that can generate outputs that compete directly with the outputs that are generated by

(12:28):
the Office suite or G Suite, and so I think this is a serious wake-up call for Microsoft and Google, and I'm very curious to see how quickly they respond and how well they respond to this very big threat. Now, if you think that's the last component, there are even more rumors that OpenAI plans to integrate a payment checkout system straight into ChatGPT. That will lead to several different things.

(12:50):
The first thing that it will lead to is that you can use the agent tool to do everything that you want it to do: to go and research a specific topic, to compare different options, to pick the right option, and then to actually go and purchase that option for you. That could be a trip, this could be clothing, this could be food, this could be booking a place at a restaurant. It could be anything you can imagine, including doing the checkout for you. Now, what OpenAI shared in their launch is that when it comes to

(13:11):
payments, you can decide to put your payment tool straight into ChatGPT, or you can ask it just to send you the link to do the checkout on your own. I think over time, as we give it more trust, it's gonna be a no-brainer. Just like today we're used to saving our credit cards on Google or other sources, we will probably do the same with ChatGPT, which means you may wanna see what it's about to buy for you, but once you trust

(13:32):
it completely, maybe that's even gonna be redundant, and you're just gonna allow it to do your shopping for you across the board. This kills many, if not all, or at least most, e-commerce websites. So if I were Amazon, I would be thinking very, very hard right now about how to counter this new threat. If I am Shopify? Well, Shopify made the right move. They've done a partnership with ChatGPT, and that's gonna be the

(13:54):
first big partner where you'll be able to go and shop things across all the Shopify stores, which makes perfect sense for Shopify, makes perfect sense for Shopify users, makes perfect sense for ChatGPT, and makes perfect sense for OpenAI. Now, in addition to providing a great service, OpenAI will take a cut off the top. So they will take a few percentage points out of every transaction that happens, which will give them another very

(14:15):
significant potential revenue stream. Again, think about 800 million weekly active users, and think about the opportunity to convert all of them to shoppers as well, because this will help you shop across multiple platforms, find the best price, compare different options, read the reviews, and basically find the best option for you. This is way better than any other option out there today, which means more and more people are going to do it, which means

(14:35):
less and less traffic to traditional e-commerce websites. So why is this so important, and why did I decide to step away from my entire family to record this episode? This is a complete game changer from my perspective. The release of an agent by ChatGPT is, as I said all along, a new ChatGPT moment, meaning it's as big and as important as the release of the original ChatGPT, because it

(14:58):
changes the way we interact with computers and with the data around us. It allows us to do significantly more with significantly less effort. And if you still don't understand the difference between that and a regular chat with ChatGPT: in a regular chat with ChatGPT, you have to give it very specific instructions on what it needs to do, step by step, one by one, monitor what it's doing, and correct it as it's going. Also, when it needs access to a tool, if it needs to write a

(15:20):
document, if it needs to browse the web, if it needs to do different things, in many cases you need to do it for it, meaning you need to take the data and now do the research and bring it back. You need to take the output and create the document. You need to do all these things, and now you don't have to. It does all of that for you. It figures things out and corrects as it's doing the process, so you need to do a lot less to get significantly more.

(15:40):
The stuff that I've done with Manus and Genspark that now I'll be able to do with ChatGPT is mind-blowing compared to the amount of investment I had to put into them. Now, the impact of that on everything we're doing is profound. First and foremost, we will be moving ourselves one step or a few steps further away from the actual tasks. So if right now you have to prompt AI step by step to do things for you, which removes you from some of the

(16:02):
steps in the task, now you won't even define the tasks. You will define the goal, and the tasks themselves are going to be defined by the AI, which means you don't even know what the AI is doing. Now, yes, you can look right now at exactly what it did, and you can follow what it's doing, and you can stop it and change it and fine-tune it at any given point. But once these tools evolve and you consistently see that they're delivering the right results, you will stop doing that altogether, which means you

(16:23):
won't really know what the tools are doing. You will just give it an input, you define what the output needs to be, and you will then use the output. This will be true for university students, for high school students, for our personal lives, and definitely for the day-to-day in our businesses. Now, is that scary? Yes, probably to most of us. Is that gonna be very helpful? Well, it's gonna be very helpful if we know how to use it (a) effectively and (b) safely, because otherwise it's a

(16:46):
terrible opportunity for really bad things to happen, because we remove ourselves from the process. Now, the other thing that we need to ask ourselves is how good is this tool right now? Is it currently a cool demo tool, like the demos that OpenAI have done when they launched it, or like the thousands of demos I'm sure we're going to see online within the next few weeks? Is it good enough for basic tasks, or is it at an enterprise-

(17:06):
grade deployment level? I don't know. If I had to guess, I would say that right now it's probably somewhere between a good demo level and a good-enough-for-basic-tasks level, depending on the tasks, but the reality is it doesn't matter. It doesn't matter because of two different reasons. Reason number one: once you open this Pandora's box, there's no going back. You cannot put it back in the box. Once more and more people understand agentic capability,

(17:28):
they will want more of it, because it is really magical. The other reason that it doesn't matter whether it's there or not is that it's going to get there. In the very short term, companies and individuals are going to find workarounds for the big issues that are stopping them from using it at a wide level. And yes, it will not be able to do everything, but it will be able to do a lot if you know the limitations and you can work around them.

(17:48):
Think about our kids in high schools and universities having the opportunity to basically do the work of an entire course in a few minutes by just giving the right prompt and letting it run through the entire content of the course, summarizing all of it, and creating whatever report they're supposed to create. I don't see any student doing anything else, unless he's sitting in a classroom with a pen and paper and needs to do it by hand, which has its benefits, but it definitely does not prepare that

(18:10):
young individual for the future of doing that at the workplace or in society. So there are a lot of questions to be asked and a lot of unknowns when it comes to the future of these systems. And then the last reason is, obviously, OpenAI will now have access to huge amounts of data from actual real-life usage, which will give them more information on what is working and not working with this tool, and allow them to upgrade and update the tool in order to make it an enterprise-grade tool that can

(18:34):
be used for more or less everything in our work. Now, as I mentioned, I don't have access to it yet. I started seeing people that do have access to it. But based on my experience with Manus and Genspark, which are similar tools, I can tell it is a complete game changer, and within the next few weeks, we'll start seeing more and more examples from more and more companies and individuals sharing how they're using the tool, how it works, what are the

(18:54):
limitations, and so on.
But then there is the final question related to this, which is how good of a prompter do you need to be in order to actually enjoy these tools? So what this tool does is it opens an even bigger gap between the people who know how to use AI versus the people who do not know how to use AI. I meet these people every single week. This is what I do. I teach courses. I teach workshops to different companies, and there are more

(19:16):
people right now who do not know how to properly use the basic AI tools like ChatGPT, Claude, Gemini, et cetera, and they're using them in a very superficial way without having deep knowledge of how to do this, and this new functionality is just gonna widen the gap between the people who know what they're doing with AI and those who don't. If you know what you're doing, that puts you at a very significant advantage, both from a career perspective as well as from a company-wide perspective.

(19:38):
If you are in a leadership position and you and other people in your company know and understand how to use these tools, you can run circles around your competition, and that's gonna be even more dramatic now with access to this tool. If you are not one of those people, if you're not one of these companies, your competition might learn it first, and then they will run circles around you. Now, if you want proof, OpenAI themselves just shared that

(19:59):
they've developed OpenAI Codex, which is their cloud-based coding agent, in just seven weeks from scratch. So one of the most advanced coding tools, developed using AI by people who know how to use AI, was developed in just seven weeks. By the way, Codex itself generated 630,000 pull requests, PRs, which are a standard unit of work in code development, in just 53

(20:24):
days. That's over 10,000 PRs per day, and the numbers are just going up.
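As a quick sanity check on that rate, here is the arithmetic behind the "over 10,000 per day" figure, using only the numbers quoted in the episode.

```python
# The arithmetic behind "over 10,000 PRs per day": 630,000 pull requests in 53 days.
total_prs, days = 630_000, 53
print(f"{total_prs / days:,.0f} PRs per day")  # ~11,887
```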
This is outpacing traditional coding teams by 50%. That means the people that are using Codex are creating code 50% faster than people who are not using it. It is very similar to what we hear from other sources. If you look at the recent information from Microsoft, they

(20:45):
have generated 600,000 PRs using GitHub Copilot. So, similar numbers, a huge spike. They're reporting 30% faster code reviews and better code reviews than they did manually with people before, which means they can develop the next version faster, which means they can now deploy it, which means they can now develop it even faster, and so on. We're getting to systems that are basically accelerating their

(21:06):
own development and are becoming better and better, and the same thing will happen to companies who figure it out versus companies who do not figure it out. Now, if you need a little more proof that agents are the next big deal, or that you have to learn or you will stay behind: Butterfly Effect, the Chinese startup behind the viral agent Manus, which again I've been using for a while, started as a Chinese company.

(21:27):
They then opened another office outside of China, and now they're closing down their China-based team and moving a hundred percent of their operations outside of mainland China. They have relocated all 40 core engineers to Singapore, and they've established a headquarters in the US. All of that in order to attract US investors and US users and

(21:48):
disengage themselves from the scrutiny of being a part of the Chinese economy. Part of it is just the way they wanna be seen, and part of it is because the US government is restricting AI investment in, quote unquote, countries of concern, China being one of them. So where are we? We are at a point where, as of the next few days, 800 million weekly users will have access to very powerful agent

(22:10):
capabilities. That, by the way, did not exist at all in any tool four months ago. So the Manus moment happened in March. It's just very, very recent, and again, it probably has tens of thousands of users, maybe hundreds of thousands of users. But ChatGPT has 800 million. The problem with that is that a huge part of the people around the world, even people who somewhat use AI, are completely

(22:32):
not ready for this. They don't understand how the tool works, and they definitely don't have the skills, and the knowledge gap and skill gap are just growing. As I mentioned, I meet these people every single week, and most of them barely know how to properly prompt a basic AI tool. So what are we doing? We're basically taking somebody who's learning how to drive and giving them access to a Formula One car.

(22:53):
That is not a good idea. Now, I know what some of you are thinking. Some of you are thinking that agents, because they're so sophisticated and they know how to define their own tasks, may reduce the need to know how to prompt. And in the long run, I would probably agree with you, but in the immediate future, I think there's gonna be a huge difference between the people who know how to use these tools properly and the people who don't. And to be fair, I think the people who don't are actually

(23:14):
gonna waste more time than the time that they're going to gain. And if you want proof for that, there's a new research study by METR, which is a company we talked about before. They're doing AI research. They shared a lot of interesting stuff that we covered on this podcast. They did a very interesting study about developers using tools like Cursor Pro and Claude 3.5 and 3.7 Sonnet to improve their code writing speed and efficiency.

(23:36):
They took 16 developers, divided them into two different groups, and let some of them use AI and some of them not. The people who used AI expected to be able to complete the tasks 24% faster than the people who didn't, when in reality, it took them 19% more time to complete the tasks with AI. Now, this wasn't a two-minute kind of research. They've done the research from February to June, again with 16

(23:57):
people, in a randomized controlled trial with 246 different coding tasks, from bug fixing to features, refactoring, and so on. And these people actually saw a decrease in efficiency by using AI. Why is that? Because it was people who were not trained in how to use AI tools properly. A lot of the time that got wasted was them waiting for the

(24:17):
AI to do its thing, versus developing new processes where you can jump back and forth and change context quickly between different tasks or writing, which is something that I do all the time. I will give AI a task. I will go and do a few emails. I will wait for it to finish. I will come back when it's finished, and then I will continue from there and jump back and forth. This is not the traditional way of working, but it is what is needed in

(24:38):
order to make the most out of these tools. And so getting proper training and the right education for yourself, for your employees, and for everybody in your ecosystem, in order to really benefit from this additional incredible capability that OpenAI just gave us, has to happen. How do you train your people? Well, first, you know that we have the AI Business

(24:59):
Transformation Course. The next variation of it starts on August 11th, so if you wanna learn how to use AI effectively, don't miss this course, because the next public course will probably happen around November, and that's a whole additional quarter. We teach private courses all the time, and workshops for specific companies and organizations. And if you are interested in our course, you can use promo code LeveragingAI100 for $100 off the price of the course.

(25:20):
So take advantage of the fact that you are a listener of this podcast, enjoy this discount, and come and join us on August 11th. There's a link in the show notes, or reach out to me on LinkedIn and I will gladly help you figure out what's the best solution for you. Or you can go with somebody else, but whatever you do, find a way to train yourself and to train people in your company on how to use AI effectively, because the speed at which

(25:41):
people who are using it are pulling away is increasing all the time. And this new agentic capability just takes it to a whole new level. Now, if you wanna learn what the key things are that are important to consider when selecting a course, or what kind of training you can deliver to your company, or what you can do as a leader of a business in order to drive the most results

(26:01):
from AI, we just recorded an episode that is gonna be released this coming Tuesday that will share all of that in detail, allowing you to understand what your options are, which are probably the best ones for you, and what action you should take, either as an individual or as a leader of a company. But there is so much more that happened this week. Some of the big news comes from Meta, with a very interesting

(26:23):
interview from Zuckerberg talking about investing hundreds of billions of dollars in compute and how he sees their future in this race, why they're investing in what they're investing in, and why he thinks a lot of talent is jumping ship to them. And I will tell you that it's not just paying them hundreds of millions of dollars. That's probably a big part of it, though. There are a few more interesting model releases, including a very

(26:44):
interesting voice model from Mistral, some updates on the new Grok after its unacceptable antisemitic outbursts last week, and much other news that you can read about if you sign up for our newsletter. So I'm not going to cover them today. I will go back to spending time with my family to celebrate my dad's 80th birthday. But I really wanted to share this with you. And if you wanna know the rest of the news that happened this

(27:04):
week, again, there's gonna be a link in the show notes. You can click on that and sign up to get our newsletter, where we're going to cover all the rest of the news. While you already have your phone in your hand in order to sign up for this newsletter, click the share button on your podcast player and share this podcast with anyone you know that can benefit from it. This is your way of increasing AI literacy and AI education around the world, which right now becomes more and more

(27:26):
critical. And if you are on Spotify or Apple Podcasts, I would appreciate it if you leave us a review as well. That's it for this weekend. Keep on experimenting with AI, keep on learning, and keep sharing what you've learned with other people. And I will see you back on Tuesday. Have an awesome rest of your weekend.