Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Isar Meitis (02:06):
Hello and welcome to a weekend news edition of the Leveraging AI Podcast, the podcast that shares practical, ethical ways to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we have a packed episode like every weekend, to be fair. But there's a few really big discussions that we're gonna have in the beginning that are going to impact the future of AI
(02:29):
and the future of businesses with AI, with some very interesting inputs, such as Sam Altman's testimony in front of the Senate hearing, as well as shifts in entire sets of services and value that companies are providing, blurring the lines between traditional industries, combined with a very interesting interview done by McKinsey,
(02:50):
talking about the urgency for CEOs to act now, and followed by OpenAI's blueprint for AI in the enterprise, with the seven things every company should focus on per OpenAI, per their experience working with several large enterprises. And then we'll dive into the rapid fire, starting with OpenAI not changing to a for-profit organization, but there's a lot
(03:10):
of other stuff to talk about, specifically with OpenAI and with a lot of other companies. And there's even some interesting market-wise, global news related to Nvidia in the end. So let's get started. This week there was a meeting of the Senate Committee on Commerce, Science and Transportation, which focused on
(03:31):
AI and the impact it's going to have on the US. It actually started with Senator Ted Cruz completely trashing the policies of the previous administration. He took us back to the growth in the 2000s during the internet revolution. He claimed that the size of the US economy and Europe's economy were similar before that revolution, and then he claimed that a set of regulations in the EU slowed
(03:55):
the innovation down, while regulations led by Clinton back then in the US made it very easy for companies to innovate and grow different solutions based on this new technology. And he's claiming that's the main reason for the huge difference in the size of the economies right now, with the US economy more than 50% larger than the sum of the European economies. And he's claiming that the main driver for that was the growth
(04:19):
of the tech sector in the US, which did not grow at the same pace in Europe, per him, mostly based on differences in regulation. So that set the stage for what would be a very interesting conversation with very senior guests, people like Sam Altman, the CEO of OpenAI, Lisa Su, the chair and CEO of AMD, Michael Intrator, the co-founder and CEO of CoreWeave,
(04:39):
Brad Smith, the president of Microsoft, and a few others. So very senior people with significant impact and connections on the AI revolution were testifying in front of this committee. Now the second step of the setup was talking about the importance of US leadership in the AI race, and it was very clear that the tone would be that the leadership in this race is
(05:00):
crucial to who's going to more or less decide the future of the world. Is it going to be totalitarian countries like China, or democracies like the US? And it was stated that currently the US is in the lead, but significant steps are required in order to maintain that lead, mostly against China. Now they spoke about many different things, including potential benefits of AI as well as potential risks of AI, and it
(05:25):
was all around how the private sector and the public sector must work together in order to guarantee that the US wins the AI race against China, and working together with its allies in order to make sure they use US-based technology versus Chinese technology. A big part of the conversation went around infrastructure, defining it as a foundational requirement for AI leadership.
(05:47):
So it was very clear that a significant investment in the development of physical infrastructure, mostly for power supply, but also in data centers, is essential to supporting the growth of AI everywhere in the world, in the US as well. And this includes, as I mentioned, computing power, data centers, and most importantly, energy. It was mentioned that electricity demand for AI projects is substantial, which we all know, but in the near future
(06:10):
is going to reach 12% of the total US demand, which is a very significant amount because it's competing with everything else. The good news about it from a job generation perspective is that there is a significant need for hundreds of thousands of new electricians in different roles and skilled laborers in the electrical field to create and then maintain this
(06:33):
infrastructure. Another big topic that was discussed is permitting, mostly on the federal level, because a lot of open space, wetlands, et cetera, is governed by permits that are driven by the Army Corps of Engineers, and they are currently a significant bottleneck to big infrastructure projects. It was mentioned as an example that on the state level, major projects can be pushed through permitting in between six and nine
(06:54):
months, while a wetlands permit is taking 18 to 24 months. So it could take you up to two years just to get the permit to do a significant project if you need this kind of approval, and that approval is required in many cases for large-scale projects that go through these areas. To clarify how critical power is to the future vision of these companies and how the world would look,
(07:15):
I wanna share a quote from Sam Altman, who said: eventually the cost of intelligence, the cost of AI, will converge to the cost of energy. How much you can have, the abundance of it, will be limited by the abundance of energy. So in terms of long-term strategic investments for the US to make, I can't think of anything more important than energy. Basically what Sam is saying, and again it was very clear from
(07:36):
other people there as well, is that the race to create new energy to support new development of data centers is a critical, and maybe the most critical, component of winning the AI race. Another big factor was the availability of skilled talent. It was mentioned that the US currently has a very strong talent pool, but in order to guarantee that the US wins this
(07:56):
race, we need a lot more of it, including software developers, hardware developers, and application developers, just to name a few, and it was urged that high-skilled immigration is crucial for bringing the best talent from all around the world to work in the US and contribute to US AI innovation. Another important topic was obviously balancing policy and regulation.
(08:17):
The term that was used by several different people is a light-touch federal regulation, meaning a framework that will balance safety and innovation from the federal level and will reduce the need for a state-level patchwork. It was mentioned by Sam and a few others that working through 50 different regulations will dramatically slow down AI innovation, versus having a clear guideline from the federal level
(08:38):
might prevent the need to do that. Another part of the conversation talked about export controls and the need to balance the risks of having stuff getting to China with the need to have as many countries around the world as possible implement US-based technologies, in order to create an alliance around the world that will use US technology versus Chinese technologies for global AI domination. It was
(09:00):
very clearly stated that strong export controls will create a vacuum in many areas around the world, which will be filled by competitors of the US providing AI solutions in these countries. It is already happening in China itself, where Huawei, which is their largest AI chip manufacturer over there, is growing at an insane pace, both in terms of the amount of chips they're generating and selling as well as the capabilities that they
(09:22):
have because of US export controls. And if US chips are prevented from reaching other countries, Huawei is gonna fill up this vacuum as well. Risks were also discussed in this conversation, including protecting children from potential harms of AI, talking about learning from the mistakes of the internet and social media eras that had dramatic negative impacts on teenagers and younger kids as well.
(09:43):
There was a conversation about the problem with deepfakes and protecting individuals' likeness from unauthorized replication, and the push for industry to find ways to clearly identify what is AI generated versus what is not. The problem of these AI companies disrupting the concept of intellectual property was also a topic that was discussed, and the potential for AI to be used for both offensive and defensive
(10:05):
cybersecurity was also raised as a concern. The bottom line was very clear. The leading companies and this current administration are calling for a partnership that will have relatively little limitation on these companies to develop the most advanced AI capabilities to keep the US ahead, despite the risks, and it called for partnership between government and these companies
(10:28):
in order to enable the large-scale infrastructure projects that will enable that, in combination with partnerships with allied countries to deploy these solutions for them as well. We discussed shortly after this new administration was elected that this is probably what is expected, when the newly appointed AI czar was announced, and he's a venture capitalist, one of the PayPal Mafia and a big name in Silicon Valley.
(10:50):
It was very, very clear that this is the direction: less regulation, more investment, run fast, and we'll figure out the risks later. And while this particular conversation was more balanced, including discussing a lot of the risks, I think the direction is very clear. Another topic that I want to cover is that OpenAI released what they call the blueprint for AI in the enterprise. It is based on their experience of large enterprises using
(11:12):
OpenAI's technology and what they believe drives it to be successful versus not successful. Their report reveals that 92% of Fortune 500 companies are actively using AI. It didn't exactly say what that means and what ROI these companies are seeing, but it's a much larger number than I expected. But again, this could mean that one person in the company is using ChatGPT to write emails. That would still mean that somebody in the company is using
(11:34):
AI. I don't think, by the way, that's the case, but I think it's too vague right now to understand exactly how they're using AI. They also claim that companies that use ChatGPT are reporting a 40% reduction in task completion time for tasks like coding, writing, and data analysis, which means it's enabling significantly more throughput and production by these companies. So the seven steps that OpenAI mentioned are: start with evals,
(11:56):
meaning define systematic, scientific ways to measure the performance of AI against specific benchmarks, versus deploying the AI and then trying to figure out what it's actually doing. So define the exact things you want AI to do, define how you measure that, and then actually measure it and correct accordingly, or you may end up with the wrong output.
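To make that first step a bit more concrete, here is a minimal sketch of what an eval harness can look like in Python, assuming you use the OpenAI SDK. The tiny benchmark, the model name, and the exact-match scoring are illustrative placeholders I chose, not anything OpenAI prescribes, so treat this as a starting point rather than the blueprint's own implementation.

# Minimal eval harness sketch: score a model against a fixed benchmark
# before (and after) every prompt or model change. The dataset, model name,
# and exact-match scoring below are illustrative assumptions, not a standard.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A tiny hand-written benchmark: each case has an input and the answer we expect.
EVAL_CASES = [
    {"prompt": "Classify the sentiment: 'The delivery was two weeks late.'", "expected": "negative"},
    {"prompt": "Classify the sentiment: 'Support resolved my issue in minutes.'", "expected": "positive"},
]

def ask_model(prompt: str) -> str:
    """Send one prompt to the model and return its raw text answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; use whatever you actually deploy
        messages=[
            {"role": "system", "content": "Answer with a single word: positive or negative."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content.strip().lower()

def run_eval() -> float:
    """Run every case, compare to the expected label, and return accuracy."""
    correct = 0
    for case in EVAL_CASES:
        answer = ask_model(case["prompt"])
        if case["expected"] in answer:
            correct += 1
    return correct / len(EVAL_CASES)

if __name__ == "__main__":
    print(f"Benchmark accuracy: {run_eval():.0%}")

The idea is simply that the same fixed set of cases gets re-scored after every prompt, model, or workflow change, so you can see whether a change actually helped instead of guessing.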
The second one was embed AI in your products,
(12:16):
meaning don't just use it internally, but actually use AI to enhance the value of the deliverables of your company, whether it's products or services. The next one is start now and invest early. That's pretty self-explanatory, but they're basically saying the AI era is here. It's not the future. If you're not investing in it right now, you might miss the train or suffer the consequences. The example they gave is that Klarna's early investment and
(12:39):
broad adoption of AI led to them having a customer service assistant that handles two thirds of all service chats, cutting the resolution time from 11 minutes to two minutes. This by itself is projected to drive $40 million of profit to the company. The next topic is customize and fine-tune your models, where the benefits are clear, right? It allows you to improve accuracy and domain expertise,
(13:00):
consistent tone and style, faster outcomes, and so on by using customized models. More about that from OpenAI in a few minutes. The next topic was specifically about code generation, but most large enterprises also write code as part of what they're doing, even if it's just for internal purposes. So the next one was unblock your developers. Automating the software development lifecycle can multiply the dividends of what the company is doing.
(13:22):
The example they gave there is from MercadoLibre, which is a huge retailer in South America, which developed a platform layer called Verde that helps their 17,000 developers unify and accelerate AI application builds. So they're now creating applications for internal usage significantly faster. And then the last point that they said is set bold automation goals. I think this is a very important point, because what happens, and
(13:43):
I talk about this a lot in the course that I'm teaching, is that as business people, we're used to looking at problems. We're used to looking at business processes as processes, multiple small steps that you have to go through. And in many cases the mistake that companies make is they try to solve each and every one of the steps with AI, versus looking at the bigger problem, versus looking at the goal. The Klarna example, and also, in this particular paper, OpenAI
(14:05):
talking about how they themselves are using AI to handle hundreds of thousands of tasks every single month, are showing you what you can do. If you look at customer service as an example, AI can do customer service, instead of solving small components of customer service, like a better IVR on the phone or better distribution of the tasks to the relevant people. It can actually do most of the steps of the work, circumventing
(14:28):
the small steps. So setting bold goals for AI can drive much better results, assuming you have the resources and the knowledge on how to do that. Now, speaking about AI and how it is impacting different industries, there was a very interesting article on The Information in the last few days talking about how AI is allowing companies to go beyond their niche or their industry and grow
(14:51):
into areas where they couldn't profitably grow before. So the examples they gave are companies like Salesforce, which is traditionally just a CRM developer, now developing AI agent platforms and also providing customer service solutions, which directly competes with ServiceNow, as an example. ServiceNow, on the other hand, known for IT service management, is pushing into HR and customer service with AI-driven products,
(15:13):
and so on and so forth. Companies like Glean and Notion are disrupting legacy companies with AI-driven search and productivity tools. And Canva now has a spreadsheet. So you see where this is going: because companies can now develop solutions faster, because AI enables that and there's AI embedded into it, they can provide a lot more value in a lot more fields, a lot faster, allowing them to do it potentially in a way that is
(15:36):
profitable. And hence they're gonna go into these fields. This is increasing the competition across the board between companies, because they can now shift the focus of what they're doing, or at least go into additional markets that they couldn't address before. I talk about this a lot in the last chapter of my course, where we discuss what questions business leaders need to ask themselves in order to grow the business, versus potentially be
(15:56):
eliminated by the fact that their customers, their ecosystem, their competitors, and they themselves have access to AI, and how to navigate that starting now in order to increase the chances of a positive outcome in the future. Staying on the same topic of what CEOs need to do, there was a very interesting interview this week on a podcast by McKinsey.
(16:18):
They interviewed John Chambers, the legendary CEO of Cisco who led it to its greatest success, and a lot of the conversation focused on AI. And I'll start with a quote. John Chambers said: in many ways the implementation of AI is like that of the internet, but it is going to move at five times the speed with three times the outcome. Basically, he's saying that these two revolutions are
(16:39):
similar, but you gotta be able to move significantly faster, and the benefits will be three times bigger if you actually do that. And he mentioned that companies need to be able to go from zero to a hundred in no time and basically cross the chasm of AI in one year, or risk being replaced by other companies that will do the same. He specifically spoke about the same kind of topic, that
(17:01):
companies and organizations need to change, potentially, the overall business that they're in, in order to guarantee that the company will survive. He gave an example of the French postal service, which, again, delivers mail, using AI to shift from declining letter delivery to parcel services and new offerings, including elder care, showcasing how traditional businesses can
(17:22):
completely reinvent themselves with AI. Or the other way around, right? If they don't do that, they may go extinct, because AI may replace the thing that they're doing right now. It was very, very clear that bridging the skill gap and knowledge gap of employees and of the C-suite is a critical component and a barrier to AI adoption. And I agree with that a hundred percent. As somebody who has been focused on AI education and literacy in
(17:45):
companies large and small, I can tell you with confidence that C-suite and leadership buy-in, in combination with leadership and employee training, hold the keys to incredible benefits from AI. And if you don't do both components, C-suite buy-in as well as education and training for leadership and for employees, your chances of a successful AI implementation are very, very low.
(18:07):
Now on to the impact on the markets, which we discussed in several different previous episodes: the tech industry has shed 214,000 jobs in April of 2025, driving the sector's unemployment rate to 3.5% from 3.1% just in March. So that's one month. Now, obviously AI is not the only driver for this. There are economic concerns of potential recession and tariff
(18:30):
issues and things like that, but it is very, very clear that these companies are pushing very hard on automating and creating code with AI, which means you can generate the same outputs, and in many cases bigger and in some cases more outputs, with significantly fewer people. Microsoft CEO Satya Nadella shared last week that 30% of the code at Microsoft is generated by AI right now.
(18:51):
That means that if you need to generate the same amount of code, you need fewer people. If you want to generate more code, then you can maintain the same amount of people, or not, and I think the direction is very, very clear. Both for companies and for individuals, the wait-and-watch-what-happens approach is not going to work. You have to be proactive and you have to take action, and you have to know what you're doing. If you are an individual, you must have significant AI skills
(19:14):
or you will lose your job. It's just a matter of time. And for companies, if you do not train your people properly and give them the knowledge on how to implement AI for a large variety of tasks, as well as consider the strategic side of what the future of your company is in the AI era, your company will suffer significant market share loss and might be eliminated altogether because of the changes that AI will drive
(19:37):
in the economy, in businesses, in your niche, in your industry, and so on. And the best way to do that is to start with training and education of leadership and then the rest of the employees, which is a good point to remind people that the AI Business Transformation course that I've been teaching for over two years to hundreds and maybe thousands of business people has its spring cohort starting this Monday, May 12th. So if you're listening to this episode on the date it comes out, on
(19:59):
Saturday, May 10th, or on Sunday, May 11th, or even on Monday morning if you're really a procrastinator, you can still join the course and get incredible value in learning AI and how to implement it effectively from a business perspective. Three very practical sessions, ending with a strategic session at session four. And so if you haven't done this yet, this
(20:20):
is an incredible time to act, because the urgency is very, very clear from every angle, and you hear that from government, from top leadership in the AI universe, as well as from industry. The AI change is here, and if you won't adapt, you will suffer the consequences. Now, we teach this course all the time. I'm teaching two courses in April. I'm just finishing a course that started in April.
(20:40):
There are two courses in May, and so on and so forth, but most of these courses are private. You cannot sign up for them. We open public courses only once a quarter, so the next course most likely will be in August, meaning if you don't sign up before Monday, the next time you can join our cohort and learn the skills, the knowledge, and the strategy that you can use to drive dramatic changes in your company and in your personal
(21:01):
career will be delayed by an entire quarter, which is not a good decision, I think. So if you can, come join us on Monday. The course is four weeks, every Monday at noon Eastern time, and we have people from all over the world. So even if the time zone is not perfect for you, we have people from China, we have people from New Zealand, we have people from Australia, India, the Middle East, and so on.
(21:21):
So wherever you are in the world, this course can dramatically impact your future success. So come and join us. By the way, if you are listening to this after May 12th, we will replace the sign-up with the wait list for the next course, which will probably happen in August. So don't be totally discouraged. You can still come and sign up for the August course and join us later this summer. So that's it on the impact and the big topics.
(21:44):
Now let's dive into the rapid fire items. And as I mentioned, there's a lot to talk about. Even just from OpenAI, this could have been an episode by itself. The biggest news from OpenAI this week is that they abandoned the plan to transition into a for-profit entity, meaning the nonprofit board will stay in charge of the company. Now, we've covered this topic in multiple episodes, but a quick recap for those of you who just started listening with this episode:
(22:06):
OpenAI started as a nonprofit company with the goal of developing AI that will benefit all humanity in a safe way. One of the first investors in that company was Elon Musk, who was one of the co-founders, and later on, when they understood they need a lot more money, Elon wanted to roll OpenAI into Tesla in order to be able to finance it. He wanted to become CEO of this company as well.
(22:27):
That wasn't the path that OpenAI took. Elon Musk left, starting a serious beef with Sam Altman and OpenAI as well. OpenAI then went to Microsoft and got a crazy investment that up to this point is over $13 billion in both cash and access to compute. It was very clear that to keep on raising the amounts of funds that they're raising, including a $40
(22:47):
billion funding round, the largest of any private company in history, that they just raised from SoftBank and other companies, valuing the company at $300 billion, cannot happen with a nonprofit entity. So they were in the process, in the past year or so, of transitioning from a nonprofit entity to a for-profit entity so they can provide the relevant returns to their investors.
(23:08):
But that faced a very serious pushback from multiple people, the first one being Elon Musk, who sued them for that particular purpose. More about that in a minute. As well as previous employees and other bodies who basically said a nonprofit company cannot just decide to transition to a for-profit company, because it betrays all the money that was put into it before for the
(23:28):
benefit of the public. So after fighting that battle, including many meetings with the attorneys general of both California and Delaware, OpenAI, I guess, decided that they're not going to win this war, even if they might win one or two of the battles. And they just announced that they're abandoning their attempt to become a for-profit company, but they also announced some significant changes at OpenAI, both from a leadership
(23:49):
perspective as well as in how the company will be structured. So OpenAI will convert its for-profit arm, which existed before and was controlled by the nonprofit board. They're gonna convert that from an LLC to a public benefit corporation, allowing for equity to employees and investors, while the nonprofit will retain majority control. This basically means that the for-profit component can run as an entity, still allowing profits to be shared with
(24:12):
investors, while at the same time hopefully having the nonprofit board govern the future decisions and the strategy of the company moving forward. That still stays questionable, because Sam Altman, if you remember, was fired and then brought back, and then needed a whole change in the board structure and the people on the board to have more control over the board. So how nonprofit-focused the board really is, is not a
(24:34):
hundred percent clear. And that's one of the reasons that Elon Musk just announced that he's moving forward with his lawsuit against OpenAI, and now I'm quoting the statement from his lawyer: nothing in today's announcement changes the fact that OpenAI will still be developing closed source AI for the benefit of Altman, his investors, and Microsoft, end quote. Which basically means that the nonprofit structure is just
(24:56):
obscuring the actual asset transfer from the open source world and the nonprofit and the benefit of humanity to private gains, which is what he's trying to fight. There's obviously a lot more going on. There's a lot of personal beef, and there is xAI, which is a competitor of OpenAI, and being able to slow OpenAI down will help xAI grow faster.
(25:17):
There's also a counter-lawsuit from OpenAI against Elon for going after them for no good reason, claiming that it actually serves his business agenda and not really the things that he's suing them for. One of the biggest implications of not being able to transition from a nonprofit to a for-profit is that part of the money that they raised in the last two rounds, including the $40 billion in the recent round, will not come to them if they do
(25:37):
not convert by the end of this year, which they're not going to. So as an example, the $30 billion that SoftBank committed is gonna be cut to $20 billion. That is $10 billion, with a B, that they won't have available to use. And there's a similar agreement with the previous round that they raised in late 2024. How will that change things? How will that impact them? Will SoftBank really pull those
(25:58):
funds? It's unclear, but there are also very significant financial implications to the fact that they can't make that conversion. Maybe the fact that they're changing the structure of the company from an LLC will help them maintain at least some of that amount. But I will let you know once we learn exactly how that evolves.
(26:20):
As part of these major changes, they also made a huge announcement from a leadership perspective, where OpenAI hired Instacart CEO Fidji Simo as the CEO of OpenAI Applications, to oversee product, business, and operational teams. The current overall CEO of OpenAI, Sam Altman, will only focus on research, infrastructure, safety systems, and board collaboration.
(26:41):
So basically, OpenAI is going to have two CEOs, Sam focusing more on the strategy and Fidji Simo focusing more on applications and the delivery of products. This is obviously a tectonic shift from Sam Altman being the strongest person maybe in the AI world right now, giving up half the kingdom to Simo.
(27:01):
Simo is only 39 years old. She led Instacart into profitability since 2021, took it public in 2023, and before that headed Facebook at Meta for a decade. So she's bringing a significant, proven tech leadership track record that hopefully will bring the same kind of results to OpenAI as well. Now, that is not happening immediately.
(27:22):
Simo will join OpenAI later in 2025, as she needs to transition out of her position at Instacart. This is very significant. Obviously the fastest growing, most known company in the AI world, the one that started this crazy era, is going to have two CEOs as of later this year, focusing on two different things.
(27:43):
I'm sure Sam was stretched very,very thin, and I think his
genius will better serve thethings he's going to focus on
versus product development andgrowing the business side of
things.
And it's very clear that si Ohas a very successful track
record in doing that.
And it will be interesting tosee how much fuel it adds to the
OpenAI position in the overallAI race.
Now another thing that theyannounced as part of this
(28:04):
restructuring process is they'replanning to cut the revenue
share of existing investors,including Microsoft from 20% to
10% by 2030.
So within five years basicallycut their revenue sharing in
half.
As a reminder, Microsoft as ofnow invested$13.75 billion in
OpenAI, and it has not approveda lot of these restructuring
(28:25):
yet, and it'll be very interesting to see how that whole thing evolves. That being said, the formal statement said: we continue to work closely with Microsoft and look forward to finalizing the details of the recapitalization in the near future. I obviously don't know what's happening behind the scenes. I would love to be a fly on the wall in those conversations to see how the big boys really do it. But I think both sides need to maintain this frenemies
(28:47):
relationship, at least in the next few years. So I assume they will figure it out. OpenAI also made a big announcement on the technical side, where on May 8th they announced that developers can now use reinforcement fine-tuning, also known as RFT, to customize the o4-mini reasoning model, tailoring it for specific company and enterprise needs. Now, it was possible to fine-tune OpenAI models before, but
(29:09):
the only option was supervised fine-tuning, versus reinforcement fine-tuning. Reinforcement fine-tuning employs a grader model, basically grading and scoring multiple responses and adjusting the model weights to align with the nuanced needs of enterprises and their communication styles, meaning it's a more advanced way to fine-tune the model that was not available before and is available right now.
(29:30):
They gave an example that Accordance AI used this new fine-tuning of o4-mini for tax analysis, achieving a 39% accuracy improvement, outperforming leading models on tax reasoning benchmarks. Basically meaning that if you know how to fine-tune a model based on your data and based on your needs, you can achieve significantly better results than just using the top models,
(29:51):
but the top models that are not fine-tuned. So if your company needs something like this and you have the right talent in the room, you can now configure RFT via the OpenAI fine-tuning dashboard and/or API, uploading datasets and validation splits, with detailed documentation available to show you exactly how to do that.
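For those who want to see roughly what that looks like in practice, here is a hedged sketch in Python. The file upload and fine-tuning job calls are real OpenAI SDK methods, but the exact shape of the reinforcement method payload and the grader configuration is an assumption on my part, and the file names and model string are placeholders, so check OpenAI's RFT documentation for the current schema before relying on it.

# Hypothetical sketch of kicking off a reinforcement fine-tuning (RFT) job via
# the OpenAI Python SDK. client.files.create and client.fine_tuning.jobs.create
# are real SDK calls; the "method"/grader payload shape and the model name are
# assumptions here, so verify them against OpenAI's RFT docs.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Upload the training and validation splits (JSONL files you prepared).
train_file = client.files.create(file=open("tax_train.jsonl", "rb"), purpose="fine-tune")
valid_file = client.files.create(file=open("tax_valid.jsonl", "rb"), purpose="fine-tune")

# Create the fine-tuning job. The grader below is a placeholder: in RFT a grader
# scores each sampled answer (for example, checking it against a reference),
# and those scores drive the weight updates.
job = client.fine_tuning.jobs.create(
    model="o4-mini",                      # the reasoning model mentioned in the episode
    training_file=train_file.id,
    validation_file=valid_file.id,
    method={
        "type": "reinforcement",          # assumed method name; see the docs
        "reinforcement": {
            "grader": {                   # illustrative grader config only
                "type": "string_check",
                "name": "exact_answer",
                "input": "{{sample.output_text}}",
                "reference": "{{item.correct_answer}}",
                "operation": "eq",
            },
        },
    },
)
print("Started RFT job:", job.id)

Once a job like this finishes, the resulting fine-tuned model ID can be used in regular API calls like any other model, which is how the domain-specific gains show up in production.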
Staying on OpenAI models, GPT-4.1 is now the default model for GitHub
(30:14):
Copilot. So previously it was GPT-4o. Now 4.1, which outperforms 4o in coding, has replaced it by default. If you are using GitHub Copilot, you can still switch back to 4o by your own selection, but that is going to go away in 90 days. So 4o will be completely removed from the model picker 90 days from now, and 4.1, which performs better more or less across the board, is
(30:36):
now the default model. OpenAI also announced that they're expanding their data residency in Asia. That's actually a very interesting move. So there are regulations in multiple countries in Asia, such as Japan, India, Singapore, and South Korea, to store data locally, especially in more regulated industries, and OpenAI is now making a move to guarantee that the data stays in
(30:56):
data centers in those countries for companies from those countries. They also stated, and I'm quoting: for the API platform and the ChatGPT business products, data remains confidential, secure, and entirely owned by you. It is very clear that the biggest race right now happens on the enterprise level and not on the individual level. Despite OpenAI having 500 to 700 million weekly users on their
(31:18):
platform, the biggest revenues come from enterprises and their adoption. And that's another step in that direction in the race for world domination in AI. And the last piece of news from OpenAI has to do with their potential acquisition of Windsurf. We talked about this in the past couple of weeks: it was rumored that OpenAI is going to buy Windsurf, which is an AI-assisted coding platform that competes with other tools such
(31:41):
as GitHub Copilot, which we just mentioned, and obviously the biggest one being Cursor. So based on Bloomberg, that deal is already done. It's not approved yet, and hence they haven't announced it, but apparently they agreed on all the terms. So OpenAI is most likely going to acquire Windsurf for $3 billion.
(32:01):
other aspects of ai, but maybethe strongest competition is in
this particular field.
The concepts of vibe coding aswell as just accelerating
existing code generation hasbeen in the frontier of the AI
race in the past six or ninemonths with companies like
Cursor growing like crazy, andimpacting obviously the
underlying models as well.
With Claude 3.7 sonnetbenefiting a huge growth just
(32:22):
from many coders using theirplatform under the hood.
Now, how will OpenAI useWindsurf?
It's not a hundred percentclear.
They were conversationspreviously of OpenAI potentially
developing their own codegeneration tool.
They even released a fewcomponents of that in the past
few months, but I think that'sgonna be very interesting to see
how they're gonna use it.
I don't think they're gonnaforce people to use OpenAI tools
under the hood in Windsurf,because I think that will push
(32:45):
away a lot of the 600,000 people that are currently using it. So now that we spoke about OpenAI and their ambitions in code writing with AI, Apple made a very interesting announcement stating that they're going to collaborate with Anthropic to create a quote-unquote vibe coding platform internally, connected to the Xcode platform, which is used internally by Apple to
(33:05):
develop their own code. So Apple has been showing up with failure after failure when it comes to AI implementation in the past few years. It's a real embarrassment to them and a real embarrassment to the Apple brand in general. We discussed in previous episodes the reshuffling in leadership over there and how they're trying to fix that, but those are still very long timelines, talking about potentially two to three years out to deliver things that they
(33:25):
said that they're gonna deliver last year or this year. So they're now at the point where, instead of building it themselves, they're going to collaborate, at least for a while, and their goal is to enhance Apple's flagship programming platform with Claude 3.7 Sonnet, which is considered by many people the leading coding model in the world, the most capable coding large language model right now.
(33:46):
So the platform will now feature a chat interface where developers can request code modifications, UI testing, and other things that you do during software development, all powered by Claude Sonnet, automating both development and debugging tasks. As I mentioned, they're going to deploy this tool internally initially, with no clear decision whether they wanna make it public in the future or not. I think that finally makes sense for Apple.
(34:08):
It will allow them to start doing stuff that other companies have been doing for a while. They will be able to start doing it right now. They're gonna test it internally first, without taking the risk of another failed deployment. So all of this makes perfect sense to me. Now, two interesting pieces of government-related news. One is that the FDA has held multiple meetings with OpenAI to discuss potentially using AI for drug evaluations.
(34:30):
The FDA Commissioner, Marty Makary, revealed that the agency completed its first AI-assisted scientific review for a product, which is a step toward modernizing drug approval. As you know, drug approval currently takes about 10 years for a single drug, and if that can be cut by whatever amount, it will benefit humanity by being able to deliver drugs faster. The FDA has also recently established a new position,
(34:53):
nominating Jeremy Walsh to be the FDA's first-ever AI officer, so they're very serious about implementing AI as part of the FDA. Interesting participants in those conversations were two associates from the Department of Government Efficiency, also known as DOGE, led by Elon Musk, who have joined these discussions as part of an AI-driven reform of the government.
(35:14):
Staying on that topic of DOGE and AI and government jobs, Anthony Jancso, who's the co-founder of AccelerateX, revealed plans to deploy AI agents across federal agencies, aiming to automate tasks equivalent to 70,000 full-time jobs within one year. Now, according to Inc., Jancso shared with 2,000 Palantir alumni that DOGE
(35:35):
established a project to standardize 300-plus federal roles, so specific roles, based on a very long list of tasks that they identified that AI can automate. And the goal is to free up at least 70,000 full-time employees for higher impact work over the next year. What does that mean, higher impact work? How many of them are actually gonna be let go versus do higher impact work?
(35:56):
I don't know. We talked about this many times in the past. I think you don't have enough higher impact jobs in any organization, including the government. And so a lot of these people are going to lose their jobs and not just be re-skilled to do more impactful work. I am sure there's gonna be a backlash from that, but I think it is inevitable in the government, as it is inevitable in any other organization.
(36:17):
AI will automate more and more tasks, allowing companies to grow faster, but also allowing them to do things a lot more efficiently, which will require fewer people to do the existing work. And now to a company I don't think we've ever mentioned on this podcast, which is Visa. So Visa just launched Visa Intelligent Commerce, enabling AI agents to autonomously browse, select, and pay for products. Basically, a global e-commerce shopping platform driven by AI.
(36:41):
The goal from Visa is actually brilliant. People will use the platform to shop for anything while using Visa for the checkout part of this. So the platform is a mix of AI agents that know how to help you select products from multiple sources across the web, while integrating it with Visa's checkout and safety capabilities. So all your data remains private and your payment information
(37:03):
stays secure, because that's what they know how to do. I think that's a very smart and interesting move by Visa. I think it will allow them to potentially capture a share of a new, growing market. The concept of e-commerce as we know it today is gonna change dramatically, because in the future, and that future may come in a year, two years, five years, but somewhere within that
(37:24):
timeframe, fewer and fewer people will visit e-commerce websites and more and more agents will do the shopping online for us, which means companies will need to go through dramatic changes in order to adapt and stay relevant in this new future, from the way they present their data, instead of to people, to agents, in order to be found and sell their goods and services.
(37:46):
infrastructure that is combinedinto this new environment and
they're making it availablethrough API right now.
So the goal is obviously forother companies to develop on
top of their infrastructure inorder to build more secure,
while agent driven e-commercesolutions.
Another company that shared abig win when it comes to AI
deployment is UnitedHealthGroup.
They have stated that they havedeployed 1000 AI applications
(38:09):
across its insurance, healthdelivery, and pharmacy divisions
doubling from 500 use cases inMay, 2024.
So in one year, they've doubledthe amount of use cases that
they're using AI for.
And according to the report inthe Wall Street Journal, these
applications streamlined aspectsof the business, such as claim
processing, transcribed clinicalvisits, summarize data, power
chat bots.
And assist 20,000 and assisttheir 20,000 engineers in
(38:32):
writing software. So what you can see time and time again is that companies that figure it out, and figure it out quickly, are starting to deploy AI solutions across the board, in every aspect of the business, and not just in one initial component. This is a very big change from what we've seen in 2024, where companies were just doing tests and evaluations of potentially
(38:53):
using AI in very specific niches or specific use cases. This is company-wide AI deployment that drives innovation and efficiency across the board, which now becomes almost a necessity in order to stay competitive. One of the data points that they provided is that AI agents handled 26 million consumer calls in 2024, and they're planning that it's gonna take more than half of all calls in 2025.
(39:13):
I don't know what the total number of calls is, but it's higher than 26 million, which is already very impressive. That being said, there's a lawsuit from 2023 that alleges a UnitedHealth AI tool that was used to evaluate claims rejected many claims, 90% of them in error, wrongfully denying Medicare claims. So while AI is becoming more and more available and is being used in
(39:36):
more and more places, it is not always accurate. And there are cases like this one where not being accurate is just not acceptable. So you need to be aware of these risks when you start developing and deploying AI solutions for your business. And now to a whole segment about new startups or existing startups with fundraising or interesting achievements. Startup Decagon is in talks to raise a hundred million dollars at a $1.5 billion valuation, led by some of the biggest names in
(40:00):
the VC world, like Andreessen Horowitz. Decagon develops a customer service AI agent solution used by companies like Notion and Duolingo, and has secured over $10 million in signed contracts, driving significant success for the company and driving significant labor cost savings for the people who use the platform. As an example, fitness giant ClassPass is using Decagon's AI to
(40:21):
reduce their cost per reservation by 95% across 2.5 million customer conversations. What this shows you is that the need for agents across the board is very strong. Customer service and customer engagement in general are some of the top use cases, and the leading companies who are developing and delivering these solutions are growing very fast, driving significant savings from these components in their
(40:43):
businesses. Staying on the topic of AI agents and their crazy growth in 2025, Relevance AI, which is one of the platforms that allows you to develop AI agents without writing any code, just raised a $24 million Series B. Relevance is one of those companies that is seeing explosive growth, with 40,000 AI agents registered on their platform just in January of 2025.
(41:05):
So many, many people and companies are looking for ways to develop AI agents quickly and effectively without writing code, and Relevance is one of the leading platforms to do that. They just introduced Workforce, which is a no-code multi-agent system for non-technical users to build and collaborate with AI teams, and they also just announced Invent, which is text-based agent creation where you can just describe in English what
(41:27):
you want and it will spin up the agent for you. They are facing fierce competition from many other platforms that allow you to do similar things, which is develop AI agents without writing code. And AI agents in general, whether with or without code, are predicted to grow dramatically. Boston Consulting Group, BCG, is predicting a 45% compound annual growth rate for AI agents over the next five years.
(41:49):
So whatever we're seeing right now is just the tip of the iceberg of what's coming when it comes to AI agent development and deployment. This coming Tuesday, the episode that we're going to release will show you how to build AI agents and connect them to company tools using Relevance AI, the company that just announced this very big round. So if you wanna learn how to develop your own agents without writing any code, across any aspect of the business, while
(42:09):
connecting it to tools that you're currently using, don't miss this episode on Tuesday. Staying on the topic of agents, but on a completely different aspect, in this case for research and open source, Future House, which is a nonprofit company backed by Eric Schmidt, released what they call the Future House Platform. The platform is featuring different AI agent tools, including Crow, Falcon, Owl, and Phoenix, designed to accelerate
(42:31):
scientific research. Each and every one of them has a different task, where Crow answers literature queries, Falcon conducts deep database searches, Owl identifies research gaps, and Phoenix plans chemistry experiments, all leveraging a corpus of open-access papers. What is this showing you? It's showing you that AI agents that can work collaboratively as a team can be used for many other tasks beyond
(42:54):
just the obvious things in business. This is a great example of how AI can accelerate scientific research even if it's not making scientific discoveries on its own. It can allow humans to do this much more efficiently than we could have done so far, which by itself will accelerate the research and discovery process. Now to a bunch of news about image and video generation. Recraft, which is a San Francisco based startup, just raised $30 million in a Series B.
(43:15):
Their model, called Recraft version three, code-named Red Panda, is turning out to be a very powerful AI image generator, but they have a unique approach to this. Unlike their competitors, they prioritize brand consistency. So the goal is to allow marketers in specific companies, in specific teams, to create visuals that perfectly align with company brand guidelines, as well as specific
(43:38):
logos and color schemes. I must admit I'm able to achieve that to an extent with some of the other tools, especially with some of the new functionality coming in from Midjourney, as well as in OpenAI's new image generator. I haven't tested the Recraft platform yet to tell you what the differences are, but there's definitely a need for a tool that will allow you to stay exactly on brand, and not kind of on brand, as well as being able to place your
(43:59):
logo exactly where you wanna place it in an accurate way. So it would be interesting to see how this thing evolves. I do think that the other companies, such as Midjourney, such as Flux, and such as Google with Gemini and ChatGPT with OpenAI, will have these capabilities, which makes the concept of a new company that does only that very questionable. But right now I think they have an interesting solution.
(44:21):
Staying on the topic of image generation, Freepik, which is an aggregator company for multiple models and image and video generation tools, unveiled F Lite, which is a 10-billion-parameter text-to-image AI model. The unique thing about it is that it was trained on 80 million copyright-safe, commercially licensed images. So the idea here is obviously that you can generate images that will not be questioned for their IP, and it's available
(44:43):
right now on both GitHub and Hugging Face under CreativeML Open RAIL-M licenses. And a very interesting announcement came on May 6th from a company called Lightricks from Israel, which unveiled LTX Video 13B, a 13-billion-parameter AI video generator model that is claiming to be more advanced than anything else
(45:03):
out there, including Sora and Veo 2. One thing that they have that is very interesting is they're using a completely new architecture and concept that they call multiscale rendering, which basically means that they are starting with the big picture of what you're trying to do, rendering that in very low resolution first, very quickly, and then adding more and more details to the scene, including lighting and specific image details, as the production evolves. And
(45:27):
they're claiming, and actually showing, a 30 times faster rendering than the existing models, including the capability to run it on local consumer hardware like an Nvidia RTX 4090 GPU. Now, it's also a fully open source model that you can get access to on GitHub and Hugging Face, and it's free to use by enterprises under $10 million in annual revenue.
(45:49):
I love it when I see these kinds of pieces of news, where a company takes a completely different approach than everybody else to get to a similar outcome faster, better, and cheaper. And especially when it requires significantly less compute, because I think we're gonna have a very serious issue with the resources that all of this compute requires from our planet. And so being able to generate a high resolution, 30 frames per
(46:09):
second video, 30 times faster, with much less compute demand, is really appealing to me, and I'm sure it will be really appealing to anybody who creates video. And hence, I hope to see this platform grow very fast. In addition to being fast, it's also, for all the obvious reasons, much cheaper. And their goal is to be able to lower rendering costs to cents per clip. Another interesting announcement that was made this week is that
(46:30):
Perplexity is about to release Comet, which is an AI-powered web browser. They're building it on top of Chromium, that is, the open source version of the Chrome browser, which means that existing Chrome extensions can run on top of it, which to me is very attractive. And Comet is described as, and now I'm quoting, a browser for agentic search. Comet uses AI agents to automate tasks like retrieving past
(46:54):
articles, such as find that article from last Tuesday, and it integrates with Google services, browsing history, and contextual data. I find this very appealing. I am waiting for the day that there is going to be a real AI-driven browser. I think it will completely change the way we browse the web. I think it's inevitable that everything will turn into that.
(47:15):
It'll be interesting how that's gonna impact Chrome, because Chrome will have to change as well, and that may completely change the way Google monetizes it. Now, that being said, Google may be forced to sell Chrome, so that's a whole different story. But I definitely see more and more of these attempts to create AI-focused browsers and change the way we engage with the web right now. And now to two interesting pieces of news from
(47:36):
Nvidia, one of them actually from Jensen Huang and then one from Nvidia itself. So during the Hill and Valley Forum, which is a gathering for the elites and policymakers of Silicon Valley, Jensen Huang, the CEO of Nvidia, declared that all American companies will have to also become AI factories in order to be able to compete in the future.
(47:58):
What he means by AI factories is integrated hubs of chips, software, and infrastructure to produce AI models and use them, just like you're building everything else in your business. So every business, per Huang, will have an AI business running in parallel to it. And the reason he calls them factories is he's claiming that basically electricity goes into the factories and tokens come out on the other side.
(48:18):
Meaning in his eyes and his future, which obviously serves his company very well, every single company will need its own AI capabilities in order to stay competitive in whatever market they want to compete in in the future. Nvidia has been pushing this concept for a while, across multiple aspects. They've been developing software to help companies do these kinds of things, including creating digital twins of their operations in order to optimize these processes significantly
(48:40):
faster and in a safe way, and other solutions. I don't know if that's gonna really be the case. It's very obvious why Nvidia wants to paint the future this way, but aspects of this are definitely true already. If you have your own AI capabilities in house, you'll be able to do things significantly better, faster, and cheaper than your competition, meaning at least for some aspects of the company, this is 100% true, and the companies that will be able to figure that out
(49:03):
faster will be able to gain significant market share over those who don't. Another interesting piece of news from Nvidia is they announced they're redesigning their AI chips to comply with the US export controls. So we shared with you that the recent broader government ban on selling Nvidia chips to China, including the more basic H20 chips, is going to cause Nvidia to lose $5.5 billion in unsold
(49:25):
inventory and lost sales. So what Nvidia is doing right now is they're actually building a new set of chips that will be below the government threshold, so they can get at least some of that market back in China. They're obviously clearly stating that by restricting the H20 systems, the US regulators are effectively pushing Nvidia's Chinese customers towards Huawei AI chips, which is not necessarily
(49:48):
the right thing to do. And that actually brings us full circle to the very first article. I really hope that the US government will work together with US companies in order to make sure that on one hand the US stays ahead in the AI race, but on the other hand, that we are keeping as many companies around the world as possible depending on US technology for their AI needs. That's gonna be a very delicate balancing act.
(50:08):
And staying on the topic of the legal side, and staying on the topic of government and regulation involvement in AI, a US district judge in San Francisco has sharply criticized Meta's defense of using copyrighted books to train its Llama models. So this is a lawsuit that was brought by comedian Sarah Silverman and authors Richard Kadrey and Christopher Golden, who filed a
(50:28):
2023 copyright infringement lawsuit against Meta saying that Meta is using their books to train their models. And the judge said, and I'm quoting: you are dramatically changing, you might even say obliterating, the market for that person's work, and you're saying you don't even have to pay a license to that person. I just don't understand how that can be fair use. Now, if training AI models
(50:50):
on copyrighted material is not fair use, which has been the claim of these companies all along, this has profound implications on how AI can be trained in the future. Now, what's gonna be the outcome of this particular case? I don't know. What implications will that have on future cases? I don't know. But what I've said time and time again is that this will end up at the Supreme Court, and with the current setup of the Supreme Court,
(51:10):
I assume this is not gonna be as harsh as this particular judge. But now there might be a first case that actually says that training AI models on copyrighted materials is not fair use, which, as I said, has profound implications. That's it for today. Don't forget that we also have a survey. I want to know what you think about this podcast, and if you want to be able to impact what is going to be the content of this podcast in the future, please fill out the survey.
(51:32):
There's a link for both the course and the survey in the show notes, and filling out the survey will take you less than a minute and it will give us a lot of information that will serve you. So please go ahead and do that. As I mentioned earlier, on Tuesday we are releasing an episode on how to build AI agents with Relevance AI, which is a fascinating episode, and I'm sure many of you want to learn that capability. And until then, have an awesome weekend.