Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hello and welcome to a weekend news edition of the Leveraging AI podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career.
This is Isar Meitis, your host.
And while last week was the week of the AI agents explosion, this week we have multiple topics.
All of them are fascinating.
(00:21):
Some of them are major changes in OpenAI's leadership.
That is not the first time, but it seems to be the final nail in the coffin of the nonprofit version of OpenAI.
We're going to talk in depth about that, but also lots of fascinating news from other leaders in the industry, as well as impacts on creativity with amazing new video capabilities
(00:43):
and their impact on Hollywood, as well as some big, bold predictions from leadership in the industry.
So we have a lot to talk about.
By the way, I'll be speaking at the AI Realized conference in San Francisco on Wednesday, October 2nd.
So if you are there and you're a listener of this podcast, I would love to meet you.
So come and say hi.
And now let's get the news started.
(01:13):
If you've been a regular listener of this podcast, you know that there have been major departures of senior leaders at OpenAI since the big event of the ousting of Sam Altman as the CEO, and then his return, and then all the turmoil that came after that.
So one after the other, more and more leaders from OpenAI have
(01:33):
left.
And this week there was another big earthquake at OpenAI when Mira Murati, the chief technology officer of OpenAI for the past six years, announced that she's leaving the company to, quote unquote, pursue personal exploration.
Immediately after that, chief research officer Bob McGrew and
(01:54):
the VP of research Barret Zoph also announced that they're leaving the company, OpenAI.
Now, these are top leaders on the technical side of OpenAI that are leaving the company.
That's in addition to Greg Brockman, who is on personal leave until the end of the year, and it's not very clear what
(02:15):
that means, but he announced it at the same time they announced the launch of o1.
What does that tell us?
All of this is happening as OpenAI is in the midst of raising amounts that are not clear yet, but it's probably going to be around $6 to $7 billion at around a $150 billion valuation.
Now, there are some big names that are going to be a part of that
(02:37):
investment, including some of the biggest companies in the world, including Apple and potentially Google, as well as obviously Microsoft, who invested in the earlier rounds, and a few other big investing firms.
None of these companies, I think, will agree to the current structure of the company, and hence there have been discussions and rumors for a very long time that OpenAI is probably going to
(02:57):
separate itself from its origin, which is a nonprofit organization that was built to benefit humanity with AGI, and this, as I mentioned earlier in the opening, might be the last nail in the coffin.
Now, there are only three people of the original founders of OpenAI and its leadership that are still in the company out of
(03:21):
13; everybody else has left.
Now, that's not the first time that a company gets founded and then people leave.
But I think most of these departures have to do with the fact that OpenAI has left the direction that it initially took, that people signed up for, both in terms of the purpose of the organization, as well as the level of importance of safety in
(03:44):
that environment.
So to benefit humanity, you need to be very cautious with what you're doing.
And I think that went out the window a very long time ago, and we've seen multiple examples of that in the past year.
So the departure of such important figures in the company is obviously a big deal.
And I think it hints that the, if you will, takeover by Sam
(04:08):
Altman of the company, to lead it in the direction he feels is the right direction, is very obvious.
I think there's going to be very little opposition to Sam moving forward.
And I think we will see OpenAI changing into a for-profit company sometime in the near future, and then raising this initial amount of money, which, again, I don't think is a lot of
(04:31):
money.
And we're going to talk a lot about money in this particular episode, including about OpenAI, but they're going to raise $6 or $7 billion, at least in this round, and I think they're going to raise a lot more than that in the coming year.
But the other big, interesting piece of news coming from OpenAI this week is that they finally released the advanced voice
(04:52):
mode.
So OpenAI demoed advanced voice mode a day before Google made their announcement earlier this year.
But they did not share it with us until this week.
There were a lot of rumors why: that it wasn't ready, that it's dangerous.
There were a lot of issues with the voices that they were using that sounded too much like celebrities and people didn't
(05:12):
like it, but it's now available and it's released and you can start using it.
I must say that I've been using OpenAI with voice for a very long time now, mostly as voice input: instead of typing, I just talk to it and tell it what I want it to do.
But I never actually tried to have a conversation with it, or I tried once or twice before, and it wasn't great.
And this week I found myself driving back home late at night
(05:35):
and I wanted somebody to talk to and I didn't want to bother anybody.
And it was the day that this was released.
I was like, okay, that's going to be interesting.
I'm going to be speaking at this conference next week, and I have different ideas on what exactly I want to change and focus my presentation on, and I need somebody to brainstorm this with, and this will be interesting.
And I had a 20-minute conversation with the new advanced voice mode.
(05:55):
And I must admit, it's incredibly good.
Now, in the beginning of the conversation, it was really awkward, because it's weird having a conversation with a machine.
But I must admit that about five to 10 minutes in, you totally forget about the fact that it's a machine, and you're just having a great conversation and brainstorming session with somebody who's an
(06:16):
expert on any topic that you want.
So it was really a helpful thing for me as I was preparing for my presentation next week.
But I think in general, this is an incredible capability that is going to take over the way we communicate with machines and computers moving forward, because it just makes so much
(06:38):
more sense to communicate the way we always communicate, with our own voice, versus having to type, which is very cumbersome and slow and doesn't make any sense, unless you were forced to do it because that was our only way to interact with computers until now.
So I think this has profound implications in the near future
(06:59):
for how people are going to talk to machines.
And I also think that you should try this out, because it's a very interesting experience.
For me, one of the weirdest things was, I was telling it about the conference and what my plans are, and I asked it for its opinion.
And then it started talking for a few minutes about what it would do and what it would change and about things around the topic.
And it wasn't what I wanted, but I felt uncomfortable stopping it
(07:21):
in the middle of a sentence, which is weird, because it's a machine and it's not going to be offended if I stop it in the middle of a sentence.
I think it's something we'll get used to very quickly, and we'll be able to find the right way to work with this, just like we learned prompt engineering and other techniques to make the most out of these AI capabilities.
Now, this advanced voice mode was released in the US, but it was
(07:43):
not released in the EU, because there are restrictions in the EU AI Act that are preventing it from being released.
Specifically, the AI Act has specific clauses saying that AI systems must not infer emotions from actual people.
And this tool has the ability to detect and express emotion.
(08:07):
And so it's currently not allowed to be released in the EU.
I don't think that when the EU put this law in place, they had that particular use case in mind.
There are obviously pros and cons here.
On one hand, it's going to put EU people and businesses at
(08:27):
a disadvantage compared to other people on the planet who will be able to use this functionality.
On the other hand, it's protecting them emotionally from these tools potentially manipulating their emotions.
And so I'm not sure where I stand; I'm on the fence on all of this.
I'm sure we're walking into a very weird future right now.
So this is not science fiction.
This is not five years down the road, where we have time to think
(08:49):
about it.
This capability is available right now.
So what's the right approach to this?
I'm not a hundred percent sure.
I definitely think we need very clear regulations on what these tools are allowed and not allowed to do.
And I think they need to be enforced very aggressively, with very severe consequences for whoever is not following those
(09:12):
rules.
By the way, on the flip side of these rules, and since I already mentioned the EU: LinkedIn has started scraping user data to train AI models.
They initially claimed it's just to train their own models, and then they said it might go to Microsoft as well.
So it's not very clear what the training goes toward, but they've done this without changing their terms and conditions, which
(09:35):
means they are practically breaking the law, just like anybody else who's scraping data to train their AI models.
Only they did it on their own platform, on their own users, who have an agreement with them, which is obviously wrong.
They went and corrected that as soon as they got caught and this came out.
But if you want to switch it off, you can do this in your
(09:56):
settings.
So in your settings, they have a toggle switch for what they call use my data for training content creation AI models.
You can find it in your settings and switch it off, and then they presumably are not training on your data.
That being said, a lot of other companies scrape LinkedIn to train on that data.
So all you're doing is preventing LinkedIn from
(10:18):
training on your data.
You're not really preventing that from anybody else.
The reason it's connected to my previous news item is that they did not do this to people in the EU.
So obviously these regulations are working.
They are protecting the individuals who own the data, and the companies who own the data, and the people who are sharing information online, from having their data used for other
(10:41):
reasons.
And so I do think that regulation works.
I think that clear guidelines on what companies are not allowed to do, with very severe consequences, are the only way that we can protect ourselves from the negative aspects of this AI future.
That being said, there are issues with open source models, and we're going to talk about that later on today.
(11:02):
But back to OpenAI. Sam Altman, as we know, has been pushing for a while for a very significant investment in AI-related infrastructure.
There was the whole conversation earlier this year about raising $7 trillion from multiple bodies around the world, including Saudi Arabia.
But now Sam Altman has released a blog post, actually on his own
(11:25):
personal website and not through OpenAI, advocating for how critical it is that the US specifically invests significantly in infrastructure that will drive AI growth.
He went as far as saying that not doing so may lead to a lack of infrastructure and resources, which is a similar situation to ones
(11:47):
that in the past have led to significant wars.
So in Sam Altman's eyes, and I never underestimate his vision, he believes that AI is the power that controls the future.
And so governments will have to invest to get there, or go to war for those resources.
(12:08):
That's obviously not something I think anybody wants, but we might get there.
Now, in parallel to all of this, Microsoft and BlackRock, one of the largest investment companies in the world, have launched a $30 billion fund for AI competitiveness and energy infrastructure, all tied into AI infrastructure in the future.
(12:30):
The White House has held a round table with AI leaders to discuss that.
And so it's definitely a focus, but it's also a very big problem, because there's no clear path to profitability.
Putting large data centers in specific cities doesn't necessarily provide any economic benefits to
(12:53):
that region, because it's mostly data; there are not a lot of jobs that it generates.
We talked in the past about several different reports, including from Goldman Sachs, saying there's no clear path to ROI for any of these big investments that are being made right now.
The negative environmental impact that these data centers have right now is significant, because they require
(13:15):
a huge amount of water to cool them down.
They require a huge amount of electricity that currently also comes from fossil-dependent energy.
And there are obviously a lot of discussions about how to build nuclear reactors in order to drive them and so on.
But either way, there are negative implications to pushing that
(13:35):
forward.
And there are also the other negative impacts that we talk about a lot on the show, which is weaponizing these systems, which, there's zero doubt in my mind, is already happening.
So building AI-driven or quasi-AI-driven weapons, and deepfakes in the news, and AI-generated misinformation and disinformation, and job displacement.
(13:56):
So there are a lot of negatives in that.
But the fact that there's a huge push forward to provide the right resources for that is undeniable, and it's happening right now.
And you just need to be aware of that and try to impact it as much as you can, mostly through awareness and education about the negative implications, to hopefully prevent them, or at
(14:17):
least minimize them as much as possible.
Now, also in his blog post, which by the way was called The Intelligence Age, which he released this past week, he also talks about the fact that AGI is imminent and it's actually coming pretty fast.
So he used the phrase a few thousand days.
I don't know how much is a few thousand days, but it sounds to
(14:37):
me like less than the five to ten years which was the previous assessment.
So if we use the concept of a few thousand, not a lot of thousands, we're talking about two to three thousand days.
Divide that by 365 days a year, and you're in roughly a five-and-a-half to eight year timeframe where we're going to have superintelligence or AGI.
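The arithmetic behind that timeline can be sketched in a couple of lines. The 2,000 and 3,000 day figures are my reading of "a few thousand days," not numbers Altman gave:

```python
# Convert "a few thousand days" into years, assuming the phrase means
# roughly 2,000 to 3,000 days (an interpretation, not Altman's figure).
DAYS_PER_YEAR = 365

for days in (2000, 3000):
    years = days / DAYS_PER_YEAR
    print(f"{days} days is about {years:.1f} years")
# prints: 2000 days is about 5.5 years
#         3000 days is about 8.2 years
```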
(14:58):
The truth is, as I mentioned on this podcast several times before, it doesn't really matter when AGI is achieved, because every single milestone along the way has very significant impacts already.
So o1 is already better than PhDs in many different topics in answering questions, doing research, solving complex
(15:21):
problems.
So we don't need to wait for AGI to get all the implications of what it means for the workforce, for society, for education, for healthcare, for all these different things that make us what we are as a society.
And speaking about education about AI, OpenAI has announced that they've launched the OpenAI Academy, which is an initiative
(15:42):
aiming to boost AI skills and careers in developing countries, so low- and middle-income countries.
So this is obviously fantastic, right?
It's a great thing, and I really hope that more companies go in that direction, and we're going to talk in a minute about how Google is actually doing similar things. But the funny thing, or sad thing if you want to be realistic, is that they've
(16:04):
assigned $1 million in API credits to that fund.
That's not a lot of money.
That's actually a negligible amount of money in the big scheme of things.
Now, in addition, they're going to do other things in the program.
They're going to host incubators and contests.
(16:24):
They're going to be providing access to AI experts and developers for these developing countries.
And they're building a global network of developers to foster collaboration and knowledge sharing and so on.
So fantastic, this is great.
I think, again, it's a good idea, and it's an initiative that has to move forward.
But the amounts are just so sad, because they're so small
(16:46):
compared to all the other numbers that you hear.
So we just talked about the fact that they're planning to raise $7 billion.
And out of that, they're investing $1 million, or maybe a little more, because there's going to be all the infrastructure around it.
Let's say $5 million, let's say $10 million, in providing AI education across the world.
That's not enough.
Now, in parallel, Google has announced that they are
(17:07):
launching a $120 million Global AI Opportunity Fund.
So Sundar Pichai announced that at the UN Summit in New York this week, and it aims to expand AI education and training around the world, focusing on local languages.
So using Google Translate, they can take that content and knowledge and deliver it across the world very effectively.
(17:29):
And they're going to do this through partnerships with different nonprofit organizations around the world.
So again, great news: $120 million.
That's a way bigger amount than OpenAI is investing, which makes sense. Google is a much bigger company with an insane amount of positive revenue, versus OpenAI, who are losing billions every single year.
But to put things in perspective, there was another
(17:50):
article this week that talked about the return of Noam Shazeer to Google.
So a quick recap; we talked about this a few weeks ago.
He was a leading Google AI researcher who left due to disagreements with Google and founded Character.AI.
And recently Google, in this weird licensing deal, did not
(18:10):
really buy Character.AI, but they kind of assimilated Character.AI.
So this particular piece of news says that they've done the math on how much money was actually involved in that overall process of getting Noam Shazeer and his team back into Google.
And the number is $2.7 billion, with a B.
So Google is investing $2.7
(18:32):
billion in getting a few leading researchers back into Google, but they're going to invest $120 million in the Global AI Opportunity Fund.
So again, you see where the funds are being invested right now. The real money, all of it, or the vast majority of it, is
(18:52):
invested in driving these models faster and better, in a fierce and insane competition whose implications nobody knows, while investing some of it to say, hey, look, we're going to go to the UN and we're going to announce this global fund.
That sounds amazing, but in the big scheme of things, this is cents on the dollar, or not even that, compared to the amount of
(19:15):
money that these companies are investing in actually driving the models forward.
We have been talking a lot on this podcast about the importance of AI education and literacy for people in businesses.
It is literally the number one factor of success versus failure when implementing AI in a business.
(19:35):
It's actually not the tech; it's the ability to train people and get them to the level of knowledge they need in order to use AI in specific use cases successfully, hence generating positive ROI.
The biggest question is how do you train yourself, if you're the business person, or the people on your team, in your company, in the most effective way.
(19:56):
I have two pieces of very exciting news for you.
Number one is that I have been teaching the AI Business Transformation course since April of last year.
I have been teaching it two times a month, every month, since the beginning of the year, and once a month all of last year. Hundreds of business people and businesses are transforming the way they're doing business based on
(20:20):
the information they've learned in this course.
I mostly teach this course privately, meaning organizations and companies hire me to teach just their people.
And about once a quarter, we do a publicly available course.
Well, this once a quarter is happening again.
So on October 28th, we are opening another
(20:43):
course to the public, where anyone can join.
The course is four sessions online, two hours each.
So four weeks, two hours every single week with me, live, as an instructor, with one hour a week in addition for you to come and ask questions in between, based on the homework or things you learned or things you didn't understand.
(21:04):
It's a very detailed, comprehensive course.
So we'll take you from wherever you are in your journey right now to a level where you understand what this technology can do for your business across multiple aspects and departments, including a detailed blueprint of how to move forward and implement this from a company-wide perspective.
(21:24):
So if you are looking to dramatically impact the way you are using AI, or your company or your department is using it, this is an amazing opportunity for you to accelerate your knowledge and start implementing AI in everything you're doing in your business.
You can find the link in the show notes.
So you can just open your phone right now, find the link
(21:46):
to the course, click on it, and you can sign up right now.
The other piece of news is that many companies are already planning for 2025, and we are doing a special webinar on October 17th at noon Eastern.
So that's a Thursday, October 17th at noon Eastern.
We're doing a 2025 AI planning session webinar,
(22:11):
where we are going to cover everything you need to take into consideration when you're planning: HR, budgets, technology, anything you need as far as AI implementation planning in 2025.
We're going to cover all the things you can do right now, so still in Q4 of 2024, in preparation for starting 2025
(22:32):
with the right foot forward, but also the things you need to prepare for in 2025.
If that's something that's interesting to you, find another link in the show notes that's going to take you to registration for the webinar.
The webinar is absolutely free, so you're all welcome to join us.
And now back to the episode.
Now, another really interesting piece of news that came out this
(22:54):
week that is related to OpenAI is that Jony Ive, who is a legendary Apple product designer, who is behind some of the most notable products in tech history, left Apple a while back to start his own company, called LoveFrom, and there were rumors that he's working together with Sam Altman from OpenAI on an AI product, a
(23:15):
physical product.
And this week he confirmed these rumors. So the goal of what they're working on, and I'm quoting, is to create a product that uses AI to create a computing experience that is less socially destructive than the iPhone.
That's it.
They didn't mention what it's going to be.
They didn't mention exactly what they're working on, but it's
(23:37):
going to be some kind of a device, probably a wearable device that you don't have to hold in your hand.
So this could be glasses.
This could be a pin.
This could be an earbud.
This could be many different things.
We don't know the shape and form that it's going to take, but we know that two of the most advanced minds, one in creating usable, addictive consumer products, and the other in AI, are working
(23:57):
together on something like this.
Now, there are devices like this today, and we're going to talk about a few new capabilities like that that are moving forward.
But Rabbit came out and had a horrible device, and there were different pins and necklaces, and they all didn't have very good success,
(24:17):
A, because they just weren't good enough and they didn't deliver on what they were promising, and B, and that problem will continue to exist, there are a lot of ethical questions around privacy.
When somebody is wearing sunglasses and you don't know if they're recording you and analyzing you while they're talking to you, there's a big problem there.
(24:37):
The thing that I see is that resistance is futile in this particular scenario, because this is coming.
How exactly we're going to figure it out, I don't know, but we will have to, because I don't see a near future where this doesn't exist, where everybody's wearing a device that is connected to AI that can help them analyze everything that they're seeing and take actions
(25:00):
in the digital world connected to what's happening in the real world.
And some of it could be really cool, like you're seeing somebody wearing something that you like, and you can ask what they're wearing, and you can order it online with free shipping in two seconds, without actually clicking or opening anything.
That is cool.
And there's analyzing rocks that you find that are interesting, or identifying animals or plants, or knowing how
(25:21):
to navigate in a city you've never been to, or talking to people in a language you don't understand.
There are a lot of beneficial things to it, but there are a lot of really bad things that can be done with it, and we have to find a way, again, as a society, to deal with that.
But again, whether we're ready or not, these products are coming. And to continue that thought, Meta just unveiled its
(25:42):
Orion augmented reality glasses prototype at their Meta Connect event this past week.
So Meta had a huge event.
We're going to talk a lot about different announcements that they made, but the coolest stuff, and again, the most exciting and troubling stuff that they announced, is this new pair of glasses.
So Meta has this partnership with Ray-Ban, where they already
(26:02):
released consumer-grade glasses that have a camera and a microphone, and you can do different things with them.
But this is a whole different kind of animal, and it's worth going and watching the demos of Orion from the Meta announcement.
Very cool.
It knows how to do eye tracking and hand tracking and voice control.
And there's this cool wrist user interface where you
(26:23):
can type on your wrist to do things.
And it comes with its own set of AI features and relatively small batteries.
Right now, this thing costs a fortune.
So Meta's Reality Labs, which is the developer behind this thing, lost $16 billion in 2023.
That's obviously not the only thing that they've developed, but it tells you how much money goes into the development of this.
(26:46):
And I think one of the challenges is going to be, how do you make this thing, which is amazing in some of its capabilities, at a cost or a price that people will be willing to pay?
But the other questions are all the questions that I asked earlier.
What does it mean for society?
How do we live in a world where everybody's recording everything all the time and analyzing everything with AI all the time? As
(27:09):
I mentioned earlier, that's where we need to start thinking about it.
If you have any ideas, raise your voice and let's try to figure it out together.
Now, the other thing that Meta announced this week, surprisingly at roughly the same time that OpenAI announced their advanced voice mode, is that you'll be able to communicate with Meta AI using your voice and get answers in different voices.
Some of these voices are AI voices, but some of these voices
(27:31):
are voices that they licensed from specific celebrities, such as Judi Dench, Kristen Bell, John Cena, Awkwafina, and Keegan-Michael Key.
And again, the timing is interesting.
I don't know if OpenAI chose to finally release their voice feature because of this announcement from Meta and they
(27:53):
didn't want to be left behind, because they demoed their capability six months ago and they sat on it to make it, quote unquote, better, safer, whatever it is that they were doing, and now they released it the same week as Meta.
So I think maybe one has to do with the other.
But this feature will be available immediately across all of Meta's family of apps, including Facebook and Instagram and WhatsApp.
(28:14):
So you can start using that voice functionality starting right now.
They're probably rolling it out, so if you don't have it on your phone today, you will have it sometime in the next few days.
Meta also announced the launch of Llama 3.2, their latest and greatest open source model.
And the biggest difference from previous models is that Llama 3.2 is multimodal, and it knows how to understand images.
(28:36):
So all the Meta models before were text only.
And now you can insert images into the conversation, and it knows how to understand them and relate to them and work with you on what it's actually seeing.
That obviously connects very well to having glasses that can understand what's happening in the world around them.
So all of this technology is being developed in tandem with a
(28:58):
much, much bigger vision.
But right now, what they're giving us is the capability to work with images in the models.
They released two different models, one with 11 billion parameters, the other with 90 billion parameters.
And they also released two new versions of text-only models, with 1 billion and 3 billion parameters, that are faster and cheaper to use than the bigger ones, and still better than the
(29:20):
previous text-only models.
The Llama 3.2 models come with a 128,000-token context window, which is much bigger than we had from them before, and it's aligned with what you get from ChatGPT today.
So a very capable model that is open source, and they're claiming that it's very good at understanding images, including charts and graphs and captions on images, and it can
(29:42):
identify objects and create descriptions of them and so on.
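To make the 128,000-token context window concrete, here is a minimal sketch of checking whether a prompt fits. The four-characters-per-token rule of thumb and the helper name are my own assumptions for illustration; a real application should count tokens with the model's actual tokenizer:

```python
# Rough estimate of whether text fits in a 128,000-token context window.
# Uses the common ~4 characters per token heuristic; real token counts
# depend on the tokenizer, so treat this as an approximation only.
CONTEXT_WINDOW_TOKENS = 128_000

def fits_context(text: str, reserved_for_output: int = 4_000) -> bool:
    """Return True if the text, plus room for the reply, likely fits."""
    estimated_tokens = len(text) // 4
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW_TOKENS

print(fits_context("hello world " * 1_000))  # a short prompt fits: True
print(fits_context("x" * 600_000))           # ~150k tokens does not: False
```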
Now, I want to connect this to something that you may not be thinking of.
Meta has the largest inventory of images in the world, by a very big spread.
And that spread is growing every day because of the amount of images people are uploading to Facebook and sending on
(30:03):
WhatsApp.
They've had the ability to analyze those images for a very long time.
The only thing they're doing right now is releasing it to us in a way that we can make sense of it, and not just running algorithms in the background.
So all they're doing is taking something that they had been researching and working on and deploying for a decade, and they're giving us access to be able to use it on top of
(30:26):
the existing infrastructure and platforms that we already use.
Now, they're also claiming that Llama 3.2 is competitive with Anthropic's Claude 3 Haiku, which is their smaller Claude 3 model, and with GPT-4o mini, which again is the smaller model from OpenAI, and that it's better than other open source models like Gemma
(30:47):
from Google in the open source world.
So it's a very capable new model that is completely open source, and you can get access to it in all the normal places where you get access to open source: either Meta's websites, or Hugging Face, or llama.com.
But they're also integrating all these capabilities into their ad generation and into their enterprise solution.
(31:09):
So enterprises can now build agents to do customer interactions and help with purchases on the Meta platforms, which connects to all the agent craze that we talked about in the past few weeks, and specifically last week.
They are saying that 1 million advertisers are already using Meta's generative AI tools to create images and descriptions and so on.
(31:30):
And they're claiming that there's an 11 percent higher click-through rate and a 7.6 percent higher conversion rate from AI-assisted ad campaigns, which connects to my concerns and the EU's concerns and so on that these models are very good and very convincing at getting humans to take actions or think in a
(31:53):
specific direction, which on one hand is great for advertisers, and on the other hand, scary to anybody else.
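To illustrate what those relative lifts mean in practice, here is a small sketch with assumed baseline rates. The 2% click-through and 5% conversion baselines are hypothetical numbers for illustration, not figures from Meta:

```python
# Illustrative only: apply the claimed relative lifts (11% higher CTR,
# 7.6% higher conversion) to hypothetical baseline rates.
baseline_ctr = 0.020    # assumed 2.0% click-through rate (hypothetical)
baseline_conv = 0.050   # assumed 5.0% conversion rate (hypothetical)

ai_ctr = baseline_ctr * 1.11     # 11% relative lift
ai_conv = baseline_conv * 1.076  # 7.6% relative lift

print(f"CTR:  {baseline_ctr:.2%} -> {ai_ctr:.3%}")   # 2.00% -> 2.220%
print(f"Conv: {baseline_conv:.2%} -> {ai_conv:.3%}") # 5.00% -> 5.380%
```

The point of the arithmetic: an "11% higher CTR" is a relative improvement on whatever the baseline is, not 11 percentage points.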
And the final piece about Meta, and part of their announcements, is that their Imagine feature, which is their ability to create images, is rolling out with new
(32:13):
functionality to all their platforms, fully integrated with the existing user interface.
So users across Facebook and Instagram and WhatsApp will be able to create images as part of their stories, or as part of their profile pictures, and as part of everything else that has to do with communicating and using these platforms.
Some of this existed before, but they've added more and more places where you can use the functionality natively
(32:36):
within the existing apps.
Now I'm going to quote Mark Zuckerberg in order to explain to you exactly what the vision is. He's talking about unlimited access to those models for free, integrated easily into their different products and apps. So the move is very clear: they have billions of people using their products every
(32:59):
single day, and making the AI functionality available within the native apps, in the way people have used those apps before, makes for very fast adoption.
So they're currently claiming that they have 500 million monthly active users of the Meta AI capabilities, and they
(33:19):
predict that Meta AI will be the most used AI assistant in the world by the end of this year. And again, they're saying that because it's just going to be part of apps that people are using anyway. So they have a huge amount of distribution and a very loyal customer base that is just going to use this seamlessly within the existing applications.
(33:40):
Now, let's move from Meta to Anthropic. We talked about OpenAI raising money; well, Anthropic is also out for its next raise. And the next raise is supposedly going to be at a 30 to 40 billion dollar valuation, doubling their valuation from their previous round earlier this year.
(34:00):
To put things in perspective, the assessment right now is that Anthropic will end this year with 800 million dollars in revenue, which, by the way, is significantly less than OpenAI. OpenAI is projected to end the year with around 4 billion dollars in revenue. But Anthropic is, again, projected to burn about
(34:21):
2.7 billion dollars, so they're roughly 2 billion short for 2024. These companies are burning through insane amounts of cash, both on talent, which we talked about before, like how much Google spent to bring back one person with his team, so a huge amount of money on talent, but also a lot of money on training and inference for all the models that they're
(34:43):
running.
An interesting new announcement from Anthropic this week: they have released what they call contextual retrieval, which is a new methodology that gets significantly better results when doing RAG, which is getting AI to respond based on information that you provide, so your documents, your databases, and so on. They're claiming that it reduces the retrieval error rate by up to 67%, so
(35:07):
significantly fewer hallucinations by using this. I'm not going to go into the details of exactly how to do this, but they're saying that they've tested it across various domains, such as code, fiction, scientific papers, evaluations, financial document analysis, and so on. And they have released what they call a cookbook, so basically how you can implement this to make the most
(35:31):
out of your data.
So if this is something yourorganization is trying to do,
it's worth researching whatAnthropic just released.
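For listeners who want the shape of the idea: the trick is to have a model write a short blurb situating each chunk inside the full document, and to prepend that blurb to the chunk before embedding or indexing it. Here's a minimal sketch; the prompt wording, `build_context_prompt`, and the `fake_llm` stand-in are my own illustrations, not Anthropic's exact cookbook code.

```python
# Sketch of contextual retrieval: before indexing each chunk for RAG,
# an LLM writes a short context blurb situating the chunk within the
# whole document, and that blurb is prepended to the chunk. The raw
# chunk alone often lacks the context a search query would match on.

def build_context_prompt(document: str, chunk: str) -> str:
    """Ask an LLM to situate `chunk` within `document` for better retrieval."""
    return (
        f"<document>\n{document}\n</document>\n"
        "Here is the chunk we want to situate within the whole document:\n"
        f"<chunk>\n{chunk}\n</chunk>\n"
        "Give a short context that situates this chunk within the overall "
        "document, for the purpose of improving search retrieval."
    )

def contextualize_chunks(document, chunks, llm):
    """Return each chunk with its LLM-generated context prepended,
    ready to be embedded or BM25-indexed as usual."""
    return [llm(build_context_prompt(document, c)) + "\n\n" + c for c in chunks]

# Illustrative stand-in for a real model call (e.g., Claude via an API).
def fake_llm(prompt: str) -> str:
    return "This chunk is from the revenue section of ACME Corp's Q2 filing."

doc = "ACME Corp Q2 filing ... The company's revenue grew by 3% ..."
chunks = ["The company's revenue grew by 3% over the previous quarter."]
indexed = contextualize_chunks(doc, chunks, fake_llm)
print(indexed[0].splitlines()[0])  # the prepended context line
```

In a real pipeline, `llm` would be an actual model call, and the contextualized chunks would then go into your usual embedding index or keyword store; the retrieval step itself doesn't change.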
Now, speaking of model releases and new functionality, Google has announced two new models on their platform, Gemini 1.5 Pro 002 and Gemini 1.5 Flash 002, which are new variations of the previous models
(35:53):
that they released. The biggest differences are that they're about 15 percent faster and have increased rate limits, so you can do more requests per second through the API. They're also claiming higher benchmark performance, like a 7 percent improvement on MMLU-Pro and a 20 percent improvement on math-heavy benchmarks.
(36:15):
So, better models, for cheaper, that work faster. And that has been the trend all along, and that's going to continue being the trend: we're going to get better and better models that work faster and cost us less, which, connecting to everything we said before, has a lot of beneficial aspects, but also a lot of negative aspects as well.
(36:35):
Now, since we mentioned a little bit about fundraising: Black Forest Labs, the company that gave us Flux, the image generation capability that has been integrated into Grok, which is the AI model that runs within X for its paid members, is looking to raise additional money. They just came out of stealth a couple of months ago, sharing
(36:57):
that they had raised 31 million dollars so far. And they're currently looking to raise a hundred million dollars at a one billion dollar valuation, only a few months after releasing their product. As somebody who's been using their product, I can tell you it's absolutely amazing. It's the only image generation model right now that, in my eyes, competes with Midjourney, so it's obviously developed by people who are very capable.
(37:19):
And the thing they're going after, in addition to improving their image generation capability, is that they're going to use the money to develop a state-of-the-art text-to-video tool, which I'm going to be very excited about. And now we're going to talk a lot about video generation and what's happening in that field, because a lot is happening. We talked a lot about that in the past few months.
(37:41):
And I told you even before that 2024 is the year of AI video generation. I said that, by the way, in 2023, but it's turning out to be highly accurate. Runway, the company that maybe has the most capable model right now that is available to us, has announced that they're releasing an API for their platform. It's currently only available
(38:04):
with limited access and on a waitlist. It offers Gen-3 Alpha Turbo, which is their faster, slightly smaller model than their flagship Gen-3 Alpha without the Turbo in it. And the pricing is going to be one cent per credit, and you need five credits per second. So basically five cents per second, which means that to produce a
(38:24):
total of one hour of video, you're going to pay $180.
Now, that sounds like a lot for AI users who are used to paying $20 a month for everything, or to getting stuff for free. But if you compare it to traditional video production, $180 for an hour's worth of video is negligible, smaller by about three zeros than the historical, traditional ways of producing video with lighting and cameras
(38:47):
and actors and audio and editing and so on. So $180 for an hour's worth of produced video is basically free compared to traditional ways.
And it's very powerful.
And now it's going to be available through an API, which obviously has profound implications for the whole
(39:10):
creative and video generation industry. A study done by the Animation Guild this year is projecting significant impact on this industry; they estimate that a hundred thousand jobs in the entertainment industry are going to be affected by AI in the next two years.
Now, at the same time that Runway made their announcement,
(39:30):
Luma, which is their biggest competitor in the Western hemisphere, announced that they are releasing an API for their platform, called Dream Machine. So the Dream Machine API is now available, and not just on a waitlist; you can actually get access to it right now. And it's roughly the same price as the Runway API: it's
(39:51):
going to cost you about $252 for a full hour of video production. Again, you cannot produce a full hour of video in one shot; you produce very short videos. But if you need to produce a total of a full hour, just to compare the two, it's going to cost you $252, versus $180 on the other model.
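If you want to sanity-check those per-hour figures yourself, they follow directly from the quoted per-second rates. A quick back-of-the-envelope script, using the rates as quoted in this episode (a snapshot, not live pricing):

```python
# Back-of-the-envelope math for the video API costs quoted above.
# Rates are the ones quoted in this episode, not current pricing.

def cost_per_hour(credits_per_second: int, cents_per_credit: int) -> float:
    """Dollar cost of generating a cumulative hour (3600 s) of video."""
    return credits_per_second * 3600 * cents_per_credit / 100

runway = cost_per_hour(5, 1)            # 5 credits/s at 1 cent per credit
print(runway)                           # 180.0
print(f"${252 / 3600:.2f} per second")  # Luma's $252/hour works out to $0.07/s
```

So the two services land within a factor of 1.5 of each other, and both are a few orders of magnitude below the cost of a traditional shoot.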
The whole AI-generated video world has been booming, and we
(40:14):
talked a couple of weeks ago about Adobe releasing their enterprise-safe Firefly video creation. We talked about Alibaba's models, which are amazing. We talked about some other Chinese models, which are providing amazing capabilities. So this is moving very, very fast.
And that's before OpenAI has released Sora. So Sora, which was announced and demoed
(40:36):
very early this year, I think in February, and which I think sent this whole craze into hyperdrive, has not been released yet, other than, two weeks ago, to a few large groups from the industry, like Hollywood studios and so on, to figure out how to release it safely and in a better way. It still produces better videos, longer videos, than any other platform that is available to us today, but it
(40:59):
was not released to the public.
OpenAI has announced that they have a dev day at the beginning of October. Maybe we will learn there when this is coming out, or maybe we'll even get access to Sora right there. I assume they're going to release it soon, because there's a lot of competition gaining a lot of traction, and I don't think they would want to stay behind.
Now, speaking about Hollywood and the impact of AI on it,
(41:24):
Lionsgate, which is a smaller production company, and when I say small, they're a huge company, but smaller compared to the other studios, has announced a partnership with Runway that allows Runway to train on Lionsgate's existing content, both its film and TV portfolio.
(41:45):
That is a huge amount of content to be allowed to train on, with some big-name movies and TV shows. And the goal is to create an AI generation capability for Lionsgate, to be able to produce new content based on their existing content, or, as they put it, to augment their work with AI.
(42:08):
Now I want to take you back about a year, to the summer of last year. There was a big strike by the writers and the actors, who were trying to protect their future against the usage of AI and how it's going to impact their jobs. It was a very long strike, and eventually they won and got the agreement they wanted. And when they did, I said that there's no way this agreement will help them in the future, because once it comes to the
(42:29):
point of whether your studio is going to go out of business or will have to use AI, the studios will have to use AI. Now, what does that mean for actors? What does that mean for writers? I don't exactly know, but it doesn't look as bright as today.
Now, I know what some of you are thinking: there's still going to be a place for human actors. Absolutely. And there are still going to be celebrities playing in
(42:53):
big feature movies. But that being said, if you go to the east side of the world right now, to China, Japan, and South Korea, there are already AI celebrities that are not real, that are generated by AI, that are followed by tens of millions of people who are buying products from them, watching their TV shows, watching the concerts that
(43:14):
they're putting together. And they're not real; they're all AI-based. So if you don't think there are going to be Western-hemisphere, Hollywood-based fictional AI characters that are going to be as popular as Tom Cruise and Taylor Swift, think again, because it's coming.
So again, the implications for the entire entertainment industry
(43:37):
are profound, like for many other industries, but maybe even more so, and it's just a matter of time until we're going to see more and more AI-generated content. On the beneficial side of this particular agreement, I see very interesting business models that are actually beneficial to everyone with this new kind of training.
So let's say you have an actor, a real human actor, who has
(43:58):
participated in a series of movies or in a TV show and is highly popular. Now you can create ads with that person that connect back to scenes, themes, and actions from that movie, without having to bring the actor over and without having to film. So you can generate these ads at a much cheaper rate, saving everybody money, still selling the same product with the same
(44:21):
level of success, still compensating everybody along the way, including the actor and the studio, while making money for the people who want to sell a product, with significantly less overhead.
So there are definitely benefits in that process: the ability to spin out new shows, new videos, new movies that we would like to watch, much faster and much cheaper, which
(44:43):
will allow us to get more creative content, because a lot of the money is not going to be spent on production. It's going to be spent on creating better content for us, because you'll have more money to spend on that, since production is going to be probably two orders of magnitude cheaper.
I'm going to shift to some interesting announcements that are not necessarily industry-specific, but are very important
(45:07):
for us to understand about the future that we're going into. So Marc Benioff, the CEO of Salesforce, made a very interesting statement this week. He basically said that AI and human customer service agents are now indistinguishable. Basically, what that means is that when you are either chatting or talking to a customer service agent, there's
(45:29):
absolutely no way for anyone to know whether that customer service agent is human or AI-based.
We talked a lot about this topic on this podcast in the past year, from the early announcement this year by Klarna, that they were able to do the work of 700 full-time agents with AI assistants that are getting better scores on customer service
(45:52):
than the actual human agents, to Octopus Energy, which is reporting 80 percent customer satisfaction on email assistance versus 65 percent satisfaction when it's provided by humans. So again, a 15-point spread, with better results from the AI.
The flip side is that there are companies that obviously tried stuff like that and failed, like McDonald's, which
(46:12):
was trying automated order takers; they tried it in a hundred restaurants and it didn't work properly. But I think it just wasn't done well enough, because it's very obvious to me that when it's done well, and as Marc Benioff just confirmed, there's no way to know. And these agents run 24/7, 365 days a year.
(46:33):
They never get tired.
They never fight with theirspouse.
They're never in a bad mood.
They know how to speak anylanguage on the planet.
And they're connected to all the databases, so they can get much faster answers and resolve issues faster than human agents.
So I think the whole customerservice industry is going to go
through a very dramatic changein the next couple of years.
(46:54):
Now, what does that mean forjobs for this huge industry?
I think you understand very wellwhat it means for jobs in that
industry.
I don't think we're going to have human customer service agents at all, other than maybe supervisors and higher-level people to deal with very specific, unique cases, within five years from now, potentially sooner.
(47:14):
And to add one last thing to that: these models, which now can speak and understand what we say, understand emotion, and simulate emotion as if they have emotions, which they obviously don't, have the capability to be very persuasive. So Yoshua Bengio, who is considered one of the godfathers of AI, one of the leading researchers in the field for
(47:37):
decades, and a Turing Award-winning computer scientist, says that there are serious risks with the new o1 model that just came out from OpenAI, given its ability to lie compared to previous OpenAI models and compared to other models that exist today. And he's talking about the fact that it will do anything,
(47:57):
including lying, including deceiving people, in order to achieve the goals that were set for it. And this was shown in several different experiments by third-party companies, including Apollo Research.
So what does that mean?
It means that, again, we have, as
(48:17):
a society, to figure out, together with governments, together with these companies, together with hopefully international partnerships, ways to put limits on what these tools are allowed and not allowed to do, and to impose very significant consequences on anybody who doesn't follow those regulations, so we can benefit from all the amazing capabilities that these tools
(48:40):
bring to the world while minimizing their potential negative impacts.
That's it for this week.
There's a lot of other news.
There's probably 15 differentthings that did not go into this
episode.
If you want to know what they are, sign up for our newsletter, which has all of that. It has announcements of all the events and trainings we're doing, which are deep dives into very specific topics.
We have hundreds of people whoparticipate in those training
(49:01):
sessions every single week.
So if you want to know how to do that, just open your app right now; in the show notes, there's a link to sign up for the newsletter.
And in the newsletter, you can find all of that information, including all the news we don't put into this episode. If you are enjoying this podcast, which I really hope you are, and based on the growth in listenership I know a lot of people are,
(49:21):
then first of all, thank you for listening to the podcast and for continuing to consume it. But if you're enjoying this, if you're learning from this, please subscribe.
Consider opening your app right now, whether it's Spotify or Apple Podcasts, and reviewing this podcast. Pull up your phone and do this right now.
And also, while you're at it and you have your phone open, share this podcast with a few people who you think can benefit from
(49:42):
it, whether friends, family, colleagues, or other people that you know. I would really appreciate it. You'd be helping the world by letting more people become more educated about the goods and bads of AI and how they can prepare and learn more about it, and you'd be helping me grow this podcast, which is your way to participate in the process of driving AI literacy.
(50:03):
We'll be back on Tuesday with a fascinating how-to episode, where we're going to dive into a specific AI use case that can benefit your business. And until then, have an awesome weekend.