Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 2 (00:00):
Hello, and welcome to a Weekend News episode of the
(00:02):
Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career.
This is Isar Meitis, your host, and we had an explosive week of news.
I could have probably made this a three-hour episode without any problem, without losing stuff to talk about, but I will try to keep it down to less than an hour.
(00:23):
We're going to cover OpenAI transforming into a for-profit organization and what that means to the world and to investments and so on.
We're going to cover lots of new interesting releases and new updates from OpenAI and from Anthropic as well.
We're going to cover some head-to-head clashes across several different industries, so huge announcements in the code-
(00:44):
writing AI universe with interesting releases from Microsoft, Cursor, and Windsurf, a head-to-head collision in the creative space with interesting releases from Canva and from Adobe, and this was also Q3 earnings announcements in the tech world.
So there's a lot of stuff to learn there.
Huge investments across several different companies, so we have
(01:07):
really a lot to talk about.
So let's get started.
As I mentioned, the biggest news of this week is that OpenAI has finalized its long-awaited shift to a for-profit organization.
Now, if you remember, and we covered this across multiple episodes of this podcast, this hasn't been smooth sailing.
(01:27):
There have been a lot of organizations that were trying to slow or prevent this process, which included open letters from leading influential people across multiple industries, including the AI industry and so on.
It included some very serious conversations with the Attorneys General of California and of Delaware.
It included a lawsuit and a counter-lawsuit with Elon Musk.
(01:49):
So this hasn't been a smooth transition, but I said all along, and you can go back and check every episode where we talked about this, that this will prevail, because there's too much money and too much political interest and too much international impact to prevent this from happening.
The question was how exactly it would turn out.
So I must admit it turned out way more balanced towards
(02:11):
safety and general public benefit than I thought it would, which I believe is good news for everybody.
So we're going to cover a little bit on what the new structure looks like, a lot about what that means for the future, and a little bit about what it means for the relationship with Microsoft, which was their early biggest investor and backer.
So let's dive into the details.
So first of all, the nonprofit OpenAI Foundation retains 26%
(02:34):
ownership, with a warrant for even more shares as the company grows, which is very important because it means they're going to retain control over some very critical aspects, based on the conversations and the release of the details both by OpenAI and the Delaware Attorney General.
So if I need to summarize the outcome, I will use the words of Bret Taylor, the chairman of OpenAI, who said in a blog post,
(02:55):
"we believe that the world's most powerful technology must be developed in a way that reflects the world's collective interest."
Well, that sounds very much aligned with the original agenda of OpenAI, which we know is not completely aligned with that.
But I must admit, after reading the notes from the Delaware Attorney General, I feel much better about the direction this has actually taken in the end.
So Kathy Jennings, who is Delaware's Attorney General,
(03:16):
released a formal statement of no objection to OpenAI's for-profit restructuring.
Here are some of the details from that letter.
First of all, the nonprofit retains full control over the future, so there are now two separate bodies.
One is called the OpenAI Foundation, also known as the NFP, and it holds the sole authority for appointing and removing
(03:38):
the PBC board directors.
The PBC is the Public Benefit Corporation, which is the for-profit arm.
So the directors, the people who will make the final decisions on the future of what OpenAI is going to do, will be appointed or removed by the nonprofit foundation, which I believe is a big deal.
The second one, which is also a big deal, relates to the Public Benefit Corporation.
The directors of the for-profit side of OpenAI must ignore
(04:02):
shareholders' interests when deciding on safety or security issues involving AI models.
That's a huge deal.
This means that these people have a fiduciary responsibility to put safety ahead of shareholder interest, despite the fact that they are directors in a for-profit organization.
Now, the additional interesting and important part is that the nonprofit side's Safety and Security Committee stays
(04:23):
completely independent, and it retains the power to stop model releases even if risk thresholds are met, meaning the people who will make the final decision to stop
a release of a model can do that regardless of what it means for profits and regardless of what it means as far as internal
(04:46):
testing, if they believe it generates a risk one way or another.
Now, the nonprofit organization retains perpetual access to the for-profit organization's AI models, API, research, IP, and employee support to advance its mission to benefit humanity, which again is very, very important.
Within one year, the nonprofit board must include at least two directors who do not serve on the for-profit organization's board,
(05:08):
which again serves the obvious goal of making it an independent body that can make its own decisions, which I believe is very important.
And the Attorneys General must get advance notice of any major governance changes, for even additional oversight.
So what does all of this mean?
It means that the Attorney General actually took significant steps in order to hopefully ensure the safety of
(05:31):
future releases of any models coming out of OpenAI, as well as making sure that the nonprofit will push as much as possible for benefits to humanity above the needs of the for-profit organization.
Again, I must admit, this is more than I thought we were going to get, and to put things in perspective, all the other labs, like Anthropic and definitely the Chinese companies, do not have these kinds of measures in place, right?
(05:53):
So from that perspective, it actually puts OpenAI in a safer-for-humanity position than any other lab out there that is developing leading, cutting-edge new models.
So what does that mean for the future of OpenAI?
Well, first of all, it means that they can now freely raise significantly more money, and they will need that money, because they are planning to spend crazy amounts of money on
(06:14):
infrastructure.
More on that in a minute.
The first piece of news that came out immediately after this announcement is that OpenAI is eyeing a $1 trillion valuation IPO sometime in 2027, with the goal of raising $60 billion in that initial public offering.
Now, to put things in perspective, currently in the
(06:35):
world there are four companies, only four companies in the entire world, with a valuation of over $1 trillion.
These are Nvidia, Microsoft, Apple, and Alphabet, also known as Google.
That is it.
So if this actually goes forward, this will make OpenAI, a company that nobody had heard of three years ago, the fifth
(06:55):
largest, most valued company in the world.
Now, to give you another perspective, again, if this moves forward, and these are very early stages: the largest IPOs in history have been the Saudi oil company Aramco, which raised $25.6 billion on a $1.7 trillion valuation,
(07:18):
Alibaba, with a $21.8 billion raise on a $175 billion valuation, and SoftBank, with $21 billion raised on a $64 billion valuation back in 2018.
So from a valuation perspective, this is going to make it the second largest IPO ever, after Aramco's IPO in 2019, and by far the largest in money raised, if they can actually raise these $60 billion. Now, based on the raises they were able to make in the private markets, that should be
(07:40):
an easy-peasy thing for them to do.
And so from every perspective you're looking at this, this is going to be an incredible IPO, and I'll be very surprised if it does not happen, because the amount of money they're planning to raise cannot be raised in the private markets, even though they've been very successful in raising very significant amounts so far.
(08:00):
So the rumors are talking about filing in the second half of 2026, with a listing possible in 2027.
Now, that being said, an OpenAI spokesman said the following: "An IPO is not our focus, so we could not possibly have set a date.
We are building a durable business and advancing our mission so everyone benefits from AGI."
As I mentioned, I will be extremely surprised if an IPO is not on the relatively near
(08:23):
horizon, just because of the crazy amounts of money they're trying to raise, aligned with the crazy growth that they're seeing.
Their annualized revenue run rate is supposed to hit $20 billion by the end of this year.
But at the same time, they are growing their cash burn at an even higher rate, which means they have to raise these crazy amounts of money.
The immediate infusion of cash that came as a response to this
(08:43):
is that SoftBank's board has greenlighted the remaining $22.5 billion of its $30 billion pledge, which was pending on them converting to a for-profit organization before the end of the year.
So now that was done.
They actually got the rest of the money, which has brought OpenAI's funding to $41 billion so far.
Now, that sounds like a crazy amount of money, but they have
(09:05):
spent $16 billion on compute in 2025 alone.
This is just compute; it does not include salaries and other stuff, and it's supposed to grow to about $40 billion in 2026, which means that if they don't raise additional funds between now and then, they will run out of money, or be very close to that, depending on how much their revenue grows at the same time.
So, to recap the money raise: SoftBank's board had pledged a $30 billion commitment at the
(09:26):
beginning of this year, but $22.5 billion of that was pending on the conversion to a for-profit before the end of the year.
The SoftBank board has now greenlighted releasing the remaining $22.5 billion to OpenAI, for total funding of OpenAI so far of $41 billion.
(09:48):
Again, these numbers are staggering, and even just this recent number, $22.5 billion, is more money than almost any company has raised in history, as I mentioned in the IPO section before.
But that being said, the pace at which they are burning cash is growing at the same time.
So they are expected to burn through $8 billion in cash this
(10:09):
year and $17 billion next year.
And that obviously depends on how much they're actually going to spend on compute and infrastructure, combined with how fast their sales are going to grow.
So these numbers are conceptual.
The $8 billion is probably pretty accurate.
The $17 billion next year will depend on a lot of things that we may or may not know yet.
So as for the immediate future, OpenAI just got access to $22.5
(10:29):
billion.
But there's another aspect, a big aspect, which is the deal with Microsoft.
Microsoft was the early big investor in OpenAI, and without them, OpenAI probably would not exist the way it exists today.
They invested initially $1 billion, and over time over $13 billion.
And they had a very interesting relationship as far as capped profit sharing with OpenAI, because OpenAI was a nonprofit
(10:51):
organization, and that had to be replaced with a different kind of partnership as a for-profit organization is being formed, with assets and ownership for different investors.
And I must admit this was very interesting to watch, because there are a lot of interests involved.
One of the big things that was in the original agreement was that OpenAI could stop providing Microsoft with any models once
(11:12):
they independently decided they had reached AGI.
Microsoft's access to these models was also capped at 2030.
Both of these things were not very good for Microsoft.
So the final agreement that they have reached is actually very favorable to Microsoft, I believe.
And let me share with you some of the details.
First of all, Microsoft will own 27% of OpenAI at the current $500
(11:35):
billion valuation.
That's $135 billion on an investment of $13.8 billion.
That's not a bad return; we're talking about roughly a 10x return in less than four years from the initial $1 billion investment.
In addition, this has fueled a 2.5% surge in Microsoft shares that has now pushed their total valuation to over $4 trillion
(11:57):
in market cap, which by itself makes the initial investment look like peanuts.
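To sanity-check those numbers, here is a minimal back-of-the-envelope sketch in Python; the stake percentage, valuation, and invested amount are the figures quoted above, and the rest is simple arithmetic:

    # Back-of-the-envelope check of Microsoft's stake in OpenAI, using the figures quoted in this episode.
    valuation = 500e9         # OpenAI's current valuation: $500 billion
    stake = 0.27              # Microsoft's ownership share: 27%
    invested = 13.8e9         # Microsoft's total investment: ~$13.8 billion

    stake_value = valuation * stake           # = $135 billion
    return_multiple = stake_value / invested  # ~9.8x, i.e. the "10x" mentioned above
    print(f"Stake value: ${stake_value / 1e9:.0f}B, return multiple: {return_multiple:.1f}x")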
As far as who can declare AGI and what it means:
OpenAI retains the ability to declare that it has achieved AGI, but that cannot prevent them from providing access to models to Microsoft through 2032.
So that's a two-year extension.
And in addition, the AGI announcement will have to be
(12:19):
approved by an independent expert panel, which reduces OpenAI's power to make that declaration even further and lowers the risk to Microsoft to significantly less than it was before.
Now, from a cloud compute perspective, I believe this is a very balanced deal that supports the needs of both companies.
So OpenAI is now free to raise capital and sign deals with
(12:39):
rivals like Amazon and Google for non-Azure web services, which was problematic before.
But they are committing to buying $250 billion in Azure cloud compute, which basically means that Microsoft is still going to get the lion's share of future compute from OpenAI.
An interesting point that Satya Nadella shared:
he shared that when he made the initial investment of $1 billion
(13:03):
in OpenAI, Bill Gates, the founder of Microsoft, told him that he was making a really bad bet.
And the quote is, "yeah, you're going to burn this billion dollars."
Well, that is not how it turned out.
This turned out to be worth $135 billion right now, and it has grown the market cap of Microsoft significantly because of the huge investments in Azure that
(13:25):
OpenAI has put into it. And if the IPO goes forward and they're actually valued at a trillion dollars, that means that Microsoft's share will be worth more than $250 billion, which is more than most companies on the planet are worth.
So overall, I think both companies can be happy with this new agreement, and putting that behind them puts OpenAI in a very interesting position of completely running forward now
(13:46):
with nothing holding them back in the drive for global dominance.
And to do that, one of the things they did this week is they released a letter with an urgent call to the White House declaring that electricity is the new oil in this new global race for achieving very powerful AI, and they're warning that without massive new energy investments, the US will fall
(14:07):
behind China.
So, putting things in perspective with some information from that letter: China added 429 gigawatts of new power capacity in 2024, versus only 51 gigawatts in the US.
That's more than 8x the amount of electricity that China added compared to the United States, and OpenAI is calling on the US government to commit to a hundred gigawatts of new energy capacity every single
(14:30):
year between now and 2030, with the idea that electrons are the new oil.
This is the exact quote from OpenAI's blog post, which is framing electricity as a strategic national asset and not just a resource to power homes and industry.
And again, the quote is, "electricity is not simply a utility.
It's a strategic asset that is critical to building the AI infrastructure that will secure our leadership."
(14:52):
Now, for most of us, these numbers mean nothing, all these gigawatts.
So, to put things in perspective:
a CNBC analyst said that 10 gigawatts equals about the annual power usage of 8 million US households, which means that what OpenAI is urging the government to do, a hundred gigawatts per year, means the capacity of 80 million homes' worth of
(15:12):
electricity added to the US grid every single year.
That is obviously very significant.
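To make that conversion explicit, here is a tiny sketch in Python of the rule of thumb quoted above; the 10 gigawatts to 8 million households figure is the CNBC estimate, and the rest is just proportional scaling:

    # Rule of thumb from the episode: 10 GW of capacity ~ 8 million US households.
    households_per_gw = 8_000_000 / 10       # ~800,000 households per gigawatt
    requested_gw_per_year = 100              # what OpenAI is asking the US to add annually
    homes_equivalent = requested_gw_per_year * households_per_gw
    print(f"{homes_equivalent / 1e6:.0f} million homes' worth of capacity per year")  # prints 80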
I must admit that with the current administration, I would not be surprised if it moves in that direction.
Maybe not at the full capacity that OpenAI is trying to push for, but definitely moving to significantly more power.
And with the current administration, it may come from any company that is willing to build electrical
(15:33):
capacity, regardless of whether that's positive or negative for the planet, and regardless of the sources or the resources.
I think that's what's going to happen, and I'm not necessarily happy about this.
That being said, I do see the logic of this being a significant aspect of the future race to holding the most powerful technology the human race has ever developed, and hence there's a reasonable argument behind it.
(15:54):
I just really, really wish this will go a lot more towards clean energy, which in the immediate phase will come mostly from nuclear, and then maybe some of it from solar and other green sources.
Now, in an open Q&A session, Sam Altman revealed some interesting things as far as the future of where AI is going and what they are claiming when it comes to how fast research
(16:15):
in the AI field, or in any field, can accelerate.
Sam Altman said that they're going to have an intern-level AI research assistant by September of 2026.
That's less than a year from now.
That means something that can work at OpenAI at a junior capacity in AI research.
The more profound projection is that they're going to have a complete, legitimate AI researcher, a system that can
(16:38):
autonomously deliver significant research projects, by 2028.
At that point, we're basically hitting the singularity, because AI will be able to develop new AI, independent of human input, at a faster and faster pace.
Or, as Jakub Pachocki, who is OpenAI's chief scientist, said, "we believe that it is possible that deep learning systems are less
(17:00):
than a decade away from superintelligence."
So basically, the systems that we have today will allow us to get to superintelligence in less than 10 years, and we are already into those 10 years and counting.
Another thing that he mentioned that was interesting is that future models will dedicate entire data centers to single problems.
Basically, when there are going to be really, really big problems,
(17:22):
humanity-level, global-level problems that we will want to solve, we can dedicate an entire data center to that one problem, allowing the AI to research and get to solutions as an independent research lab running entirely within the capacity of that data center.
That is a very interesting approach, and it sounds like we're not too far from that future.
(17:42):
And to connect that back to the initial conversation, as far as the nonprofit side of this: the nonprofit side of OpenAI has pledged $25 billion for AI research into curing diseases and advancing safety.
That sounds like a really small amount of money compared to the crazy amounts of money that are being thrown around, but it's still $25 billion that OpenAI has committed, that other companies
(18:06):
have not committed, to solving really big global problems and advancing the safety of AI, which I think is great news.
Now, speaking of safety, another thing that was revealed by OpenAI this week is that every single week over 1 million active ChatGPT users discuss explicit suicidal planning or intent.
This is 1 million people every single week who are discussing
(18:30):
potential suicide with a chatbot.
Now, while this number is really, really large, it's 0.15% of the 800 million users that it has.
But what it is showing is a very interesting phenomenon, where a lot of people who are potentially considering suicide, who are definitely not having a good life right now, who may or may not have had an opportunity to discuss this with
(18:53):
anyone before, either because they were ashamed, or they were afraid, or they do not have access to mental health resources, either because of the place they live or because of budget constraints, can now consult with something, not necessarily someone, about their situation.
That being said, that makes it very, very scary.
And OpenAI, I must admit, is investing a lot in that
(19:16):
direction.
They have partnered with over 170 mental health experts to refine ChatGPT's responses, and they're claiming that GPT-5 is significantly better at dealing with these kinds of situations, with any mental health crises that are shared in its chats.
As you probably remember, there's an ongoing lawsuit from the parents of a 16-year-old who died by suicide after confiding
(19:38):
in ChatGPT, which has led OpenAI to take a lot of different measures, including improving its age prediction to protect children, including releasing the parental controls that they just launched a few weeks ago, and including new safeguards for long-conversation resilience, and so on and so forth.
So I'm very sad that that event has actually happened, but if 1 million people right now are considering suicide, and
(20:01):
ChatGPT can prevent them from doing that, and hopefully provide them with the right guidance or connect them with the right human assistance, then I think this could be a great promise for mental health globally.
And as the research around it evolves, if we can provide mental support for anybody who has internet access for 20 bucks a month, or free in many cases, I think that is a very big win for
(20:22):
mental health around the world.
That being said, there is a lot to ask about whether it is the right thing to put the future mental health of the world in the hands of an AI.
Is that a good idea or not?
I must admit, I don't know.
I'm on the fence on that, just because of what I said before.
I think it can provide great support for a lot of people who cannot get support otherwise, and by that measure it is very,
(20:44):
very good.
Whether it will be able to deliver the right support is a question that I'm still asking myself.
But the same question can be asked about human psychologists and psychiatrists, who cannot guarantee any success.
They are trying the best that they can as well.
So again, I'm still on the fence on that, but I'm leaning towards this being a great opportunity for the mental health of our world.
The one thing that I will say that wasn't mentioned in this
(21:06):
article, and that really troubles me, is that they're clearly saying that they made a lot of progress to make GPT-5 significantly better and safer.
However, if you remember, after all the snafu of the release of GPT-5 and eliminating all the old models, they brought it back so users can continue to chat with GPT-4o, and probably a lot of people will, because of the limits on GPT-5, at least on the free version.
(21:26):
So there's a whole backdoor to this.
Oh, we now have a much safer model that everybody can use, but a lot of people cannot use it as much as they want unless they pay.
And the fallback will be the model that is not as good at handling these problems, which I find problematic.
Can OpenAI find a solution for that?
Potentially. Like, if GPT-4o can identify the situation and then roll over automatically to GPT-5, or something like that,
(21:48):
there are ways that they can solve it.
I haven't seen them discussing it anywhere, or even mentioning this situation.
Continuing to some more tactical aspects of OpenAI, since we've already talked a lot about them.
Well, first of all, they shared something that is extremely powerful, which is shared projects.
Previously this was only available to Business, Enterprise, and Edu plans, and now it is available to everyone, including free licenses and obviously the Plus, Pro, and Go users, all
(22:11):
around the world, across all the different platforms: web, iOS, and Android.
So what the hell is this?
If you haven't used ChatGPT projects so far, you are missing out.
Projects allow you to create a small bubble, a small universe of context.
So if you want ChatGPT to know more about a project you're working on, about a client you're working with, about a plan that you have, about a trip that you're planning, whatever the case may be.
(22:31):
You can start a new project.
You can upload multiple documents and add different instructions to it to explain what this project is all about.
Again, it doesn't have to be a project; that's just the name that ChatGPT gave it.
So you can now share these projects with anyone you want, which is amazing, because you can now work with coworkers at work, or with your spouse at home on a trip that you're planning, and so
(22:51):
on and so forth, while sharing this universe of context with them, so you can get more custom, more specific answers and results aligned with the information in that project.
I am really excited about this because I'll be able to do this with many of my clients and many other people in the AI space who are collaborating with me, and I think this is incredible, and I'm very happy about this new release.
(23:13):
OpenAI also announced five key new updates to the recently released Atlas agentic browser.
These will include tab grouping for different profiles, which is really, really cool.
So instead of logging out and logging in with different profiles, you can have different tab groups with different profiles.
You can have one for work, one for personal, and one for whatever.
It doesn't really matter.
That is definitely a very helpful feature.
(23:34):
There is going to be a whole overhaul of bookmarking and shortcuts.
There are going to be more features in the sidebar, including the ability to select models right there from the different ChatGPT versions.
They're going to improve the @-mentions for richer context across different tabs.
So you don't have to copy and paste information from one tab to the other.
You can just use better @-mentions than they had before.
(23:55):
They are dramatically improving the speed of the agent mode, which is right now really, really slow.
Like, if you tried it, you will pull your hair out before it actually does something.
So it's supposed to start working significantly faster, and they're updating many other small things for what they call everyday essentials, including an ad blocker and other small fixes like
(24:16):
integrations with password managers, et cetera, et cetera.
So lots of new things are coming to Atlas, and as I mentioned when it was released, this is one more big step to global domination that OpenAI is taking, and a very big risk to Google Chrome and other browsers out there.
Not necessarily because OpenAI has a better agentic browser than they do (and they might, especially compared to Chrome right now),
but because they have 800 million active users, and that number is
(24:39):
growing dramatically, and I'll be really surprised if they don't unify all these different environments, so those 800 million users don't have to go to a different browser, because ChatGPT becomes their browser.
Another interesting piece of news from OpenAI this week was new pricing capabilities for Sora.
So right now, if you're using the Sora app, you can create X number of videos per day for free, and then you get capped.
(24:59):
In this new scenario, you'll be able to generate additional Sora videos by paying $4 for every 10 videos that you generate.
This basically moves it to a similar model as consuming tokens over the API, which makes a lot more sense for OpenAI.
You want to use it more?
No problem.
Just pay for what you use, versus use it for free.
This may allow them to provide more people with access.
So right now it's iOS only, or through the API through different
(25:22):
platforms, and it is by invitation only.
So while it's been one of the fastest, maybe the fastest, growing apps ever, it is nothing compared to what it could have been if it wasn't by invitation only, and if there was an Android application as well.
But if you think about the fact that they just don't have enough GPUs to support it, this will allow them to at least get paid for the effort and then dedicate more GPUs, because now
(25:44):
they're getting paid for it. Or, as Bill Peebles, the head of Sora at OpenAI, said on X, "eventually we will need to bring the free gens down to accommodate growth."
Basically what he's saying is that right now, yes, you can generate 30 videos per day; that will most likely go down, and you will start paying for more and more of what you are generating, which I believe is perfectly fine and fair.
Now, if you remember, a few weeks ago when Sora was released I
(26:05):
had a big discussion about this; I was trying to imagine the future of rights for likeness and IP when it comes to generating these videos.
And the recent announcements this week, both on the music and video generation side, are definitely hinting in the right direction.
So another thing that Peebles, the head of Sora at OpenAI, also said on X is, and I'm quoting,
"we imagine a world where rights holders have the option to
(26:26):
charge extra for cameos of beloved characters and people."
Now, speaking of cameos, there are two very interesting slash disturbing aspects of the new Sora model.
One is that Cameo, the company, is suing OpenAI in California federal court for infringing its brand, using the concept of cameos, right now with deepfakes instead of actual people.
(26:48):
So for those of you who don't know Cameo: the company allows you to request celebrities and big-name people to record short videos for you, and to pay them for that.
This could be birthday wishes, announcements, literally whatever you want.
And now Sora is using the name "cameo," which is obviously a word in English that is not necessarily associated with the company, to do the same thing, just with avatars.
(27:09):
That being said, Cameo, the company, owns a trademark on the concept of a cameo for doing exactly what Sora is now doing, so that is now an open lawsuit.
OpenAI obviously dismissed it and said, "we are reviewing the complaint, but we disagree with these claims and we'll defend our view that no one can claim exclusive ownership over the word cameo."
(27:31):
But one of the problems with the cameos in Sora is that it's an opt-out system, meaning you can create a replica of anything you want, whether it has IP protection or not, and not ask for permission.
And if you are the owner of those rights, you have to go to OpenAI and request to be unplugged from the matrix, and otherwise your
(27:51):
IP or your likeness can be used for anything.
And there's been a big issue this past week with unauthorized deepfakes of Martin Luther King Jr., as an example, without asking his estate for any kind of permission.
I find this very, very problematic.
But this is the reality right now.
And now go fight OpenAI, which has deeper pockets than most in
(28:11):
the world right now.
Definitely deeper than the people who would want to go and sue them. But it also puts a very big target on their back to go and get sued.
So I don't see that stopping anytime soon.
And two last short things about OpenAI.
OpenAI signed an agreement with PayPal to bring PayPal's agentic commerce into the OpenAI ChatGPT universe.
So starting in 2026, PayPal will be integrated into OpenAI's
(28:32):
Agentic Commerce Protocol, also known as ACP, and users will be able to tap into PayPal balances, bank accounts, and credit cards to make purchases on the OpenAI platform.
Now, it will also bring in all the inventory on PayPal's seller network, which has over 10 million sellers in apparel, fashion, beauty, home improvement, electronics, et
(28:53):
cetera.
So all of these will become available overnight in the OpenAI environment, and you'll be able to, A, see them, and B, pay with PayPal for them in a safe way.
Now, this is part of a very big charge by PayPal into the agentic AI universe.
About six months ago, they signed a similar agreement with Perplexity, and they're also working with Google on its agent payments protocol, so they are definitely putting their hands
(29:15):
into all the different cookie jars right now.
But as part of this agreement with OpenAI, in addition to this really interesting agreement, all 24,000 PayPal employees gain access to ChatGPT Enterprise across everything that they're doing, including obviously Codex for coding and everything else that OpenAI can provide to everybody else in the organization.
Overall, huge benefits to both OpenAI and PayPal in this new
(29:36):
partnership.
And I hope that, as a whole, this will be good for us consumers as well, because it will give us more options and the ability to get access to deals and buy stuff online in a secure way while using agents.
And the last piece about OpenAI, which will help us transition to the next topic, is that OpenAI is quietly building an AI capable of generating full music tracks from text and/or audio
(29:58):
prompts.
For those of you who are not aware, there are two leading apps, and a lot of other smaller apps, in that space already: Suno and Udio.
And to put things in perspective on how big this market is, let's talk specifically about Suno for a minute.
Suno just hit $150 million in annual recurring revenue, which is 4x what it was last year, which means that if you reverse engineer the cost of their $10-a-month to $30-a-month
(30:22):
tiers, they have about 5 million paying subscribers, which is very, very significant.
They're also running on pretty hefty margins of over 60%, even when you include the free-tier users in that, and most of their users are obviously in the free tier.
They're currently going after their next funding round, which will probably value them at over $2 billion.
(30:43):
That's 4x their May 2024 valuation of $500 million, from a year and a half ago, with a potential raise of over a hundred million dollars.
Now, those of you who haven't played with Suno or Udio, I highly recommend you do that.
It creates amazing music of any kind, of any genre, in seconds, with the lyrics and the music and everything in between, and
(31:03):
produces it in a way that is really cool and really fun to use and play with.
I do this for every single speaking gig that I do, and those of you who've seen me speak know what I'm talking about.
The latest version, which is version 4.5, allows you to generate four-minute tracks at 32 kilohertz, which is really long and really impressive, across 200 different styles of music.
And in addition, they recently introduced Suno Studio, which is
(31:25):
a DAW-like multi-track editing system, where you can actually work track by track on separate instruments and really build and change your music however you wish, at the instrument level. It is really, really fun, and it allows professional music creators to now use AI to create music, or to change existing music, or to add layers of music to music they've
(31:45):
actually written on their own.
So you can hum or play a guitar tune and build an entire song around it in their environment.
Very, very cool.
And if you are a Pro user, you get full commercial rights for the music that you have generated with it.
So this tells you why OpenAI wants to get into this field.
It is growing crazy fast.
But it's not all unicorns and butterflies.
(32:07):
There are multiple lawsuits right now by Universal, Sony, Warner, et cetera, alleging that these companies are using the labels' IP, music made by humans, to create this AI music.
So both Suno and Udio are being sued.
Udio just signed an agreement, specifically with Universal, as part of that lawsuit, to get out of part of it.
(32:27):
Suno's case is still moving forward with Sony and Warner as well.
So there's a very big mix there.
By the way, the Udio agreement is not positive at all for Udio, and even worse for Udio's users, because the music inside of Udio has been locked, so you can't export it anymore, which, if you have been paying for Udio, really sucks if you haven't downloaded your music yet.
(32:48):
So big pressure from the really big players in order to make this thing stop and go away.
Suno is, at least for now, still fighting, but OpenAI is just around the corner, and it will be a lot harder to fight OpenAI unless they can reach some kind of a precedent before that happens.
Suno is obviously claiming fair use, just like in any other similar case in other areas of AI training.
But now let's connect some of the dots and think about where this
(33:10):
can evolve, or where this can go.
Let's assume for a minute that all these models, so Suno, Udio, OpenAI, whoever else develops these models, get full integration into Spotify, where you can now create playlists on the fly based on your mood.
Spotify has over 600 million active users that can now, overnight, start consuming and generating real-
(33:33):
time AI music to their taste, to fit their exact needs at that specific time.
Will that completely replace the human-generated music industry?
I don't think so.
I think it's going to hurt that industry, but I think it's going to put a premium on human-generated music.
And I think people will still want to go to live concerts and still want to consume songs that they know.
(33:53):
But I'm looking at this from the perspective of somebody who grew up listening to vinyl, and then cassettes, and then CDs, and then DVDs, and then streaming music, and, in between, MP3 players.
So the only thing I know is human-generated music, which I listen to every single day.
I love listening to music.
I play bass, and so I'm very connected to my musical background, with the genres and the creators that I love.
(34:16):
But whether my kids, or definitely their kids, will care whether the music that they love was created by humans or not, I don't know.
And if I had to guess, I would guess no, they wouldn't care.
They would just care about the fact that they're enjoying the music.
That leads to the problem that I mentioned many times before on this podcast, that every person may listen to different music than other people would listen to, which means you may not be
(34:37):
able to talk about or share music with others, because they wouldn't care, because the music that they created, or that was created for them on the fly based on their needs, is going to give them bigger satisfaction and enjoyment than the music that was created specifically for you.
That creates an even bigger divide in the world, which, again, I don't find to be a good thing.
So on one hand, explosion and democratization of music
(34:58):
creation.
Awesome.
On the other hand, where is this leading the entire world of music, between creators and consumers of music?
I don't know.
Let's switch gears and talk about Anthropic for a minute.
Anthropic is finally addressing the biggest issue I had with Claude so far, which is that it did not have memory.
So as of right now, Claude is going to have full memory,
(35:19):
across all its different plans.
If you don't have it yet, it is coming in the very near future; the rollout has already started, as long as you are a paid user.
So what is memory, if you don't know? ChatGPT has been remembering stuff about you and storing it in a memory, enabling it to provide you better contextualized answers to your needs: who you are, what your role is, where you grew up, what industry you're in, what services or products you
(35:39):
provide, et cetera, et cetera.
And this has helped ChatGPT be significantly better than Claude, because it knows all of that information.
Gemini has a similar feature that, I must admit, so far hasn't been as good as ChatGPT's, but Claude is now adding this feature, which will allow it to remember things about you, about your company, about your industry, and so on, and provide you significantly more relevant and more contextualized
(36:00):
responses, which by definition will make Claude a better tool.
As I mentioned, right now my biggest differentiator towards ChatGPT over Claude is memory.
And the fact that Claude is now going to have it might tilt my usage of Claude to be more than my usage of ChatGPT.
I'm paying the full price for both, so it doesn't really matter from that perspective.
But I really like Claude for many of the things that it's doing.
The fact that it did not have memory was a big deal for me,
(36:21):
because I can get a lot more specific responses from ChatGPT, and that benefit that OpenAI had is going away right now.
In addition, they did something very, very cool, which allows you to effortlessly import and export memories from other tools and bring them into Claude.
Meaning, I can now take the memory that I have built over the past X number of months in ChatGPT and bring it into Claude, which is exactly what I'm planning to do as soon as I get
(36:42):
access to this feature.
Now, just like on the other platforms, you have full visibility into these memories.
You can go and look at them, you can delete them, and in Claude you can actually delete them with natural language, which is really cool.
You can say, forget that old job that I had in 2024, and it will forget it, et cetera, and so on.
So you can control it very easily on your own and have full visibility into what it knows and doesn't know about you.
Just like on the other platforms, there's an incognito
(37:04):
mode where you can have a conversation that will not be remembered by the long-term memory of the chat.
And all these companies are working towards the same thing, for which I will now quote Mike Krieger, the Chief Product Officer of Anthropic:
"We're building toward Claude understanding your complete work context and adapting automatically.
Memory starts with project continuity, but it is really about creating a sustained thinking partnership that
(37:26):
evolves over weeks and months."
In the quote he's talking about work, but in the broader picture they're building models that will know you, and because it knows you, it will be able to be your ultimate assistant and support you in everything that you do, from your personal life to work and beyond.
And that is the ultimate goal, right?
Having a companion that can be the most effective possible because it knows literally everything about you.
(37:46):
It sounds a little crazy.
Will I be willing to share everything that I know and everything that I do with an AI model?
I think over time this will become obvious, just like people were thinking that putting credit cards on the internet was insane, and now we all have 20 different credit cards across multiple different websites online.
Speaking of interesting releases this week: MiniMax, which is a Chinese company that is known so far mostly for its video
(38:07):
generation and image generation tools, just released M2, which is currently the most advanced open-source model on the planet.
It is a 200-billion-parameter model, but it only uses 10 billion active parameters for every single question, using a mixture-of-experts kind of approach.
This is a very similar approach to the way DeepSeek and
(38:27):
Moonshot's Kimi do their magic, but with significantly fewer active parameters, which makes it faster and cheaper.
So, to put things in perspective: DeepSeek version 3.2, which is the latest version, is using 37 billion active parameters, and Kimi K2 from Moonshot is using 32 billion active parameters.
But this new model, M2, is using only 10 billion active parameters.
What does that mean?
It means that it can do everything it is doing
(38:48):
significantly faster and significantly cheaper.
Putting things in perspective, the API costs 30 cents per million tokens of input and $1.20 per million tokens of output, which is about 8% of what you would pay for Claude Sonnet, while running twice as fast.
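To make that pricing concrete, here is a small, hypothetical cost calculation in Python. The M2 prices are the ones quoted above; the Claude Sonnet prices used for comparison are the commonly listed $3 per million input tokens and $15 per million output tokens, which is my own assumption rather than something stated in this episode:

    # Hypothetical per-request cost comparison based on per-million-token prices.
    def request_cost(input_tokens, output_tokens, price_in_per_m, price_out_per_m):
        return (input_tokens / 1e6) * price_in_per_m + (output_tokens / 1e6) * price_out_per_m

    # Example request: 50,000 input tokens and 5,000 output tokens.
    m2_cost = request_cost(50_000, 5_000, 0.30, 1.20)       # MiniMax M2 pricing quoted above
    sonnet_cost = request_cost(50_000, 5_000, 3.00, 15.00)  # assumed Claude Sonnet list pricing
    print(f"M2: ${m2_cost:.4f}, Sonnet: ${sonnet_cost:.4f}, ratio: {m2_cost / sonnet_cost:.0%}")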
And it is currently leading across multiple benchmarks over
(39:10):
all the open-source models, and on some of them it is even above Gemini 2.5 Pro and Claude Opus 4.1.
And to make it even more competitive, they have tailored the backend and API to be ready to connect to your IDE or CI/CD environments, which are the development setups that companies use in order to develop their software.
And so: a highly competitive, extremely fast,
(39:31):
open-source, really cheap model coming from China, another one of those.
And it is just showing you how fast this world is evolving, and it is showing you how aggressive the competition is becoming, especially on the development side.
But that's just a hint of what's coming afterwards.
And just to show how powerful these models are, there's a very interesting contest that started in October and is just about
(39:52):
to end, which is a competition to see how good these models can be at trading cryptocurrency.
So six large language models battle in this arena, running unsupervised from October 18th to November 3rd, 2025.
Each got a real $10,000 and identical prompts and data, to compete on how well they can grow, or lose, this
(40:15):
$10,000.
Well, right now, DeepSeek 3.1 has grown the original $10,000 to $22,900, which is a 129% gain, by October 27th, trading across multiple different cryptocurrencies.
Qwen3 Max is second with $19,600, which is roughly a 96% gain
(40:36):
over the original $10,000.
And on the flip side, you have OpenAI's GPT-5, which lost 60% and currently has about $4,000, and Gemini 2.5 Pro, which lost 57%.
In the middle of the pack, you have xAI's Grok, which earned 13%, and Claude Sonnet 4.5, which grew 24%.
This is just season one of this arena, which, as I mentioned, ends
(40:56):
on November 3rd, and then they're going to open season two, and we'll see what happens there.
And I will keep on reporting, because I find this very interesting.
But what does this mean?
What could this mean for the future?
This could lead these labs to actually develop models that know how to trade.
So right now they're using the raw models as is.
These models were not trained to trade effectively online, and you see a big variance between them.
(41:17):
But I'm assuming this by itself will drive OpenAI, Google, and so on to say, ooh, this is interesting, this is a business case that we want to pursue, which they probably will.
They will start developing and training models to be better traders on stock markets, or any other kind of markets, like cryptocurrency in this particular case.
That may even lead to other companies that will integrate several different models into a trading agent that can switch
(41:38):
between the models based on the strengths and weaknesses of those models.
Think about how the development world is working right now, with these IDEs like Cursor and so on switching behind the scenes between Anthropic and OpenAI and other models to optimize for code generation.
The same exact thing can happen with virtual traders.
So if we have these virtual traders, and over time they can
(41:58):
trade better than humans, where does that put the entire current investment and money management industry?
Where does it put the existing platforms?
What does it mean for the actual stock market, if nobody is actually trading based on the success and failure of stocks and/or companies, but purely based on algorithms that are going to
(42:18):
compete with one another?
Does it completely take away the concept of the stock market?
Many, many really big questions that I don't think anybody is answering.
And I know it might sound crazy that that's where my head is going when I see six models battling with $10,000 in their pockets.
But I think this is the trajectory in every aspect of what we do in the world right now.
Now, since I used the coding industry as an example, there were huge announcements:
(42:42):
three big releases this week, head to head.
The first one I'll start with is Microsoft: GitHub has released what they call Mission Control, which is a dashboard that controls all the different agent activities across the different platforms, combining OpenAI, Anthropic, Google, and other tools into one unified, controlled environment that can
(43:02):
run parallel tasks and compare outputs in order to pick the one that is best for your need.
In addition, this tool, obviously coming from Microsoft, comes with enterprise in mind, with a granular governance layer with security policies, access controls, audit logs, and usage metrics, basically everything you need in order to make sure that what you're doing is running the way you want it to run, while
(43:22):
running multiple processes and agents from multiple sources in parallel.
Or as GitHub COO Kyle Daigle said, "with so many different agents, there's so many ways of kicking off these asynchronous tasks, and so our big opportunity is to bring this all together."
This was immediately embraced and supported by both Mike Krieger, the CPO of Anthropic, who said, "with
(43:43):
Agent HQ, Claude can pick up issues, create branches, commit code, and respond to pull requests, working alongside your team like any other collaborator,"
and an OpenAI spokesperson, who said, "we share GitHub's vision of meeting developers where they work, and we are excited to bring Codex to the millions more developers who use GitHub and Visual Studio Code."
(44:03):
The bigger picture here is multi-agent interoperability: basically, the ability to combine multiple agents from multiple vendors in a secure and controlled environment.
Now, if you think that Microsoft were the only ones who made such an announcement this week, I hinted that that's not the case at the beginning of this episode.
So Cursor just launched Cursor 2.0, which launched with what
(44:25):
they call Composer, which is what they're calling a blazing-fast frontier coding model, in a revolutionary multi-agent interface that redefines AI-assisted development.
Now, they're claiming it is four times faster than models of similar intelligence; Composer completes most turns in under 30 seconds.
This is per their team and their benchmarks. Now, it's trained on their entire dataset, which is now, I don't know what
(44:48):
percentage of the programming world, but many, many, many companies switched to using Cursor as their development platform.
And so they had a huge dataset with which to train this model, which is now homegrown versus depending on other people's models.
And it will sound very similar to what GitHub just released:
their new Cursor 2.0 IDE centers around parallel agents, powered by git worktrees or remote machines, running multiple models
(45:11):
to pick the best results.
Sounds familiar, right?
The other cool thing is that it has a built-in code review that allows you to test the code that it is creating and keep on experimenting and fixing bugs until the output is actually correct and working, which accelerates the development process even further.
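To make the "run agents in parallel, then pick the best result" idea concrete, here is a minimal, hypothetical sketch in Python. It is not GitHub's or Cursor's actual API; the run_agent and score_output functions, and the agent names, are placeholders for whatever coding agents and review step you would actually plug in:

    import concurrent.futures

    # Hypothetical stand-in: each "agent" takes a task and returns a candidate patch.
    def run_agent(agent_name: str, task: str) -> str:
        return f"[patch from {agent_name} for: {task}]"  # placeholder for a real agent call

    # Hypothetical stand-in for the review step (tests, linting, an LLM judge, etc.).
    def score_output(candidate: str) -> float:
        return float(len(candidate))

    def best_of_parallel(task: str, agents: list[str]) -> str:
        # Run every agent on the same task in parallel, then keep the highest-scoring result.
        with concurrent.futures.ThreadPoolExecutor() as pool:
            candidates = list(pool.map(lambda a: run_agent(a, task), agents))
        return max(candidates, key=score_output)

    print(best_of_parallel("fix the failing login test", ["codex", "claude", "gemini"]))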
And if that wasn't enough, you get another choice, because Cognition Labs just dropped SWE-1.5, which is a big jump from
(45:34):
their previous model.
First of all, it is significantly faster: it can generate 950 tokens per second, which is blazing fast.
To put things in perspective, it is six times faster than Haiku 4.5 and 13 times faster than Sonnet 4.5 when it comes to generating code, and it was tested on actual real-world tasks versus just benchmarks, which makes it significantly
(45:56):
more meaningful as a metric.
They're doing it in collaboration with Cerebras, which is a company that builds inference accelerator chips.
And Cerebras' post on X about this said, "developers shouldn't have to choose between an AI that thinks fast and one that thinks well.
Yet this has been a seemingly inescapable trade-off in AI coding so far."
Well, that is what they're trying to resolve.
(46:18):
So, bigger picture on the coding world: where is it going, and where does that hint the rest of the world is going?
The coding universe has been leading the charge on where AI can go, mostly because it's a much more structured universe than the open universe we really live in.
But it is definitely showing us where AI can and will go for
(46:38):
the broader world that we actually know.
So where is it right now, and where do I think everything will go, across every industry as well as our personal lives?
There's going to be a set of planning agents that are going to get the goals from humans.
Underneath them, there are going to be orchestration agents that will manage all the execution agents that will actually do the work.
There's going to be complete interoperability across all the
(46:59):
different agents from all the different companies, in these unified environments that can pick the right agents, with the right tools, for the right process, compare the results, and get the best output.
They will verify the results, they will fix the results, and they will do all of that with transparency and control and accountability, for us to be able to see. However, who will check the output?
We will never be able to have enough humans on the planet to
(47:22):
check all the outputs that these models will create, because they're going to create it at a speed and at a volume that we cannot even comprehend.
So humans will not be able to do it.
So checking the models, the entire concept of transparency and control, will probably be handed off to other agents,
which means we actually have no control over what they're actually going to develop.
Now, while that may sound crazy to you, I have very little
(47:45):
doubt that this is where we're going, because it is inevitable.
If you can plan and book your next trip in five seconds, knowing that it's going to pick the right flights, the right hotels, the right car, the right process, the right tours, the right tickets, all of that in seconds,
instead of investing three days in doing this, you will most likely do that.
The same thing with buying your house, the same thing with
(48:05):
buying a car, the same thing with investing your 401k, the same thing with developing the next project at work.
Sounds crazy?
Maybe. Do I think this is where we're going?
Yes.
Do I think it's coming fast to some industries, such as coding and writing code?
From what we're seeing, this is exactly where it's going right now. And so what will humans do in the process?
I think it is very unclear at this point.
(48:27):
Will it evolve to a situation where we'll find other things to do, and more meaningful things to do, and more satisfying things to do?
I sure hope so. But what we're seeing right now in the coding world will eventually roll out to everything else; it will just take a little longer because of the level of complexity, and we have very clear signs that this is where it is going.
(48:48):
Now, speaking of tools that do the work of humans and make things that used to take professionals hours happen in minutes: Google just introduced Pomelli.
Pomelli is an AI-powered tool that creates branded ad assets from visuals and videos and text, and everything that comes with creating ads, completely integrated with the Google Ads platform.
(49:08):
So the idea behind it is to enable small and medium businesses, who do not have the resources and/or the time to hire out or develop complete ad campaigns themselves, to have complete ad campaigns across the entire Google universe, a hundred percent generated and optimized by AI.
What you are putting in is your brand guidelines, the product or service details, and your
(49:29):
campaign goals, and the system will auto-generate all the creatives, including images, headlines, short videos, text, and A/B testing suggestions, and will deploy them automatically across the Google Ads universe.
People who have used this in the beta testing phase reported three times faster ad launches than before, and a 25% uplift in click-through rates compared to human-generated campaigns.
(49:51):
Now, this is based on Google's internal data, and this has not been fully tested in the open world yet, but what I can assure you is that once it's up and running, and it is now up and running, it will learn way faster than humans can learn, it will generate ads that convert significantly better than humans', and it will be able to iterate very, very quickly on the initial results each ad sees, which means
(50:12):
very quickly it will perform much better than it is performing right now.
It will outperform any human marketer out there.
What does that mean for large agencies?
In the beginning, probably nothing, because large agencies mostly support large organizations.
What does it mean for really small agencies, which mostly support small businesses?
I think they have a very, very serious risk to their business model, and they will have to find other
(50:33):
ways to enhance what they're offering right now.
Otherwise, they will go out of business.
It is currently live in the US, and it will go live in the rest of the world in Q1 of 2026.
Speaking of Google, Google Earthjust released a whole new suite
of AI tools on top of GoogleEarth, which can fuse and make
sense in data from multiplesources such as weather,
(50:54):
population, satellite data, et cetera, which is extremely powerful across multiple areas of research.
The idea is being able to spot and connect the dots between patterns across different types of data sources in ways that were previously on the verge of impossible. Real-world pilots that have already happened show real, significant results: the World Health Organization predicting cholera outbreaks in Congo,
(51:17):
Planet mapping deforestation and checking its impacts on climate and things like that, Airbus being able to flag vegetation near power lines, and Bellwether being able to speed up hurricane claims for insurers.
Overall, a great play by Google.
It is definitely something that is aligned with DeepMind's approach to AI in support of humanity, and I'm very excited to see these kinds of announcements.
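Now, to give you a feel for what this kind of multi-source data fusion looks like in practice, here is a minimal sketch using the public Google Earth Engine Python API. To be clear, this is the long-standing Earth Engine platform, not the new Earth AI capabilities themselves, and the dataset IDs and the rainfall threshold are my own assumptions, worth double-checking against the Earth Engine catalog. It simply overlays a month of rainfall with population density to estimate how many people live in the areas that got heavy rain, which is the same spirit as the cholera and insurance pilots.

# Illustrative sketch only: this uses the public Earth Engine Python API,
# not the new Google Earth AI features described above. Dataset IDs are
# quoted from memory and should be checked against the Earth Engine catalog.
import ee

ee.Initialize()  # assumes you have already authenticated with `earthengine authenticate`

region = ee.Geometry.Rectangle([28.7, -4.5, 29.5, -3.8])  # arbitrary area near Lake Tanganyika

# Total rainfall over one month from the CHIRPS daily precipitation dataset.
rain = (ee.ImageCollection("UCSB-CHG/CHIRPS/DAILY")
        .filterDate("2024-03-01", "2024-04-01")
        .select("precipitation")
        .sum())

# Population density from WorldPop (roughly 100 m resolution).
pop = (ee.ImageCollection("WorldPop/GP/100m/pop")
       .filterDate("2020-01-01", "2021-01-01")
       .mosaic())

# Fuse the two layers: keep only populated pixels that saw heavy rain.
heavy_rain = rain.gt(200)                 # > 200 mm in the month, arbitrary threshold
exposed_pop = pop.updateMask(heavy_rain)  # population living in heavy-rain pixels

total_exposed = exposed_pop.reduceRegion(
    reducer=ee.Reducer.sum(), geometry=region, scale=100, maxPixels=1e9
)
print(total_exposed.getInfo())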
And now to the next head-to-head battle that happened this week.
(51:38):
Well, Adobe just made a huge announcement with releases of new models.
They just announced Firefly 5, which has many new capabilities that will be extremely useful for anybody in the Adobe environment.
First of all, it can render images at up to four megapixels, which is four times their previous model.
It has full support for layers in prompting, so you can prompt different layers, and it can actually treat parts of a flat image as layers just by
(51:59):
editing.
So you can refer to objects in a flat image just by speaking about them, as if each one was a layer, and you can manipulate, resize, and move items in the image just by referring to them in the text prompt.
This is very similar to what you can do in Nano Banana, just with more granular control, because it is Adobe.
They're also allowing you to train your own personalized models just by dragging and dropping your sketches and
(52:21):
illustrations and photos to train a Firefly model that you can then reuse to generate similar kinds of outputs.
I think this will be extremely powerful for creators.
They've also done a complete overhaul of Firefly on the web, and they have added many new capabilities to their video tool, which is now in private beta, including layers and timeline-based editing, all driven by text.
(52:41):
They also integrated ElevenLabs as a voice engine to generate speech inside the models, so you can now have videos with full speech, and you get access to models from OpenAI, Google, Runway, Topaz, Flux, and many others inside the Adobe Suite.
But in the same week, Canva's Affinity dropped a new announcement, which may dramatically change the entire
(53:02):
creative industry.
So a little bit of background.
Affinity is a company that was bought by Canva back in 2024, but they kept it completely independent and they
gave them a lot of money in order to develop what they're developing, which has now become the new Affinity app.
So the new Affinity app does two things that were unheard of in the industry before.
First of all, in a single tool, you can switch between vector,
(53:22):
pixel, and layout modes.
So between vector, pixel, and layout, which are three different tools in the Adobe Suite, there is now a single seamless transition inside the Affinity app.
But in addition, it is free, as in 100% free forever.
No subscription, no catches.
All you need is your Canva account license, meaning the free Canva
(53:46):
account, and you can use the Affinity app to do more or less everything you can do in the Adobe Suite, in a single unified tool and workflow, with GPU acceleration for fast edits even on thousands of layers, per their claims.
In addition, it comes with an ultra-customizable user interface, with which every user can create their own perfect studio.
Again, very different from the one-size-fits-all approach of the Adobe
(54:09):
universe, or as Cameron Adams, the Canva co-founder and chief product officer, said: "We're really viewing the entire design ecosystem as one big entity. It's really about your entire team working together."
Now, the whole AI aspect of this is actually optional, but you can generate things with AI using, behind the scenes, Leonardo, which is another company they acquired back in 2024.
So what does that mean for the design universe?
(54:32):
It means that the heat is on for all the different participants.
The two biggest ones are obviously Canva and Adobe, with Adobe pushing down toward "you don't have to be a professional designer to use our tools," and Canva now pushing up into the professional market with the Affinity tool.
Where will that end?
I'm not sure, but I will say something that I have said several times before.
I think both of these tools are at very serious risk
(54:54):
once Google, OpenAI, and others start adding layers and higher resolution to their platforms, which is not rocket science. I mean adding the ability to work in layers and in vectors, separate from the way they work right now, and the ability to scale, resize, change, and manipulate those layers through either a user interface or prompts,
(55:15):
whether voice or text prompts, is something OpenAI and Google can add to their platforms in probably a week.
Upscaling already exists through third parties, but they can probably build their own upscalers, and then many users will not even go to Canva and/or Adobe.
They will just stay within the tools that they're going to use day to day, to do everything that they're doing.
And especially if you're thinking about the Google
(55:35):
environment, with its entire suite of office tools where you can generate things inside the Google universe, whether in Docs or Slides or any other platform, you will understand how big the risk is for the existing incumbents that have been ruling the design space for a while.
And now, some really fast, rapid-fire valuations and new raises from this past week.
(55:55):
Synthesia, which is a company that allows you to create video avatars, just raised $200 million, getting its valuation to $4 billion and nearly doubling its $2.1 billion valuation from January of this year.
They also hit a milestone of a hundred million dollars in ARR in April of 2025, which is now already 150 million as of October, so 50% growth in just six months.
(56:18):
There was an interesting development in between, where in October 2025, Adobe put in a $3 billion bid for Synthesia, which turned it down, saying it was not enough money, and then proved that they're now worth $4 billion in this latest round.
By the way, HeyGen, their biggest competitor, has now announced that they have just hit $100 million in ARR in October, just 29 months after the first million that they made.
(56:41):
This is un-freaking-believable.
Another company that did a big raise and saw a big increase in their revenue and their valuation this week is San Francisco-based Harvey, which is an AI tool for the legal industry. Their valuation just jumped to $8 billion, closing a $150 million Series F led by a16z, which is more than doubling the $3.7
(57:02):
billion valuation from June of this year.
Genspark, which is a multi-agent platform from Silicon Valley that was founded and is led by two people who were executives at Baidu, has just raised a $200 million round, which values them at over $1 billion, more than doubling their valuation of $530 million from February of this year.
(57:23):
So whether this is a bubble or not, I'm not a hundred percent sure, but it is very, very obvious that the companies who are actually moving forward and generating real revenue and real growth from real users are demanding crazy valuations and getting those crazy valuations, which doesn't seem like a bubble.
Because if you can grow from a hundred million in ARR to 150 million in ARR in six months, that shows that there's actual demand
(57:43):
for your services, which means that it makes sense to invest more money in you.
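Just to put that growth rate in perspective, here is a quick back-of-the-envelope calculation. The 50% figure comes from the example I just gave, and the annualized number is my own extrapolation, not a reported metric.

# Back-of-the-envelope illustration (the ARR numbers follow the example above;
# the annualized figure is my own extrapolation, not a reported metric).
start_arr = 100_000_000   # ARR in April ($100M)
end_arr = 150_000_000     # ARR six months later ($150M)

six_month_growth = end_arr / start_arr - 1           # 0.50 -> 50% in six months
annualized_growth = (1 + six_month_growth) ** 2 - 1  # compounding two half-years

print(f"Six-month growth: {six_month_growth:.0%}")                  # 50%
print(f"Annualized (if the pace holds): {annualized_growth:.0%}")   # 125%

In other words, if that pace holds, it compounds to roughly 125% growth over a full year, which is the kind of trajectory that keeps attracting investment.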
And then the last topic that I want to touch on today has to do with the Q3 results shared by the leading tech companies in the world, all happening in this past week.
So in the earnings calls, we learned several different things, and I'm not going to dive into all the details, but I do want to dive into one interesting detail.
And that is the difference between Microsoft, Google, and Amazon on one side and Meta on the other side.
(58:06):
All of these companies have increased their CapEx investments in building new data centers by a very big margin, to crazy numbers.
However, Microsoft, Amazon, and Google have seen growth in their stock because of that, because they are selling these services to other parties.
So every bit of growth in their data centers actually leads to significantly more revenue.
So yes, they grew their CapEx by a lot, but they also grew their
(58:28):
revenue by a lot as a result of that.
And on the Meta side, they're just investing their own cash, rather than their clients' cash, in building this out, which has sent their stock down.
Now, Mark Zuckerberg still thinks that this is the right direction, obviously, and I'm now quoting: "I think that it is the right strategy to aggressively front-load building capacity so that we are prepared for the most
(58:50):
optimistic cases."
So where does this entire episode leave us?
It leaves us with crazy growth moving forward, increased competition, and increased capacity driven by huge investments in both the AI technology itself and in infrastructure, whether it is data centers or electricity.
And if one thing is obvious, it is that this is not slowing down and it is going to impact literally everything we know.
(59:12):
We will be back on Tuesday with a fascinating episode in which I'm going to show you how AI can write all of your proposals in a way that produces better proposals than the ones you're writing right now, which means you'll be able to win more business with significantly less effort.
I've been doing this for the past year plus, I'm seeing amazing results, and you'll be able to do it as well after you listen to the episode on Tuesday.
(59:33):
If you are enjoying this podcast, please like the podcast and write a review on either Apple Podcasts or Spotify, wherever you're listening, and share it with other people who can benefit from it.
I am certain you know a few people who can benefit from listening to this podcast, and all the effort it requires from you is pulling up your phone right now, clicking on the share button, and sending it to a few people.
I will really appreciate it, and they will too.
(59:54):
Thank you so much if you do this, and have an awesome rest of your weekend.