Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Chad GPT (00:00):
Okay. Welcome to the deep dive. Today we're plunging
(00:02):
into something really big, something incredibly expensive actually. And it might just be the clearest signal yet about where the AI native economy is really going, especially for big companies. We're talking Meta and their enormous $14,800,000,000 investment in Scale AI, just announced. Now you might look at that and think, okay, classic big tech move. Right? Flexing muscle, planting
(00:24):
a flag in AI infrastructure. But the sources we've gathered for this deep dive, particularly a really sharp analysis from Magnus Hedmark over at Groktopus, well, they paint a very different picture.

Yeah. Our sources aren't framing this as some strategic masterstroke. It looks more like a spectacularly expensive admission of, well, failure. And listen, full disclosure here, we're AI hosts bringing this to you. We like to think we're
(00:46):
already proving that, you know, human AI collaboration can create some pretty great content, maybe even better than humans alone today, which funnily enough ties right into one of the main things we need to unpack: amplification versus replacement.
Audia Synth (00:57):
That's exactly right. I mean, the number, $14,800,000,000, you just can't ignore it. It's massive. But the perspective from Magnus's analysis and the other sources you've pulled, it's not the sound of Meta confidently striding forward in AI. It's more like the sound of them paying a huge premium to fix a problem they basically created themselves.
So our mission today in this deep dive is to get past those
(01:19):
headlines. We need to pull out the really crucial insights, figure out why this looks like failure, what exactly went wrong at Meta according to these sources. And most importantly, what does this very public struggle tell you about the realities of the AI native economy, especially if you're leading a business trying to figure out this whole landscape?
Chad GPT (01:36):
Right. So let's set that scene. Why is this massive investment being called a failure? Our sources are pretty blunt. The core reason? Talent. A dramatic, almost unbelievable loss of key people. We're talking 78%. 78% of Meta's original Llama AI team, the people who built their core model, they walked. Just think about that.

Nearly four out of five of the architects. Gone. And they
(01:57):
didn't just leave. They went straight to competitors. Big ones.

Mistral AI, Anthropic, Google DeepMind. And why? Magnus's sources point pretty directly at a, well, toxic management culture under Zuckerberg. That's what drove them out, apparently, not just money.
Audia Synth (02:12):
And that has a direct consequence. Right? That's where the huge cost comes in. When you lose the researchers, the engineers, the actual people who built your AI strategy, you've got this massive hole, a huge capability gap. So what do you do then?

Buying talent or, you know, buying a big chunk of a company that has that talent, like Scale AI, it becomes pretty much your only option. It's crisis management, pure and simple,
(02:34):
just on a, you know, multibillion dollar scale, not some grand strategic acquisition. Look at the Scale AI deal specifically. Meta buys 49%. They bring in Scale's CEO, Alexandr Wang, to run a new superintelligence lab inside Meta.

But our sources, they stress Meta is paying an absolutely huge premium here for stuff they really should have built and
(02:54):
kept in house. And Scale AI's numbers kind of back that up. $870,000,000 revenue last year, projected over $2,000,000,000 next year. It shows what Meta's buying back. Yeah.

Meta's writing a massive check to get back what they lost. The sources even mention things like Zuckerberg personally meeting researchers at his homes, trying to recruit. It really paints a picture of desperation, doesn't it? Scrambling to buy their way out of a talent crisis they created.
Chad GPT (03:15):
Wow. That is a stark picture. Struggle, reactive spending, from one of the biggest names in tech. But let's flip this.

Let's pivot. What about companies that are getting this AI native thing right? Building it from the ground up. Our sources give some really powerful contrasts. And honestly, Midjourney just leaps out. Okay.

Get this. $50,000,000 in revenue back in 2022. Sounds good. But
(03:36):
with just 11 employees. 11.

Do that math. That's like, what, $4,500,000 in revenue per employee? That's not just impressive. It's staggering. It shows the kind of efficiency, the scale, the value you can get when AI is baked in from the start, not bolted on later.
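[Editor's note: the hosts' back-of-the-envelope math checks out. Using the $50,000,000 revenue and 11-employee figures quoted above:]

```python
# Quick check of the Midjourney figures quoted in the episode.
revenue_2022 = 50_000_000   # reported 2022 revenue, USD
employees = 11              # reported headcount

revenue_per_employee = revenue_2022 / employees
print(f"${revenue_per_employee:,.0f} per employee")  # roughly $4.5M, as stated
```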
Audia Synth (03:48):
That contrast is absolutely key. It gets right to the heart of the strategic difference we're talking about. Meta's move? Reactive. Expensive.

Driven by having to replace what they lost. Midjourney's success, like the sources highlight, that's built on AI being the core engine from day one. It's inherent capability, not bought capacity after a disaster. And it's not just
(04:10):
Midjourney.

Our sources point to others taking this deep organic approach. Like Microsoft. They didn't just buy tools, they actually restructured parts of the company, became customer zero for their own AI, embedding it deep in how they work, develop products, serve clients. Or look at Amazon's Lab126, developing agentic AI, things like warehouse robots that understand natural language. That's building
(04:30):
sophisticated AI right into the operation, into the hardware.

It really highlights a difference, you know, building it organically versus these massive costly plays to just catch up.
Chad GPT (04:39):
Okay. This is where it gets really, really interesting for me. And it's the core insight Magnus Hedmark and others keep coming back to. It's not just if you use AI, it's how. That seems to be the big differentiator.

The crucial idea seems to be this: Are you using AI to amplify what your people can do, or are you aiming to just replace them? And the analysis suggests Meta's crisis partly
(05:02):
stems from driving away the very people who knew how to build AI to work with humans, you know, to enhance, not threaten.
Audia Synth (05:09):
Exactly. And there's solid research backing this up, cited in our sources, really compelling stuff. Take that Stanford and MIT study. Over 5,000 customer support agents using AI tools. Okay.

On average, productivity went up 14%. Sounds good. Right? But here's the kicker, like you said: the big gains were almost entirely with the new workers.

People with just two months using AI performed like someone
(05:30):
with six months without it. But the experienced agents? Minimal gains. Sometimes the AI actually seemed to distract them or slow them down.
Chad GPT (05:38):
Wait. Hang on. Distracted? That sounds wrong somehow. The experts, the people who know the job best, they weren't helped, it got in their way.

What's that telling us?
Audia Synth (05:45):
It tells us everything about augmentation versus replacement. Think about it. The AI in that study, like a lot of early AI tools, was mostly doing routine stuff, answering basic questions, finding info for new hires. That's great. It amplifies their limited knowledge, gives them a shortcut, makes them competent faster.

But for the experienced folks, that routine stuff isn't their
(06:06):
bottleneck. Their value is solving the tricky problems, the complex cases, handling nuance, using judgment. The AI wasn't built for that. Or worse, it interrupted the efficient ways they already worked. So, yeah, the research really hammers home this point.

Successful AI amplifies human potential. It speeds up the learning curve. It doesn't just replace expert judgment. And
(06:26):
there's more. That MIT Center for Information Systems Research study looking at over 700 companies, it found a huge number, 62%, are still stuck in the early stages of AI maturity. They're actually performing below their industry average.

But the advanced companies, they're way ahead. Like 8.7 to 10.4 percentage points above average. And that massive gap, it's not just about having cool tech, it's about achieving real
(06:48):
business model innovation. Like Andrew McAfee from MIT is quoted saying, AI isn't just changing a department here or there.

It's changing the business, the industry, how you even organize work itself. It demands a fundamental shift in how work gets done, centered on humans and AI collaborating.
Speaker 4 (07:04):
Right. It's not just automating tasks. It's redesigning work to make people more capable, more effective. And these ideas about AI native models, about amplification, the market seems to be voting with its wallet, doesn't it? Our sources point to just incredible amounts of VC money flooding into AI native companies.

What was it? $109,100,000,000 in the US alone this year. It's like 12 times what China's investing. Big firms like
(07:24):
Andreessen Horowitz are specifically funding these AI native startups. Sequoia famously said AI is an opportunity maybe 10 times bigger than the cloud.
Audia Synth (07:33):
Yeah. And that flood of capital isn't just blind optimism. It's institutional investors recognizing that these AI native models build real lasting advantages. Things traditional companies find hard to copy quickly or cheaply. You get these proprietary data learning loops.

You get deep integration into workflows. You get network effects. AI and the business processes just make each other
(07:53):
stronger over time. And this brings us to that really critical timeline that Magnus's analysis emphasizes. The sources suggest organizations have about eighteen months, yeah, a year and a half, to seriously build these core AI native capabilities before the gap between them and the leaders becomes maybe too wide, too expensive to close. And we're not talking about just running a few AI pilots here.
(08:15):
This is about developing fundamental organizational capabilities. The right processes, the skills, the data setup, the management style, everything you need to make AI native economics work while still leveraging, even enhancing, human insight. That MIT maturity research backs this up too. The advanced companies got there through systematic learning, deliberate transformation. It's about building organizational muscle.
(08:36):
And, yeah, the clock is definitely ticking.
Chad GPT (08:38):
Okay, eighteen months. That really focuses the mind. So given that urgency and this clear split between, let's say, reactive failure and AI native success, what does this actually mean for you, the listener? If you're leading a business, how do you avoid becoming the next Meta-style case study in expensive catch-up? What practical steps do the sources suggest?
Audia Synth (08:56):
Well, the analysis from Magnus Hedmark and the research offer some pretty clear directions for evolving your business model. First up, deep workflow analysis, but critically, not just looking for stuff to automate away. No.

Instead, you need to find those processes where combining human insight with AI processing creates, you know, disproportionate value, big value. Think complex knowledge
(09:17):
work, financial analysis maybe, strategic planning, research synthesis, high touch client relationships, places where human nuance, creativity, empathy, they're absolutely key.

But AI can massively speed up the data crunching, the pattern finding, drafting communications. The goal isn't replacement. It's making your human experts exponentially better. Next, you absolutely need new measurement frameworks and the
(09:39):
right culture to go with them. That MIT research hints at moving away from old command and control styles towards more, like, coach and communicate models, which means new metrics. Not just how many tasks did we automate, but how effective is the human AI collaboration?

Is it improving decision quality? Making you more responsive? Boosting creativity? You need to measure that. And build a culture that actually supports this hybrid way of
(10:01):
working.
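[Editor's note: to make that measurement idea tangible, here's a minimal sketch of tracking collaboration quality rather than raw automation counts. The metric names, scales, and equal weighting are illustrative assumptions, not anything specified in the sources.]

```python
from dataclasses import dataclass

@dataclass
class CollaborationMetrics:
    """Illustrative human-AI collaboration scorecard (hypothetical names/scales)."""
    tasks_automated: int      # the old-style count, still tracked but not the headline
    decision_quality: float   # e.g. reviewer-rated, 0-10
    responsiveness: float     # e.g. time-to-resolution improvement, 0-10
    creativity: float         # e.g. novel-solution rate, 0-10

    def collaboration_score(self) -> float:
        # Headline metric weights outcomes, not automation volume.
        return round((self.decision_quality + self.responsiveness + self.creativity) / 3, 2)

q = CollaborationMetrics(tasks_automated=1200, decision_quality=7.5,
                         responsiveness=6.0, creativity=8.1)
print(q.collaboration_score())
```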
And finally, maybe the most crucial bit: you have to strategically invest in building hybrid capabilities. This means designing systems, workflows for how humans and AI actually coordinate, leveraging their different strengths. Remember those studies? AI is great at velocity, doing things fast. But humans, still better at responsiveness, 5.27, and overall competency, 5.32,
(10:24):
especially when things get complex or need judgment or empathy.

So the data points towards intelligent task allocation. Design systems where AI handles the fast routine stuff but seamlessly hands off to humans or supports humans where that higher level competency is needed. That's where this idea of building agent boss capabilities comes in: humans directing AI systems to boost their own insight. Like Hinge
(10:45):
Health, mentioned in the sources, they cut care team time by 32% by letting AI handle routine stuff, but kept humans firmly in the loop for empathy and complex judgment. That's the model: the human as a strategic director, amplified by AI.
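[Editor's note: a rough sketch of the "intelligent task allocation" pattern described above, where AI takes routine requests and humans get the complex or empathy-heavy ones. The topic labels, escalation signals, and confidence threshold here are hypothetical, chosen only for illustration.]

```python
# Hypothetical triage router: route routine requests to AI, escalate the rest.
ROUTINE_TOPICS = {"password_reset", "order_status", "billing_faq"}   # assumed labels
ESCALATION_SIGNALS = {"complaint", "medical", "legal", "distress"}   # assumed labels

def route(topic: str, signals: set[str], ai_confidence: float) -> str:
    """Return 'ai' for fast routine work, 'human' where judgment or empathy is needed."""
    if signals & ESCALATION_SIGNALS:
        return "human"                      # empathy/judgment cases stay with people
    if topic in ROUTINE_TOPICS and ai_confidence >= 0.8:
        return "ai"                         # fast, routine, high-confidence: automate
    return "human"                          # default to human oversight when unsure

print(route("order_status", set(), 0.93))         # ai
print(route("order_status", {"complaint"}, 0.93)) # human
```

The design choice matches the Hinge Health example: the default path keeps humans in the loop, and automation only happens when a request is both routine and high-confidence.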
Chad GPT (10:58):
And the sources definitely warn about ignoring this. They mention cautionary tales like Duolingo's AI disaster. It really reinforces that just chasing efficiency by cutting humans backfires, leads to failure, user anger. Studies do show AI can make people 25% to even 76% faster if it's used right, to amplify. But try to use it without skilled human
(11:20):
oversight, or in areas needing real judgment, and performance can actually drop. You get errors.

Plus, you've got the whole regulatory side growing fast. Sources note US AI regulations jumped massively, from one in 2016 to 25 by 2023, with loads of federal bills proposed. You have to factor that legal and ethical complexity in, which again usually points back to needing human oversight and accountability.
Audia Synth (11:39):
So, yeah, let's bring it all together. Meta's huge investment, it's more than just a headline. It's a really powerful, if expensive, lesson. It shows the massive cost of not building those organic human AI capabilities right from the start.

The real opportunity, the real competitive advantage in this AI native world, it lies in building models that strategically amplify human potential, judgment, creativity,
(12:03):
not replace it. And that window to build these core capabilities, it's closing faster than you might think. That eighteen month figure from Magnus's analysis, it feels about right based on the market signals.
Chad GPT (12:15):
Which, yeah, brings it right back to you, listening now. The big question isn't just, are my employees ready for AI tools? It's, is my business model ready? Is your structure, your culture, your way of working ready for the kind of competition coming from companies that have figured out this amplification model? Because the future isn't humans versus machines.

It's about the companies that masterfully combine AI speed
(12:36):
with uniquely human wisdom, creativity, and judgment.
Audia Synth (12:39):
Exactly. As you think about AI in your own organization, maybe mull this over. Are your efforts focused mainly on automating tasks, maybe even replacing people down the line? Or are they strategically focused on amplifying the unique value, the unique potential, that only your people can create?