
November 12, 2025 36 mins

This week, Oz sits down with Stephen Witt, a frequent contributor to The New Yorker and author of The Thinking Machine: Jensen Huang, NVIDIA, and the World’s Most Coveted Microchip. They’ll discuss what's made NVIDIA the most valuable chip company in the world — and the most valuable publicly traded company, period. And how a single piece of hardware changed the world forever, and its journey to existence — from a sketch on a Denny’s napkin to powering data centers the size of Central Park. Then, Stephen demystifies why data centers are shrouded in so much secrecy and what lies ahead in our AI future.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:13):
Welcome to Tech Stuff. I'm Oz Woloshyn, here with Kara Price.
Hey Kara. Hi, Oz. So years ago, around the time
that we were reporting our first podcast together on the
forthcoming AI

Speaker 2 (00:26):
Revolution. Now no longer forthcoming.

Speaker 1 (00:30):
You invested in Nvidia, which is up over one hundred
x since then. Congratulations.

Speaker 3 (00:37):
You know, I just felt when we reported on it
that it was going to be the future, and so
I did invest in it, and you know, very happily.

Speaker 2 (00:45):
So now, what was it?

Speaker 1 (00:47):
Was there something that tipped you over the edge back
then to do it?

Speaker 3 (00:50):
Well, I was just sort of thinking to myself, this
is the thing that's going to power everything. So obviously
it's something that people are going to be paying attention
to and investors are going to be paying attention to,
and it just made a lot of sense to me
at the time.

Speaker 1 (01:04):
We know Warren Buffett is retiring this year, so there's a
slot open. Nvidia is, of course, the most valuable
company in the world today, recently topping five trillion dollars
in value, and of course a lot of people are
crying bubble, not just for Nvidia but for the AI
industry as a whole. Kara, I'm curious, did you hold your stock?

Speaker 3 (01:25):
I held. Look, I needed some things, so I
sold some, but I still have a lot.

Speaker 1 (01:31):
Fair enough. And you're going to hold on? You're
not worried about the bubble?

Speaker 2 (01:34):
I think I am going to hold on.

Speaker 1 (01:36):
But the questions people have are: what if AI doesn't
get better infinitely as it scales? What if people invent
new chips that are far more efficient than the
Nvidia chips? And what if the adoption of AI by
other companies doesn't give them the results they hope
for financially? And so I talked about all of that

(01:56):
with somebody who knows Nvidia better than anyone else. In fact,
he literally wrote the book on it: The Thinking Machine:
Jensen Huang, Nvidia, and the World's Most Coveted Microchip.
He actually interviewed Huang six times.

Speaker 4 (02:10):
He's moody, and you know, he has what I would
describe as somewhat self indulgent performances of anger from time
to time.

Speaker 1 (02:17):
That's Stephen Witt, and he actually got interested in Nvidia
shortly after ChatGPT took the world by storm.

Speaker 4 (02:24):
I had been using ChatGPT and I was like, wow,
this thing is amazing. This is like twenty twenty two,
and I am cooked. Like, there's not going to be
room for me as a writer. This thing can already
write almost as well as I can, and actually writes
better than I did when I was young.

Speaker 1 (02:39):
As Stephen dug around to understand what was powering this technology,
he got more and more interested in the company building
its physical infrastructure.

Speaker 4 (02:47):
What brought me to Nvidia was I was trying
to write about OpenAI and them, and it was
just a crowd of a million journalists swarming around. I
was like, there's gotta be some other story here. And
what I've done as a journalist is look for big
movements of money that aren't being covered. And I looked
at Nvidia's stock price, and in my mind they
were still the gaming company.

Speaker 2 (03:07):
I was like, what the hell is going on here?
The company's worth a trillion dollars.

Speaker 4 (03:10):
And then as I started to investigate it, I was like, Oh, wow,
they build all the hardware. They built all the hardware
that makes this stuff go. That's fascinating. And then I
kind of learned about Jensen and I was like, wait,
this company has had the same CEO through both the
gaming and the AI days and not only that this
is the same This is the founder, he's been the
CEO for thirty years.

Speaker 2 (03:29):
It's the same guy all along.

Speaker 1 (03:31):
So Stephen wrote the book on Nvidia, but he also
wrote a great New Yorker piece recently about data centers,
and an essay for The New York Times about, quote,
the AI prompt that could end the world. So he's
really a farm-to-table thinker, from chips to data
centers to AI to the apocalypse. It was a fun conversation.

Speaker 3 (03:49):
Yeah, as a writer, he seems to sort of be
at the center of everything in a way that I
find very compelling, And I actually want to know more
about the AI prompt that could end the world.

Speaker 1 (04:00):
In that case, you have to listen to the whole
interview because we talk about it right at the end. Okay,
it's the conversation with Stephen Witt for then Layman, what
is in Vidia and how did it become the most
valuable company in the world.

Speaker 4 (04:12):
Nvidia is basically a hardware designer. They make a
special kind of microchip called a graphics processing unit, and
the initial purpose of this thing was just to render
graphics in video games. So if you were a video gamer,
you knew who this company was, because you would actually
build your whole PC just around this Nvidia card,

(04:32):
so this was the engine that rendered the graphics on
your screen. Sometime around two thousand four, two thousand
five, scientists began to notice how powerful these cards were,
and they started hacking into the cards, like hacking into
the circuitry to get to those powerful mathematical functions inside
the microchip. And Jensen Huang saw this and he said, wait,

(04:53):
this is a whole new market that I can pursue.

Speaker 1 (04:56):
So he built the.

Speaker 4 (04:57):
software platform that turns the graphics card into basically a
low-budget supercomputer. Now you may ask, who is this for? Well,
it's not really for established research scientists because they can
usually afford time on a conventional supercomputer. It's for scientists
who are sort of marginalized, who can't afford time on

(05:19):
a supercomputer and whose research is out of favor, So
it's for mad scientists. It's for scientists who are pursuing
unpopular or weird or kind of offbeat scientific projects. But ultimately,
the key use case turned out to be AI, and
specifically a branch of AI that most AI researchers thought

(05:43):
was crazy, called neural network technology. And what you're doing
here is you're building software that kind of resembles the
connections in the human brain. It's inspired by the biological brain. Actually,
you build a bunch of synthetic neurons in a little file,
and then you train them by repeatedly exposing them to
training data. So what this could mean, for example, is if

(06:03):
we're trying to build a neural network to recognize objects
to do computer vision, then we'll show it tens of
thousands or hundreds of thousands, or ultimately millions of images.

Speaker 2 (06:13):
And slowly rewire.

Speaker 4 (06:15):
Its neurons until it can start to identify things. Now,
this had been proposed going all the way back to
the nineteen forties, but nobody had ever been able to
get it to work. And the missing piece, it turns out,
is just raw computing power. Geoffrey Hinton, who they call
the Godfather of AI, said, you know, the
question we never thought to ask was, what if we

(06:36):
just made it go a million times faster? And that's
what Nvidia's hardware did. It made AI, and these neural
networks in particular, train and learn a million times faster.
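The training loop Stephen describes, synthetic neurons repeatedly exposed to labeled data and slowly rewired, can be sketched in a few lines. Everything below (the data, the sizes, the learning rate) is an invented toy for illustration, not anything from Nvidia or the interview:

```python
# Toy version of the loop described above: a single layer of "synthetic
# neurons" is repeatedly shown labeled examples and nudges its weights
# ("slowly rewiring") until it classifies them correctly.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "images": 100 points in 2D, labeled by which side of a line they fall on.
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)  # the trainable weights (the "neurons" in the file)
b = 0.0
lr = 0.1

for _ in range(500):  # repeated exposure to the training data
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # forward pass: mostly matrix math,
    grad = p - y                            # which is exactly what GPUs accelerate
    w -= lr * (X.T @ grad) / len(y)         # nudge the weights a little
    b -= lr * grad.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = float(((p > 0.5) == y).mean())
print(f"training accuracy: {accuracy:.2f}")
```

On GPUs the same loop runs over millions of images and billions of weights; the arithmetic is identical, just scaled up.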

Speaker 1 (06:46):
We actually had Hinton on the podcast earlier this year.
Was he early on one of these mad scientists whose
research was unpopular, and therefore started buying Nvidia chips?

Speaker 2 (06:56):
Yes, very much so, and there were a few.

Speaker 4 (06:57):
There was a community of these guys.

Speaker 2 (06:59):
It wasn't just him.

Speaker 4 (07:00):
There were a number of other people doing it, most
of whose work has now been recognized. But they were
very much on the margins of computer science. They couldn't
get five thousand dollars in research funding, but they could
get enough money to afford two five-hundred-dollar
Nvidia retail graphics gaming cards, which they did. And Hinton
had a graduate student named Alex Krizhevsky, who was just

(07:21):
an ace programmer, and he turned the neural net that
he ran on these cards into something called AlexNet,
which then started to recognize images better than any AI
had ever done before. Like, it smashed the paradigm, and
so that engineered, around twenty twelve or twenty thirteen, a
paradigm shift in AI, and since then everything that has

(07:42):
happened has been a repeated application of the thing Alex
discovered: that if you took neural nets and ran
them on Nvidia technology, you would get a very powerful result.

Speaker 1 (07:53):
So this is all fascinating, but to some it may
sound a bit geeky and like inside baseball, which is
why I was very attracted to a quote in your
recent New Yorker piece, which said, if Americans want to
retire comfortably, Nvidia has to succeed. Yes. And I'm curious what
your conversations with the editors were around that.

Speaker 2 (08:13):
Around that.

Speaker 4 (08:13):
Oh no, that's that's straightforward. That one actually sailed right
through fact checking.

Speaker 2 (08:17):
There were no questions.

Speaker 4 (08:18):
What has happened since Alex invented this thing in his
bedroom is that we scaled it up from two graphics
cards to two hundred thousand or more, and we have
plans to scale up to two million
and then twenty million, right? So this is the data
center boom which we're going through right now. It's a
new industrial revolution. We're basically building these giant barns full

(08:40):
of Nvidia microchips to run calculations to build better
AI twenty-four seven, around the clock, and it's
one of the largest deployments of capital in human history.
This has made Nvidia the most valuable company in
the world, and it has created a situation where Nvidia
stock is more concentrated in the S and P

(09:02):
five hundred than any stock since they started keeping track.
And actually Microsoft, which is the second biggest, has that
valuation largely because they're building these sheds and renting out
Nvidia equipment.

Speaker 2 (09:12):
So that's linked too.

Speaker 4 (09:14):
So think about that: fifteen percent of the stock market
is these two stocks, right? They have to succeed. Americans
in particular are usually invested passively, through index funds, in
something that looks exactly like the S and P five hundred.
So you know, if Nvidia crashes, it's going to
create a lot of pain throughout the economy.

Speaker 1 (09:33):
I want to talk about the data centers, but before that,
I want to talk about the man who founded
Nvidia and is its CEO today, Jensen Huang. He
seems to pop up everywhere, but he also seems to
be more inscrutable. I mean, who is he? And do
you see him as different from Zuckerberg and Altman and
Bezos in some significant way?

Speaker 4 (09:51):
Of all executives, Jensen most resembles Elon Musk, because he
is an engineering wizard. Bezos is smart as hell, and
so is Zuckerberg. But ultimately they're kind of software guys. You
know, they're coming at the computer from the keyboard and
the terminal. Jensen is totally different. He approaches computing from
the circuit up. He has a degree not in computer science,
but actually in electrical engineering. Okay? So for Jensen, the

(10:14):
computer is a piece of hardware that runs calculations in
a microchip, and he literally designed those microchips on paper
at the beginning of his career.

Speaker 2 (10:24):
And that's all he's ever done.

Speaker 4 (10:26):
And this is a little bit why, even though he
runs the most valuable company in the world, it's a
little baffling to people. Nothing Jensen makes
is really that accessible. It's all deep inside the computer.

Speaker 1 (10:37):
There's a quote in your piece that I liked, where
he said, I find that I'm best when I'm under adversity.
My heart rate actually goes down. Anyone who's dealt with
a rush in a restaurant knows what I'm talking about.

Speaker 4 (10:48):
Yeah, yeah, I mean he started out at Denny's, so
his first job was basically I think he was a
bus boy at first, and then graduated to dishwasher and
ultimately became a server. And I was talking to someone
in the company and she's like, you know what Jensen
is actually a lot calmer and more compassionate with his
employees when things are going wrong. It's when the company's

(11:08):
stock price is way up and it looks like everything's
going great that he really becomes much more cruel, like
much much meaner to everybody. So he is actually in
some ways a nicer person when things are going wrong.
When he succeeds, it makes him nervous.

Speaker 1 (11:22):
One of his colleagues described working with him as
kind of like sticking your finger in an electric socket.
That's quite the metaphor.

Speaker 4 (11:30):
It's one hundred percent accurate. I mean, I've interacted with Jensen.
It is like sticking your finger in the electric socket.
He's so tightly wound. He expects so much to happen
in every conversation. Just to even start talking to him,
you have to be totally up to speed. He's not
going to waste any time. He's not going to suffer fools.
And he's also really intense and unpredictable, and you just
don't know where he's

Speaker 2 (11:49):
Going to go in any conversation.

Speaker 4 (11:51):
And you know, he has what I would describe as
somewhat self-indulgent performances of anger from time to time,
and that's especially true if you're one of his executives.
If you're not delivering, he's going to stand you up
in front of an audience of people and just start
screaming at you. I mean really yelling, and it's
not fun, and he will humiliate you in front of
an audience. I think people at Nvidia have to develop

(12:13):
very thick skins. He actually did this to me at
one point, so I kind of know exactly. Oh yeah, yeah. Well,
I kept asking him about the future. Jensen does not
like to speculate. He doesn't actually have a science fiction
vision of what the future is going to look like.
He has a data-driven vision, from engineering principles, of
where he thinks technology is going to go. But if

(12:33):
he can't see beyond that, he won't speculate. But I
noticed that other people at this firm would talk about it,
and I really wanted to get into his imagination. I
guess I would say, of where he thinks all this
can go. So I presented him with a clip from
Arthur C. Clarke discussing the future of computers, and this
is back from nineteen sixty-four, but it was kind
of anticipating the current reality we're in, where we

(12:54):
would start training mechanical brains, and those brains would train
faster than biological brains and eventually exceed biological brains. And
so I showed this clip to some other people at
Nvidia, and they kind of lit
up and started giving these grand soliloquies about the
future that were very beautiful and articulate.

Speaker 2 (13:10):
And I was hoping to get that response from Jensen.

Speaker 4 (13:14):
Instead, he just starts screaming at me about how
stupid the clip was, how he didn't give a shit
about Arthur C.

Speaker 2 (13:19):
Clarke.

Speaker 4 (13:19):
He never read one of his books, he didn't read
science fiction, and he thought the whole line of questioning
was pedestrian, and that I was letting him down by
asking it, that I was wasting his time. Despite having written his biography,
Jensen remains a little bit of a puzzle, in
that I cannot tell you what's going on inside his brain.

Speaker 2 (13:34):
Well, but I will say this: he's extremely neurotic,
and I don't even mean this in
a clinical sense.

Speaker 4 (13:42):
I just mean that, by his own admission, he's totally
driven by negative emotions. So even though he's on top
of the world, I think his mind is telling him constantly,
you're going to fail, this is a temporary thing, and
Nvidia is going to go back down again. You know,
twice in his tenure as CEO, Nvidia's stock price
has retreated by almost ninety percent.

Speaker 1 (14:01):
What could make that happen now? What keeps him up
at night today?

Speaker 2 (14:04):
What could happen today? Anything? Any number of things.

Speaker 4 (14:07):
This would not be comprehensive, but there are three big risks.
The first is just competition. Nvidia is making so much
money, and everyone's seeing that, and this attracts competition in
the same manner that chum attracts sharks, right? It's like
throwing blood in the water for other microchip designers when you
earn a seventy, eighty percent gross margin, which is
what they do on

Speaker 2 (14:26):
Some of these chips.

Speaker 4 (14:28):
So Google has built a whole alternative stack for AI
computing around their own kind of platform, and
they're starting to lease that out to new customers. That's
a big risk. There's a big risk that Chinese companies
build alternative, cheaper stacks to what Nvidia does. Intel had
ninety, ninety-five percent of the CPU market at one
point in this country. Now they're falling apart. Conquering one

(14:50):
cycle in microchips is no guarantee that you will conquer
the next one, and history demonstrates that quite clearly.

Speaker 2 (14:56):
So that could happen. The second risk: basically, what happens

Speaker 4 (15:01):
in the data center is we're doing a mathematical operation
called a matrix multiplication, and it's extremely computationally expensive to
do this. So without getting too technical, basically, to train
an AI right now we have to do ten trillion
trillion individual computations, which is more than the number of

(15:21):
observable stars in the universe. However, maybe it's possible that
we find some more efficient way of doing that. Maybe
there's a way that requires only ten billion trillion, or
even a hundred trillion. Then Nvidia's stock price would go
down, because we wouldn't have to build so many data centers;
we'd have a more efficient training solution. All of this
is a more complex way of saying maybe there's a

(15:42):
technological solution. You know, right now we're
brute-forcing our way to AI. It's a heavy industrial problem.
We're talking about building nuclear power plants to bring these
things online. I think maybe it's possible that there's a
technological solution that trains these things faster, and if we
discovered it, we wouldn't have to buy so many

(16:02):
Nvidia microchips; that would also make their stock price go down.
But the third thing is, basically, right now, for the
last thirteen or fourteen years, the more microchips we stuff
into the barn, okay, the more microchips we throw at
this problem, the better AI we get.

Speaker 1 (16:18):
And this is the scaling law, in quotes.

Speaker 2 (16:22):
Okay, it is not a law of the universe that this
has to happen.

Speaker 4 (16:26):
It's not some immutable, physically proven thing from first principles
of physics that the more microchips we have, the better
AI we have. In fact, no one is entirely sure
why this works. Presumably, like most other forces in the universe,
this will hit some kind of s curve. It'll start
to plateau or level off at some point. We're not
there yet. But if we did hit a plateau, if

(16:49):
stuffing more microchips into the barn only resulted in marginally
better AI, or didn't improve it at all, I think
Nvidia's stock price would go down a lot, and
I think it would make this whole era look
kind of like a bubble if that were to happen.
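The "ten trillion trillion computations" figure can be sanity-checked with back-of-the-envelope arithmetic. A common rule of thumb from the scaling-law literature is roughly six floating-point operations per parameter per training token; the model size and token count below are illustrative assumptions, not numbers from the interview:

```python
# Back-of-the-envelope training-cost arithmetic using the ~6 * params *
# tokens rule of thumb. Both inputs are assumed values for illustration.
params = 1e12    # a trillion-weight model, as discussed
tokens = 15e12   # assumed size of the training set, in tokens

total_flops = 6 * params * tokens
print(f"{total_flops:.0e} floating-point operations")
```

That lands on the order of 10^25 individual computations, the same "ten trillion trillion" scale Stephen cites; halving the cost per operation, or the operations needed, directly halves the number of chips a lab has to buy.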

Speaker 1 (17:03):
Now, is this why Nvidia has kind of become
the bank of the AI revolution? In a sense, they're
wanting to lend money and lock other companies into the
current paradigm of AI, maybe even hoping to defensively prevent
other, more economical approaches from emerging, consolidating Nvidia's position.
I mean, how much of a chess game is this

(17:24):
in terms of thinking about the future of computing for
Jensen and others.

Speaker 4 (17:27):
Oh yeah, it's chess, and Jensen is, I mean, an
expert chess player at this kind of chess.

Speaker 2 (17:32):
He's really good at thinking about the.

Speaker 4 (17:35):
competitive positioning of where he is and where other people are.
You know, in Nvidia's early days, the GPU
market back in the video game days was really crowded.
At one point there were fifty or sixty participants in
this market. I talked to David Kirk, who was the
chief scientist at Nvidia during this time. Jensen would go
into his office, and on a whiteboard he would have
a list of all his competitors up there, and not

(17:56):
only that, they would have a list of who the
best engineers working at those competitors were, and then they
would come up with plans to poach those engineers and
get them to come work for Nvidia, so that
they would drain the brainpower of their competitors and
force them to collapse. I've compared the early graphics days
to the movie Battle Royale, where all the kids are

(18:17):
on the island and have to kill each other. It
was like that. There were like forty competitors and only
one could survive. Jensen won. He won the Battle Royale.
He was the last guy standing.

Speaker 2 (18:26):
I mean, he won the knife fight.

Speaker 4 (18:28):
So he is unbelievably ruthless and unbelievably good at identifying
where the competition is and what he could do, not
just to beat them in the marketplace, but actually to
hollow out their engineering talent.

Speaker 1 (18:41):
Who are Nvidia's biggest customers, and what are they
buying the chips for?

Speaker 4 (18:46):
Okay, it's a bit complex. The biggest customers, they don't
disclose it. Almost certainly it's Microsoft, and then probably Amazon.
What these companies do is they train some AI on
their own, but what they're really doing is building the sheds;
they're the ones building the
data centers. So Nvidia sells them the microchips, and
then kind of the ultimate end user is a frontier

(19:08):
AI lab, so that could be something like Anthropic or
OpenAI. So essentially the way to think about this
is: Nvidia sells the microchips to Microsoft or Amazon
or maybe Oracle. Oracle builds and operates a gigantic data
center with one hundred thousand microchips in it that takes
as much power as, like, a small city, and then

(19:28):
clients like OpenAI come and lease it out from them.

Speaker 1 (19:39):
After the break: why data centers are worried about break-ins.
Stay with us. Let's talk about data centers. There's something

(20:03):
weird about data centers, because on the one hand they
are literally the most boring thing in the world, and
on the other hand they are unbelievably fascinating. I mean,
you mentioned in the article James Bond-style security
consultants defending data centers. Like, how do you explain what
is going on here?

Speaker 2 (20:22):
Okay, So.

Speaker 4 (20:24):
Basically, this is kind of the most amazing
thing you can imagine: this giant barn, racks of computers
as far as the eye can see. What those computers
are doing is processing the training data for the actual
file of AI, and that file usually contains, let's
say, I'd guess, like a
trillion weights, a trillion neurons. Okay, well, we can store

(20:47):
a trillion neurons on a small external hard drive, like,
you can store them in something the size of a candy bar.

Speaker 1 (20:52):
Okay.
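The candy-bar claim checks out with simple arithmetic. How many bytes a trillion weights take depends on the numeric precision the lab stores them in; both precisions below are common choices, assumed here for illustration:

```python
# Storage needed for a file of a trillion weights, at two common
# numeric precisions (illustrative assumption, not interview detail).
weights = 10**12  # a trillion synthetic neurons/weights

for name, bytes_per_weight in [("16-bit", 2), ("32-bit", 4)]:
    terabytes = weights * bytes_per_weight / 10**12
    print(f"{name}: {terabytes:.0f} TB")
```

Two to four terabytes: comfortably a small external hard drive, which is why physically walking the file out of the building is treated as a real security threat.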

Speaker 4 (20:53):
So, at least in theory, if somebody were to break
into a data center and extract the information in that
little file, they would basically own ChatGPT six.
They would own all of OpenAI's IP, if they could
just break it out of the data center. And this
is actually a real concern, probably not so much from
petty thieves but from, like, state-sponsored actors. Like maybe

(21:15):
China wants to know what's on OpenAI's equipment before
it launches, right? It's almost like a
corporate espionage problem. And so a couple things happen in response.
First of all, the data center operators do not want
to tell you where these things are even located.

Speaker 1 (21:28):
So that's the location information. But they're huge. I mean, how well can
you hide them?

Speaker 4 (21:32):
Well, they're huge, but they're also extremely boring. They
just look like a giant industrial warehouse, and often there's
no way to distinguish one from the next giant industrial warehouse.
Like, are they moving pallets of shoes around in there,
or is it a data center?

Speaker 2 (21:46):
I don't even really know now.

Speaker 4 (21:48):
I think if you had a trained eye and knew
what electrical equipment to look for, you would see it.
But it's more just kind of like keeping it all
a big secret. You're right, some of them are getting
so big that there's no hiding this. But still, they
don't let you know that they're data centers, and they
look boring as hell. They're grayscale buildings, you know, windowless sheds.

Speaker 1 (22:06):
I know you can't say where it was, but you
did get to go to the Microsoft data center. Like,
describe arriving there: what it looked like, what it smelled like,
who was there. I mean, really take us into the scene.

Speaker 2 (22:17):
There's a campus. It is like a giant plot of land.

Speaker 4 (22:20):
I will say it was in the middle of nowhere
that they had just taken over and were building into
this massive data center, and it was.

Speaker 2 (22:28):
In an agricultural community.

Speaker 4 (22:29):
In fact, directly across the street from this data center
was a dilapidated shed with rusted cars in the driveway,
stray dogs wandering around, and cans of Modelo, like, littering
the yard. And then it's slowly being taken over by
these giant computing barns, not just Microsoft's, but everywhere you look,
and there's redundant one-hundred-foot power lines everywhere, right?
So it just looked like, you know, all the farmers
were being kicked out. It looked like an invasion by aliens.

Speaker 3 (22:56):
So you go in.

Speaker 4 (22:57):
There's multiple security checkpoints. I think there were three vehicle checkpoints
I had to go through to get to kind of
the heart of the data center. Then you go in,
and it's Microsoft, so you have to sign fifteen NDAs
and watch a PowerPoint and put on all the safety
equipment, and then you're inside. Now, inside is a little underwhelming.
It's a giant concrete barn, just full of repeated racks
of equipment as far as the eye can see. It's

(23:19):
not necessarily inspiring of poetry or anything. It feels like
being inside of an industrial process, which it is, and
not a very beautiful one either. There's cable everywhere, pipes
for water and air, cables for electricity, cables for transporting
data around, and then there's repeated power banks. There's batteries,
there's power stations, there's industrial HVAC systems, and all of

(23:40):
this is just to keep the microchips running twenty-four
seven, to keep the AI processing running. I did
ultimately kind of sweet-talk my way into the control room,
which I wasn't supposed to be in initially, so that
was kind of cool. And the guy in the control
room showed me what was happening, and it's just this
power spike: the power going up and the power
going down, the power going up and the power
going down. When the power was going up, the microchips

(24:01):
were kind of like moving all at once to do
a bunch of matrix multiplications, and then when the power
went down, they were writing the results to file. And
this happened over and over and somewhere in that data
center where there was that tiny little file of numbers,
that tiny little collection of synthetic neurons, and with every
pulse there it just got a little bit smarter.

Speaker 1 (24:20):
Did the pulse make you think of life? Biological life?
Yes and no.

Speaker 4 (24:26):
They're calling these things neurons, right, So these systems, while
they are inspired by biology, don't necessarily work in the
same way as biology. Still, it's certainly inspired by the brain,
and it seems to have emergent capabilities, like emergent biological capabilities,
kind of like a human brain.

Speaker 2 (24:45):
I'll tell you a fascinating story.

Speaker 4 (24:46):
I was talking to the product head, the original product
head for ChatGPT, who launched it, and he was like, yeah,
we put it up and we just kind of walked away.
We didn't think it would be that popular. And the
first place that really started directing traffic to ChatGPT
was a Reddit board in Japan. He was like, this
came as a great surprise to me, because I had

(25:07):
no idea it could speak Japanese. That was something it
had learned, and empirically, one of the reasons we put
it out there was to test what it could do.
So it came as a surprise to us that this
thing could speak Japanese well enough to attract a large
and in fact ravenous Japanese user base. And so when
you train these things, you actually don't know what they
can do at the end. It's often a surprise to you,

(25:29):
even the creators.

Speaker 1 (25:30):
But there is this life/not-life thread throughout your piece.
I mean, you mentioned being kind of desperate for human
contact being led through these data centers. And you mentioned
one of the data center founders from CoreWeave talking
about wanting to hire people who can endure a lot
of pain. What is this pain, this brutality, this inhuman sort

(25:53):
of set of ideas around data centers?

Speaker 4 (25:55):
Yeah, it's a lot like working in a printing press.

Speaker 2 (25:57):
It's a heavy industry.

Speaker 4 (25:58):
It's extremely loud inside the data center, especially CoreWeave's.
I mean, I couldn't hear myself think. Actually, if you
work a long time in a data center, you have to
wear both earplugs and then, over that, a set
of protective cans. So you've got to do kind of
like two kinds of ear protection. And even then,
long-term tinnitus can be a risk. And also you
can electrocute yourself; there's very high-voltage electrical equipment

(26:19):
running through there. It's just not an easy place to work.
And not only that, when Nvidia rolls out a new
set of microchips, it is a scramble to put them online.
Every second that you don't have them up for customers
available to use, it's costing you money. So the tech
at Microsoft I talked to told me he'd actually

(26:40):
gotten a deployment of Nvidia microchips on New Year's
Eve, and then spent the entire night setting up the
rig that particular night, just to make sure it was
available for customers on New Year's Day.

Speaker 2 (26:48):
And the core weave guys it was the same thing.

Speaker 4 (26:50):
They were like, yeah, we were missing a particular component,
and it was like a forty-dollar component, but we
couldn't find it anywhere. We had to get this
thing up and running, so we hired a private jet to have
a guy fly the component down from Seattle, just
so we could install it in our data center the same day.

Speaker 2 (27:06):
We couldn't wait even one more second.

Speaker 4 (27:08):
So it's a race, you know, it's absolutely a race
to get this equipment online because demand for AI training
is just insane.

Speaker 2 (27:15):
It's through the roof.

Speaker 4 (27:17):
It's four or five years of demand pent up. And a race
to where, I mean, do you think that
we are in the midst of architecting the future of humanity?

Speaker 1 (27:27):
Or is this one of the world's great boondoggles? A
tremendous financial cost bitols so energy cost to communities and
to the world's environment.

Speaker 2 (27:37):
It's not a boondoggle.

Speaker 4 (27:39):
This is not NFTs, right? This is not some stupid
bubble based on nothing. Even if this goes down financially,
what has been achieved here from a technological perspective is extraordinary,
and these systems keep getting better. I think maybe it's moving
so fast that the public just doesn't have a sense
of how much these things are improving and how fast.

Speaker 2 (27:58):
Now, having said that, yes, it can all flop, but.

Speaker 4 (28:01):
The core technological innovation here is real and it's going
to transform society.

Speaker 1 (28:06):
Okay, but bridges aren't a boondoggle. But bridges to nowhere
are a boondoggle, right?

Speaker 4 (28:11):
Okay, so yes, some bridges to nowhere are going
to get built, and in fact, some bridges to nowhere
have been built. Not everyone has OpenAI's programming talent,
all right? And so if you attempt to build a
world class AI and you don't have the juice, you
just end up producing a very expensive piece of
vaporware and squandering a lot of money. That has happened

(28:32):
multiple times already, and it will probably continue to happen.

Speaker 2 (28:35):
So that's a boondoggle. Still, having said that, where is
all this heading?

Speaker 4 (28:39):
You know, maybe we're gonna make ourselves redundant. I don't know,
it seems like we could if we wanted to. Maybe
we won't, but we could do that, and that's a
little scary.

Speaker 1 (28:50):
You recently wrote a piece for the New York Times with
the headline "The AI Prompt That Could End the World."
What's the AI prompt that would end the world?

Speaker 4 (28:58):
The AI prompt that would end the world is: someone
gets ahold of a machine that has agentic function. Okay,
so it can take real-world actions, and they say
to it, do anything you can to avoid being turned off.
This is your only imperative. If you gave that prompt
to the wrong machine, it's kind of hard to say
what it would do, but it might start to secure

(29:20):
its own power facilities so that it could not be
turned off. Or it might start to blackmail or coerce
humans to stop them from turning it off, or maybe even
attack humans that were attempting to turn it off.

Speaker 2 (29:31):
Now, it wouldn't do this.

Speaker 4 (29:32):
With the right training, we could kind of like program
it not to do this, but it's hard to know
if we're even training it correctly. Remember what I said:
they didn't know it could speak Japanese. That was a
surprise to them. So these things can have capabilities that
the designers are not aware of and which are only
discovered empirically. That's very scary if we're giving these things
access, as we plan to do, to control real-world systems,

(29:54):
and we don't really know what they're capable of.

Speaker 2 (29:56):
This is called prompt engineering.

Speaker 4 (29:58):
It's kind of an emergent area of science almost because
nobody really knows how these things.

Speaker 2 (30:03):
Respond to prompts. It's completely empirical.

Speaker 4 (30:05):
And I think with respect to these particular prompts, what
you're most afraid of is that somehow, even inadvertently, you
introduce a survival instinct into the machine.

Speaker 1 (30:16):
We're already seeing them, aren't we? I mean.

Speaker 4 (30:18):
Kind of, but the machine does not
have a survival instinct in the way that you and
I do, right? It's not the product of five hundred
million plus years of kill-or-be-killed Darwinian evolution. Right,
like we will live one way or another. Our species
will fight to the death and kill anything we have
to in order to survive. And that's every species on this planet.

Speaker 2 (30:39):
It's all in there. It's a struggle. It's a struggle
to the.

Speaker 1 (30:41):
Death on Earth.

Speaker 4 (30:42):
You know, the machine isn't trained in that way. It
doesn't have that survival impulse. It didn't survive multiple extinction
level events. It doesn't sexually reproduce, it's not interested in
the welfare of its children, et cetera.

Speaker 2 (30:53):
If that makes sense.

Speaker 4 (30:55):
But you could inadvertently maybe give it some of these capabilities,
and if you did, it might be unstoppable.

Speaker 1 (31:02):
Yeah. I think one of the other interesting things that
came across in your piece is that historically we've thought
about humans and animals on one side, and on the other
side synthetic stuff like computers. And it's not so much
that synthetic stuff like computers has to become more lifelike
or internalize some kind of survival drive or reproduction drive,

(27:24):
but that computers can now meaningfully intrude upon and interfere
with the biological side, and in particular when it comes
to synthesizing new viruses.

Speaker 4 (31:36):
That's right. The AI has the capability, at least in theory,
and especially it will have this capability in spades in
years to come, to synthesize a lethal virus, right? To
synthesize a lethal pathogen like super covid, covid with
like a ninety nine percent death rate. It could do
that if it wanted to, better than a human could. Okay,
if this fell into the wrong hands, somebody with

(31:57):
an apocalyptic mindset, at least in theory, something like
this could be built. Now, the designers are very aware
of this risk, and in fact, in some ways this
is like the risk that they were most afraid of
to begin with. To prevent people from doing this, they
do a lot of fine-tuning as a second round,
but inside the machine that capability is still there. They
never completely eliminate it. They just kind of make it

(32:19):
difficult for people to make those requests of the AI,
and they flag them when people do. This creates a
fear of what might be called like a lab-leak
scenario, before the AI is made public. Internally, the developers
are building it, right, and that AI will do anything
they ask it to. And so in theory, if you
got access to one of those pre-production AIs and

(32:39):
asked it to do gnarly stuff like synthesize viruses and
attached it to some kind of agentic model, like, yeah,
you could reenact The Stand, right, if you wanted to.

Speaker 1 (32:48):
Did you hear any mitigation strategies that gave you comfort?

Speaker 4 (32:52):
No. No, I mean, what's happening now is a race condition.
It's like the nuclear arms race. Nobody can slow down.
No matter what they say, they just have to keep
building bigger and bigger and better and better systems. The
fear among the people who would regulate AI, and there's functionally
no regulation at all, is that we can't regulate it
because then China will pull into the lead. And actually
the fear is basically accurate. So you have something that

(33:15):
resembles arms race conditions, both among the frontier labs themselves
as they compete to win what I have described as
probably the single greatest prize in the history of capitalism.
If you could get dominant status with ChatGPT, where
everyone was on it, that would be worth so much money,
probably more than Nvidia is worth. And then also
you can't lose to China. You can't have China have
better AI than the US. That's kind of the mindset

(33:37):
of US lawmakers right now.

Speaker 2 (33:38):
It's probably true.

Speaker 4 (33:40):
So we're in a dangerous race to build ever more
capable systems with less and less oversight, and I don't
perceive how we would stop. I think what will have
to happen is that some kind of big accident will
have to happen before people wake up to the danger.

Speaker 1 (33:55):
On that happy note, Stephen Witt, thank you.

Speaker 2 (33:58):
Let me say this too.

Speaker 4 (34:00):
There's a lot of very positive outcomes here. There is
a path where this just turbocharges everything.

Speaker 2 (34:04):
Already, I have.

Speaker 4 (34:05):
Mostly experienced positive outcomes from AI. I'm worried it's making
me dumber, I must say. It's making me a
worse writer and a worse thinker. But it's an extraordinarily
good resource for doing, like, fact checking for the New Yorker,
for example. You know, a few years ago these models hallucinated
and you couldn't trust them. But now you ask the
AI to go, like, dig up sources on the web,
and it's really good at it. It's better than Google,
way better. It saves me a ton of time. So

(34:28):
I think this, self-driving cars, all this stuff, medicine.
AI pioneer Demis Hassabis believes we're going to cure every
disease with AI. Maybe it's true. The capabilities are there.
Of course, if you have the capability to cure every disease,
you also have the capability to synthesize new and scary stuff. But
if we can control it, if we can bring it
under control and use it to create positive outcomes for humanity,

(34:49):
we could be entering an age of prosperity and.

Speaker 2 (34:51):
Wonder. It's possible.

Speaker 1 (34:53):
Well, thank you so much. Thank you for having me.

(35:18):
That's it for this week for Tech Stuff. I'm Kara
Price and I'm Oz Woloshyn. This episode was produced by
Eliza Dennis, Tyler Hill, and Melissa Slaughter. It was executive
produced by me, Kara Price, Julia Nutter, and Kate Osborne
for Kaleidoscope and Katrina Norvell for iHeart Podcasts. Jack Insley mixed
this episode. Kyle Murdoch wrote our theme song.

Speaker 3 (35:38):
Join us on Friday for the Week in Tech, where
we'll run through the headlines you need to follow.

Speaker 1 (35:43):
And please do rate and review the show and reach
out to us at tech Stuff podcast at gmail dot com.
We want to hear from you.
