Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hi, it's Oz Woloshyn here and Cara Price, and we're
taking some time off for the holidays. We'll be back
with new episodes starting in January. In the meantime, instead
of leaving this feed empty, we wanted to share one
of my favorite episodes from last year. This week, we're
re-airing my conversation with Stephen Witt from November. He's an
author and frequent contributor to The New Yorker who wrote
(00:20):
the book on one of the biggest companies in the world.
You may have heard of it: Nvidia. In this episode,
we hear how the CEO, Jensen Huang, went from working
at Denny's to being the world leader in manufacturing AI chips.
Hope you enjoy it, and thanks for listening. Welcome to
(00:51):
Tech Stuff. I'm Oz Woloshyn, here with Cara Price. Hey, Cara. Hi, Oz.
So, years ago, around the time that we were
reporting our first podcast together on the forthcoming AI revolution,
now no longer forthcoming, you invested in Nvidia,
which is up over one hundred x since then. Congratulations.
Speaker 2 (01:14):
You know, I just felt when we reported on it
that it was going to be the future, and so
I did invest in it, and you know, very happily.
Speaker 1 (01:22):
So now, what was it? Was there something that tipped
you over the edge back then to do it?
Speaker 2 (01:28):
Well, I was just sort of thinking to myself, this is
the thing that's going to power everything. So obviously it's
something that people are going to be paying attention to,
and investors are going to be paying attention to, and
it just made a lot of sense to me at the time.
Speaker 1 (01:41):
We know Warren Buffett is retiring this year, so there's a
slot open. Nvidia is, of course, the most valuable
company in the world today, recently topping five trillion dollars
in value, and of course a lot of people are
crying bubble, not just for Nvidia but for the AI
industry as a whole. Actually, Cara, curiously, you held your stock?
Speaker 2 (02:01):
I held. I... look, I needed some things, so I
sold some, but I still have a lot.
Speaker 1 (02:08):
I have enough. And you're gonna hold on? You're not
worried about the bubble? I think I am gonna hold on.
But the questions people have are: what if AI doesn't
get better infinitely as it scales? What if people invent
new chips that are far more efficient than the Nvidia chips?
And what if the adoption of AI by other companies
doesn't give them the results that they hope for financially?
(02:31):
And so I talked about all of that with somebody
who knows Nvidia better than anyone else. In fact, he
literally wrote the book on it: The Thinking Machine: Jensen
Huang, Nvidia, and the World's Most Coveted Microchip. He
actually interviewed Huang six times.
Speaker 3 (02:47):
He's moody, and you know, he has what I would
describe as somewhat self indulgent performances of anger from time
to time.
Speaker 1 (02:54):
That's Stephen Witt, and he actually got interested in Nvidia
shortly after ChatGPT took the world by storm.
Speaker 3 (03:01):
I had been using ChatGPT and I was like, wow,
this thing is amazing. This is like twenty twenty two,
and I'm like, I am cooked. There's not going to be
room for me as a writer. This thing can already
write almost as well as I can, and actually it
writes better than I did when I was young.
Speaker 1 (03:16):
As Stephen dug around to understand what was powering this technology,
he got more and more interested in the company building
its physical infrastructure.
Speaker 3 (03:24):
What brought me to Nvidia was I was trying
to write about OpenAI and them, and it was
just so crowded there, a million journalists swarming around. I
was like, there's got to be some other story here.
And what I've done as a journalist is look for
big movements of money that aren't being covered. And I
looked at Nvidia's stock price, and in my
mind they were still the gaming company, and I was like,
what the hell is going on here? This company's worth
(03:46):
a trillion dollars. And then as I started to investigate it,
I was like, oh, wow, they build all the hardware.
They build all the hardware that makes this stuff go.
That's fascinating. And then I kind of learned about Jensen,
and I was like, wait, this company has had the
same CEO through both the gaming and the AI days.
And not only that, he's also the founder.
He's been the CEO for thirty years. It's the same
(04:06):
guy all along.
Speaker 1 (04:08):
So Stephen wrote the book on Nvidia, but he also
wrote a great New Yorker piece recently about data
centers, and an essay for The New York Times about,
quote, the AI prompt that could end the world. So
he's really a farm-to-table thinker, from chips to
data centers to AI to the apocalypse. It was a
fun conversation.
Speaker 2 (04:26):
Yeah, as a writer, he seems to sort of be
at the center of everything in a way that I
find very compelling, And I actually want to know more
about the AI prompt that could end the world.
Speaker 1 (04:37):
Well, in that case, you have to listen to the
whole interview, because we talk about it right at the end.
Okay, here's the conversation with Stephen Witt. For the layman, what
is Nvidia and how did it become the most
valuable company in the world?
Speaker 3 (04:49):
Nvidia is basically a hardware designer. They make a special
kind of microchip called a graphics processing unit, and the
initial purpose of this thing was to just render graphics
in video games. So if you were a video gamer,
you knew who this company was, because you would actually
build your whole PC just around this Nvidia card.
(05:09):
So this was the engine that rendered the graphics on
your screen. Sometime around two thousand and four to two
thousand and five, scientists began to notice how powerful these
cards were, and they started hacking into the cards, like
hacking into the circuitry, to get to those powerful mathematical
functions inside the microchip. And Jensen Huang saw this and
(05:29):
he said, wait, this is a whole new market that
I can pursue. So he built the software platform that
turns the graphics card into basically a low budget supercomputer.
Now you may ask, who is this for? Well, it's
not really for established research scientists, because they can usually
afford time on a conventional supercomputer. It's for scientists who are
(05:53):
are sort of marginalized, who can't afford time on a
supercomputer and whose research is.
Speaker 1 (05:58):
Out of favor.
Speaker 3 (06:00):
So it's for mad scientists. It's for scientists who are
pursuing unpopular or weird or kind of offbeat scientific projects.
But ultimately, the key use case turned out to be AI,
and specifically a branch of AI that most AI researchers
thought was crazy called neural network technology. And what you're
(06:23):
doing here is you're building software that kind of resembles
the connections in the human brain. It's inspired by the
biological brain. Actually, you build a bunch of synthetic neurons
in a little file, and then you train them by
repeatedly exposing them to training data. So what this could mean,
for example, is: if we're trying to build a neural network
to recognize objects, to do computer vision, then we'll show
(06:45):
it tens of thousands or hundreds of thousands, or ultimately
millions of images and slowly rewire its neurons until it
can start to identify things. Now, this had been proposed
going all the way back to the nineteen forties, but
nobody had ever been able to get it to work. And
the missing piece, it turns out, is just raw computing power.
(07:06):
Geoffrey Hinton, who they call the godfather of AI,
said, you know, the question we never thought to
ask was, what if we just made it go a
million times faster? And that's what Nvidia's hardware did.
It made AI, and these neural networks in particular, train
and learn a million times faster.
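A minimal sketch of what Witt is describing here, in code: a small "file" of synthetic neurons (two weight matrices) repeatedly exposed to training data until it can classify examples. The toy task, network size, and learning rate below are illustrative assumptions, not details from the interview; production vision models like AlexNet differ in scale and architecture, not in the basic loop.

```python
# A tiny neural network trained by repeated exposure to data, a scaled-down
# sketch of the idea described above. Task, sizes, and learning rate are
# illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Toy "training data": 2-D points, label 1 if the point is inside the unit circle.
X = rng.uniform(-2, 2, size=(1000, 2))
y = (np.sum(X**2, axis=1) < 1.0).astype(float).reshape(-1, 1)

# The "synthetic neurons": two small weight matrices plus biases.
W1 = rng.normal(0, 0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):          # repeated exposure to the training data
    h = np.tanh(X @ W1 + b1)      # hidden-layer activations
    p = sigmoid(h @ W2 + b2)      # predicted probability for each point

    # Gradients of the cross-entropy loss (backpropagation by hand).
    dz2 = (p - y) / len(X)
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T * (1 - h**2)
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)

    # "Slowly rewire the neurons": nudge every weight against its gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy after 2000 passes: {accuracy:.2%}")
```

Scaled up by many orders of magnitude, this same forward-backward-update loop is what the data center hardware runs; the "million times faster" point is about doing these matrix operations massively in parallel on graphics chips rather than one at a time on a CPU.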
Speaker 1 (07:23):
We actually had Hinton on the podcast earlier this year.
Was he one of these early mad scientists whose
research was unpopular and who therefore started buying Nvidia chips?
Speaker 3 (07:33):
Yes, very much so. And there was a community of
these guys. It wasn't just him. There were a number
of other people doing it, most of whose work has
now been recognized. But they were very much on the
margins of computer science. They couldn't get five thousand dollars
in research funding, but they could get enough money to
afford two five-hundred-dollar Nvidia retail graphics gaming cards,
(07:53):
which they did, and Hinton had a graduate student named
Alex Krizhevsky, who was just an ace programmer, and he
turned the neural net that he ran on these cards
into something called AlexNet, which then started to recognize
images better than any AI had ever done before. Like,
it smashed the paradigm. And so that engineered, around twenty
(08:13):
twelve or twenty thirteen, a paradigm shift in AI. And
since then, everything that has happened has been a repeated
application of the thing Alex discovered: that if you took
neural nets and ran them on Nvidia technology, you
would have a very powerful result.
Speaker 1 (08:30):
So this is all fascinating, but to some it may
sound a bit geeky and like inside baseball, which is
why I was very attracted to a quote in your
recent New Yorker piece which said: if Americans want to
retire comfortably, Nvidia has to succeed. Yes. And I
was curious what your conversations with the editors were around that.
Speaker 3 (08:50):
Oh no, that's straightforward. That one actually sailed right
through fact checking. There were no questions. What has happened
since Alex invented this thing in his bedroom is that
we scaled it up from two graphics cards to two
hundred thousand or more, and we have plans to scale
them up to two million, then twenty million, right?
So this is the data center boom which we're going
through right now. It's a new industrial revolution. We're basically
(09:14):
building these giant barns full of Nvidia microchips to
run calculations to build better AI twenty-four seven,
around the clock, and it's one of the largest deployments
of capital in human history. This has made Nvidia
the most valuable company in the world, and it has
(09:34):
created a situation where Nvidia stock is more concentrated
in the S and P five hundred than any stock
since they started keeping track. And actually Microsoft, which is
the second biggest, has that valuation largely because they're building
these sheds and renting out Nvidia equipment. So that's
linked too. So think about that: fifteen percent of the
stock market is these two stocks, right? They have to succeed.
(09:57):
Americans in particular are usually invested massively, through index funds,
in something that looks exactly like the S and P
five hundred, so you know, if Nvidia crashes, it's
going to create a lot of pain throughout the economy.
Speaker 1 (10:10):
I want to talk about the data centers, but before that,
I want to talk about the man who founded
Nvidia and is its CEO today, Jensen Huang. He
seems to pop up everywhere, but he also seems to
be more inscrutable. I mean, who is he, and do
you see him as different from Zuckerberg and Altman and
Bezos in some significant way?
Speaker 3 (10:28):
Of all executives, Jensen most resembles Elon Musk, because he
is an engineering wizard. Bezos is smart as hell, and
so is Zuckerberg. But ultimately they're kind of software guys. You know,
they're coming at the computer from the keyboard and the terminal.
Jensen is totally different. He approaches computing from the circuit up.
His degree is not in computer science originally, but actually
in electrical engineering. Okay, so for Jensen, the computer is
(10:52):
a piece of hardware that runs calculations in a microchip,
and he literally designed those microchips on paper at the
beginning of his career, and that's all he's ever done.
And this is a little bit why, even though he
runs the most valuable company in the world, he's
a little baffling to people. Nothing Jensen makes
is really that accessible. It's all deep inside the computer.
Speaker 1 (11:14):
There's a quote in your piece that I liked, where
he said: I find that I'm best when I'm under adversity.
My heart rate actually goes down. Anyone who's dealt with
a rush in a restaurant knows what I'm talking about.
Speaker 3 (11:25):
Yeah, yeah, I mean he started out at Denny's, so
his first job was basically I think he was a
bus boy at first, and then graduated to dishwasher and
ultimately became a server. And I was talking to someone
in the company and she's like, you know what, Jensen
is actually a lot calmer and more compassionate with his
employees when things are going wrong. It's when the company
(11:45):
stock price is way up and it looks like everything's
going great that he really becomes much more cruel, like
much much meaner to everybody. So he is actually, in
some ways a nicer person when things are going wrong.
When he succeeds, it makes him nervous.
Speaker 1 (11:59):
One of his colleagues described working with him as kind
of like sticking your finger in the electric socket. That's
quite the metaphor.
Speaker 3 (12:06):
It's one hundred percent accurate. I mean I've interacted with Jensen.
It is like sticking your finger in the electric socket.
He's so tightly wound. He expects so much to happen
in every conversation. Just to even start talking to him,
you have to be totally up to speed. He's not
gonna waste any time. He's not going to suffer fools.
And he's also really intense and unpredictable, and you just
don't know where he's going to go in any conversation.
(12:28):
And you know, he has what I would describe as
somewhat self indulgent performances of anger from time to time.
And that's especially true if you're one of his executives.
If you're not delivering, he's going to stand you up
in front of an audience of people and just start
screaming at you. But really I mean yelling, and it's
not fun, and he will humiliate you in front of
an audience. I think people at Nvidia have to develop
(12:50):
very thick skins. He actually did this to me at
one point, so I kind of know exactly what it's like. Oh yeah, yeah. Well,
I kept asking him about the future. Jensen does not
like to speculate. He doesn't actually have a science-fiction
vision of what the future is going to look like.
He has a data-driven vision, from engineering principles, of
where he thinks technology is going to go. But if
(13:10):
he can't see beyond that, he won't speculate. But I
noticed that other people at this firm would talk about it,
and I really wanted to get into his imagination, I
guess I would say, of where he thinks all this
can go. So I presented him with a clip of
Arthur C. Clarke discussing the future of computers, and this
is back from nineteen sixty four, but it was kind
of anticipating the current reality we were in, where we
(13:31):
would start training mechanical brains, and those brains would train
faster than biological brains and eventually would supersede biological brains.
And so I'd shown this clip to some other people
at Nvidia, and they kind of lit up and started
giving these grand soliloquies about the future that were, like,
very beautiful and articulate. And I was hoping to get
that response from Jensen. Instead, he just starts screaming
at me about how stupid
(13:53):
the clip was, how he didn't give a shit about
Arthur C. Clarke, he never read one of his books,
he didn't read science fiction, and he thought the whole
line of questioning was pedestrian, and that I was letting
him down by asking, I was wasting his time. Despite
having written his biography, Jensen remains a little bit of
a puzzle, in that I cannot tell you what's
going on inside his brain. But I will
say this: he's extremely neurotic, by which I mean, I
(14:17):
don't even mean this in a clinical sense. I just
mean that, by his own admission, he's totally driven by
negative emotions. So even though he's on top of the world,
I think his mind is telling him constantly: you're
going to fail, this is a temporary thing, Nvidia is
going to go back down again. You know, twice in
his tenure as CEO, Nvidia's stock price has retreated
by almost ninety percent.
Speaker 1 (14:38):
What could make that happen now? What keeps him up
at night today? What could happen today?
Speaker 3 (14:42):
Anything? Any number of things. This would not be comprehensive,
but there are three big risks. The first is just competition.
Nvidia is making so much money and everyone's seeing that,
and this attracts competition in the same manner that chum
attracts sharks. Right, it's like throwing blood in the water
for other microchip designers, to earn a seventy percent, eighty
percent gross margin, which is what they do on
Speaker 1 (15:03):
Some of these chips.
Speaker 3 (15:04):
So Google has built a whole alternative stack for AI
computing around their own kind of platform, and
they're starting to lease that out to new customers. That's
a big risk. There's a big risk that Chinese companies
build alternative, cheaper stacks to what Nvidia does. Intel had
ninety, ninety-five percent of the CPU market at one
point in this country. Now they're falling apart. Conquering one
(15:27):
cycle in microchips is no guarantee that you will conquer
the next one, and history demonstrates that quite clearly. So
that could happen. The second thing: basically, what happens in the data
center is we're doing a mathematical operation called matrix multiplication,
and it's extremely computationally expensive to do this. So without
(15:48):
getting too technical, basically, to train an AI right now,
we have to do ten trillion trillion individual computations, which
is more than the number of observable stars in the universe. However,
maybe it's possible that we find some more efficient way
of doing that. Maybe there's a way that requires only
ten billion trillion, or even ten hundred trillion, right? And
(16:10):
Nvidia's stock price would go down, because we wouldn't have
to build so many data centers. Right, we'd have a
more efficient training solution. All of this is a more
complex way of saying maybe there's a technological solution, where,
you know, right now we're brute-forcing our way
to AI. It's a heavy industrial problem. We're talking about
building nuclear power plants to bring these things online. I
(16:31):
think maybe it's possible that there's a technological solution that
trains these things faster, and if we discovered it, we
wouldn't have to buy so many Nvidia microchips. That
would also make their stock price go down. But the
third thing is, basically, right now, for the last thirteen
or fourteen years, the more microchips we stuff into the barn, okay,
(16:52):
the more microchips we throw at this problem, the better.
Speaker 1 (16:55):
AI we get. This is the scaling law, 'law' in quotes.
Speaker 3 (16:59):
Okay, it is not a law of the universe that this
has to happen. It's not some immutable, physically proven thing
from first principles of physics that the more microchips we have,
the better AI we have. In fact, no one is
entirely sure why this works. Presumably, like most other forces
in the universe, this will hit some kind of S-curve.
(17:20):
It'll start to plateau or level off at some point.
We're not there yet. But if we did hit a plateau,
if stuffing more microchips into the barn only resulted in
marginally better AI or didn't improve it at all, I
think Nvidia's stock price would go down a lot,
and I think it would make this whole era
look kind of like a bubble, if that were to happen.
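For a rough sense of the arithmetic behind that answer: "ten trillion trillion" computations is about 1e25 operations, and the sketch below shows why the number of chips in the barn sets the training time, and why a more efficient training method (fewer required operations) would mean fewer chips sold. The per-chip throughput and cluster size are illustrative assumptions, not figures from the interview.

```python
# Back-of-envelope arithmetic for the training-cost figures discussed above.
# total_ops matches the "ten trillion trillion" computations Witt cites;
# the chip throughput and cluster size are illustrative assumptions only.
total_ops = 10e24              # ten trillion trillion individual computations (1e25)
ops_per_chip_per_sec = 1e15    # assumed sustained throughput of one accelerator
chips = 100_000                # assumed size of one large training cluster

seconds = total_ops / (ops_per_chip_per_sec * chips)
print(f"about {seconds / 86400:.1f} days of nonstop matrix multiplication")

# If a better method needed only a tenth of the operations, the same cluster
# would finish ten times sooner, or you could buy a tenth as many chips.
print(f"with 10x efficiency: about {seconds / 10 / 86400:.2f} days")
```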
Speaker 1 (17:40):
Now, is this why Nvidia has kind of become
the bank of the AI revolution? In a sense, they're
wanting to lend money and lock other companies into the
current paradigm of AI, maybe even hoping to defensively prevent
other, more economical approaches from emerging, and consolidating Nvidia's position.
I mean, how much of a chess game is this
(18:01):
in terms of thinking about the future of computing, for
Jensen and others?
Speaker 3 (18:04):
Oh yeah, it's chess, and Jensen is an expert
chess player at this kind of chess. He's really good
at thinking about the competitive positioning of where he is
and where other people are. You know, in Nvidia's early days,
the GPU market, back in the video game days, was
really crowded. At one point there were fifty or sixty
participants in this market. I talked to David Kirk, who
(18:25):
was the chief scientist at Nvidia during this time. Jensen
would go into his office, and on a whiteboard he
would have a list of all his competitors up there.
And not only that, they would have a list of
who the best engineers working at those competitors were, and
then they would come up with plans to poach those
engineers and get them to come work for Nvidia
(18:45):
so that they would drain the brain power of their
competitors and force them to collapse. I've compared the early
graphics days to the movie Battle Royale where all the
kids are on the island and they have to kill
each other. It was like that: there were like forty
competitors, and only one could survive.
Speaker 1 (19:00):
And he won?
Speaker 3 (19:01):
He won the Battle Royale. He was the last guy standing.
I mean he won the knife fight. So he is
unbelievably ruthless and unbelievably good at identifying where the competition
is and what he could do, not just to beat
them in the marketplace, but actually to hollow out their
engineering talent.
Speaker 1 (19:18):
Who are Nvidia's biggest customers, and what are they
buying the chips for?
Speaker 3 (19:23):
Okay, it's a bit complex. The biggest customers, they don't
disclose it. Almost certainly it's Microsoft and then probably Amazon.
What these companies do is they train some AI on
their own, but what they're really doing is, they're
the ones building the sheds, they're the ones building the
data centers. So Nvidia sells them the microchips,
and then kind of the ultimate end user is a
(19:44):
frontier AI lab, so that could be something like Anthropic
or OpenAI. So essentially the way to think about
this is: Nvidia sells the microchips to Microsoft or
Amazon or maybe Oracle. Oracle builds and operates a gigantic
data center with one hundred thousand microchips in it that
takes as much power as, like, a small city, and
(20:05):
then clients like OpenAI come and lease it out from
Speaker 1 (20:08):
them. After the break: why data centers are worried about
break-ins. Stay with us. Let's talk about data centers.
(20:40):
There's something weird about data centers, because on the one hand,
they are literally the most boring thing in the world,
and on the other hand, they are unbelievably fascinating. I mean,
your article mentions James Bond-style security
consultants defending data centers. Like, how do you explain what
is going on here?
Speaker 3 (20:59):
Okay. So basically, and this is kind of the most
amazing thing you can imagine: this giant barn, racks of
computers as far as the eye can see. What those
computers are doing is processing the training data for the
actual file of AI. And that file usually contains,
let's say, a trillion is a guess, but let's say like
(21:21):
a trillion weights, a trillion neurons.
Speaker 1 (21:23):
Okay.
Speaker 3 (21:24):
Well, we can store a trillion neurons on a small external hard
drive; like, you can store them on something this big, the size of
a candy bar. Okay. So, at least in theory, if
somebody were to break into a data center and extract
the information on that little file, they would basically own
ChatGPT six. They would own all of OpenAI's
IP, if they could just break it out of the
(21:44):
data center. And this is actually a real concern, probably
not so much from petty thieves, but from, like, state-sponsored
actors. Like maybe China wants to know what's on
OpenAI's equipment before it launches, right? Like, it's kind
of almost like a corporate espionage problem. And so a
couple things happened in response. First of all, the data
center operators do not want to tell you where these
things are even located.
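The candy-bar claim is easy to sanity-check with rough arithmetic. The parameter count and the two-bytes-per-weight precision in this sketch are illustrative assumptions, not numbers from the interview, but they show why a frontier model's weights fit on something you could carry out of a building.

```python
# Rough arithmetic behind the "fits on a small external hard drive" claim.
# Both numbers below are illustrative assumptions, not figures from the interview.
weights = 1_000_000_000_000      # on the order of a trillion parameters
bytes_per_weight = 2             # a common 16-bit floating-point format

terabytes = weights * bytes_per_weight / 1e12
print(f"roughly {terabytes:.0f} TB of weights")   # ~2 TB: consumer-drive territory
```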
Speaker 1 (22:05):
So that's the anonymity. But they're huge. I mean, how well can
you hide them?
Speaker 4 (22:09):
Well, they're huge, but they're also extremely boring, so they
just look like a giant industrial warehouse, and often there's
no way to distinguish it from the next giant industrial warehouse,
Like, are they moving pallets of shoes around in there?
Speaker 3 (22:23):
Or is it a data center? I don't even really
know. Now, I think if you had a trained eye
and knew what electrical equipment to look for, you would
see it. But it's more just kind of like keeping
it all a big secret. You're right, some of them
are getting so big that there's no hiding this, but
still they don't let you know that they're data centers,
and they look boring as hell. They're grayscale buildings,
you know, literal sheds.
Speaker 1 (22:43):
I know you can't say where it was, but you
did get to go to the Microsoft data center. Like,
describe arriving there: what it looked like, what it smelt
like, who was there? I mean, really take us into the scene.
Speaker 3 (22:54):
There's a campus. It is like a giant plot of land.
I will say it was in the middle of nowhere
that they had just taken over and were building into
this massive data center. And it was in an agricultural community.
In fact, directly across the street from this data center
was a dilapidated shed with rusted cars in the driveway,
stray dogs wandering around, and cans of Modelo, like, littering
(23:17):
the yard. And then it's slowly being taken over by
these giant computing barns, not just Microsoft, but everywhere you look,
and there's redundant one hundred foot power lines everywhere, right,
So it just looked like, you know, all the farmers
were being kicked out, and it looked like an invasion by aliens.
So you go in. There's multiple security checkpoints. I think
there were three vehicle checkpoints I had to go through
(23:38):
to get to kind of the heart of the data center.
Then you go in and it's Microsoft, so you have
to sign fifteen NDAs and watch a PowerPoint and put
on all the safety equipment and then you're inside. Now,
inside is a little underwhelming. It's a giant concrete barn,
just full of repeated racks of equipment as far as
the eye can see. It's not necessarily inspiring of poetry
or anything. It feels like being inside of an industrial process,
(24:01):
which it is, and not a very beautiful one either.
There's cable everywhere, pipes for water and air, cables for electricity,
cables for transporting data around, and then there's repeated power banks.
There's batteries, there's power stations, there's industrial HVAC systems, and
all of this is to just keep the microchips running
twenty-four seven, to keep the AI processing running.
(24:22):
I did ultimately kind of sweet talk my way into
the control room, which I wasn't supposed to be in initially,
so that was kind of cool. And the guy in
the control room showed me what was happening, and it's
just this power spike of the power going up and
the power going down, and the power going up and the
power going down. When the power was going up, the
microships were kind of like moving all at once to
do a bunch of matrix multiplications. And then when the
(24:42):
power went down, they were writing the results to file.
And this happened over and over, and somewhere in that
data center there was that tiny little file of numbers,
that tiny little collection of synthetic neurons, and with every
pulse it just got a little bit smarter.
Speaker 1 (24:57):
Did the pulse make you think of life?
Speaker 3 (25:02):
Yes and no. They're calling these things neurons, right? So these systems,
while they are inspired by biology, don't necessarily work in
the same way as biology. Still, it's certainly inspired by
the brain, and it seems to have emergent capabilities, like
emergent biological capabilities, kind of like a human brain. I'll
tell you a fascinating story. I was talking to the
(25:24):
product head, the original product head for ChatGPT, who
launched it. And he was like, yeah, we put it
up and we just kind of walked away. We didn't
think it would be that popular. And the first place
that really started directing traffic to ChatGPT was a
Reddit board in Japan. He was like, this came as
a great surprise to me, because I had no idea
(25:44):
it could speak Japanese. That was something it had learned,
and empirically, one of the reasons we put it out
there was to test what it could do. So it
came as a surprise to us that this thing could
speak Japanese well enough to attract a large and in
fact ravenous Japanese user base. And so when you
train these things, you actually don't know what they can
do at the end. It's often a surprise to you,
(26:06):
even the creators. But there is this life/not-life
thread throughout your piece. I mean, you mentioned being kind
of desperate for human contact while being led through these
data centers, and you mentioned one of the data center
founders from CoreWeave talking about wanting to hire people
who can endure a lot of pain. What is this
(26:26):
pain, brutality, inhuman sort of set of ideas around
data centers? Yeah, it's a lot like working in a
printing press. It's a heavy industry. It's extremely loud inside
the data center, especially CoreWeave's. I mean, I couldn't
hear myself think. Actually, if you work for a long
time in a data center, you have to wear both earplugs and
then, over that, a set of protective cans. So you've
got to do kind of like two kinds of ear protection.
(26:48):
And even then, long-term tinnitus can be a risk.
And also you can electrocute yourself. There's very high voltage
electric equipment running through there. It's just not an easy
place to work. Not only that, when Nvidia rolls out
a new set of microchips, it is a scramble to
put them online. Every second that you don't have them
(27:10):
up for customers available to use, it's costing you money.
So the tech at Microsoft I talked to told me
he'd actually gotten a deployment of Nvidia microchips on
New Year's Eve, and then spent the entire night setting
up the rig, that particular night, just to make sure
it was available for customers on New Year's Day. And
with the CoreWeave guys it was the same thing. They
were like, yeah, we were missing a particular component, and
(27:31):
it was like a forty-dollar component, but we couldn't
find it anywhere. So we had to get this thing
up and running. So we chartered a private jet to
have a guy fly the component down from Seattle,
just so we could install it in our data center
the same day. We couldn't wait even one more second. So
it's a race, you know, it's absolutely a race to
get this equipment online, because demand for AI training is
(27:51):
just insane. It's through the roof. It's four or
five years of demand, pent up.
Speaker 1 (27:55):
And a race to where? I mean, do you
think that we are in the midst of architecting the
future of humanity? Or is this one of the world's
great boondoggles, a tremendous financial cost, and an energy cost
to communities and to the world's environment?
Speaker 3 (28:14):
It's not a boondoggle. This is not NFTs, right? This
is not some stupid bubble based
on nothing. Even if this goes down financially, what has
been achieved here from a technological perspective is extraordinary, and
they keep getting better. I think maybe it's moving so
fast that the public just doesn't have a sense of
how much these things are improving, and how fast. Now
(28:36):
having said that, yes, it can all flop, but the
core technological innovation here is real and it's going to
transform society.
Speaker 1 (28:43):
Okay, but bridges aren't a boondoggle; bridges to nowhere
are a boondoggle, right, I think.
Speaker 3 (28:48):
Okay. So yes, some bridges to nowhere are going
to get built, and in fact, some bridges to nowhere
have been built. Not everyone has OpenAI's programming talent,
all right? And so if you attempt to build a
world-class AI and you don't have the juice,
you just end up producing a very expensive piece
of vaporware and squandering a lot of money. That has
(29:08):
happened multiple times already, and it will probably continue to happen.
So that's a boondoggle. Still, having said that, where's all
this heading? You know, maybe we're gonna make ourselves redundant.
I don't know. It seems like we could if we
wanted to. Maybe we won't, but we could do that,
and that's a little scary.
Speaker 1 (29:27):
You wrote a recent piece for The New York Times
with the headline, the AI prompt that could end the world.
What's the AI prompt that would end the world?
Speaker 3 (29:35):
The AI prompt that will end the world is this:
someone gets hold of a machine that has agency functions, okay,
so it can, like, take real-world actions, and they
say to it, do anything you can to avoid being
turned off; this is your only imperative. If you gave
that prompt to the wrong machine, it's kind of hard
to say what it would do, but it might start
(29:56):
to secure its own power facilities so that it could
not be turned off. Or it might start to blackmail
or coerce humans to stop them from turning it off, or
maybe even attack humans that were attempting to turn it off. Now,
it wouldn't do this with the right training; we could
kind of, like, program it not to do this, but
it's hard to know if we're even training it correctly.
Remember what I said: they didn't know it could speak Japanese.
(30:18):
That was a surprise to them. So these things can
have capabilities that the designers are not aware of and
which are only discovered empirically. That's very scary if we're
giving these things access, as we plan to do, to
control real-world systems, and we don't really know what
they're capable of. This is called prompt engineering. It's kind
of an emergent area of science, almost, because nobody really
(30:38):
knows how these things respond to prompts. It's completely empirical,
and I think with respect to these particular prompts, what
you're most afraid of is that somehow, even inadvertently, you
introduce a survival instinct into the machine.
Speaker 1 (30:53):
We're already seeing that, aren't we? I mean.
Speaker 3 (30:55):
Kind of, but the machine does not
have a survival instinct in the way that you
and I do. Right? It's not the product of five
hundred million plus years of kill-or-be-killed Darwinian evolution. Right?
Like, we will live one way or another. Our species
will fight to the death and kill anything we have
to in order to survive. And that's every species on this planet. It's
(31:16):
all in there. It's a struggle. It's a struggle to.
Speaker 1 (31:18):
The death on Earth.
Speaker 3 (31:19):
You know, the machine isn't trained in that way. It
doesn't have that survival impulse. It didn't survive multiple extinction
level events, it doesn't sexually reproduce, it's not interested
in the welfare of its children, et cetera. If that
makes sense. But you could inadvertently maybe give it some
of these capabilities, and if you did, it might be unstoppable.
Speaker 1 (31:39):
Yeah. I think one of the other interesting things that
came across in your piece is that historically we've thought
about humans and animals on one side, and on the other
side synthetic stuff like computers, and it's not so much
that synthetic stuff like computers has to become more lifelike,
or internalize some kind of survival drive or reproduction drive.
(32:01):
But the computers can now meaningfully intrude upon and interfere
with the biological side, and in particular when it comes
to synthesizing new viruses.
Speaker 3 (32:13):
That's right, the AI has the capability, at least in theory,
and especially it will have this capability in spades in
years to come, to synthesize a lethal virus. Right, to
synthesize a lethal pathogen like super COVID, right, COVID with
like a ninety-nine percent death rate. It could do
that, if it wanted to, better than a human could. Okay,
if this fell into the wrong hands, somebody with an
if this felut on the wrong hand, somebody was an
(32:34):
apocalyptic mindset, at least in theory, something like this could
be built.
Speaker 1 (32:38):
Now.
Speaker 3 (32:38):
The designers are very aware of this risk, and in fact,
in some ways this is like the risk that they
were most afraid of to begin with. To prevent them
from doing this, they do a lot of fine tuning
as a second round, but inside the machine that capability
is still there. They never completely eliminate it. They just
kind of make it difficult for people to make those
requests of the AI, and they flag them when people do.
(33:01):
This creates a fear of what might be called, like,
a lab-leak scenario. Before the AI is made public, internally,
the developers are building it, right, and that AI will
do anything they ask it to. And so in theory,
if you got access to one of those pre-production
AIs and asked it to do gnarly stuff like synthesize viruses,
and attached it to some kind of agency model, like, yeah,
(33:22):
you could, you could reenact The Stand, right, if
you wanted to.
Speaker 1 (33:25):
Did you hear any mitigation strategies that gave you comfort?
Speaker 3 (33:29):
No.
Speaker 1 (33:30):
No. I mean, what's
Speaker 3 (33:31):
Happening now is a race condition. It's like the nuclear
arms race. Nobody can slow down, no matter what they say.
They just have to keep building bigger and bigger and
better and better systems. The fear among the people who
would regulate AI there's functionally no regulation at all, is
that we can't regulate it because then China will pull
into the lead. And actually that that fear is basically accurate.
So you have something that resembles arms race conditions, both
(33:53):
among the frontier labs themselves as they compete to when
what I have described is probably the single greatest prize
in the history capitalism. If you could get dominant status
with chat GPT where everyone was on it, that would
be worth so much money, probably more than a videos worth.
And then also, you can't lose to China. You can't
have China have better AI than the US. That's kind
of the mindset of US lawmakers right now. It's probably true.
(34:17):
So we're in a dangerous race to build ever more
capable systems with less and less oversight. And I don't
perceive how we would stop. I think what will have
to happen is that some kind of big accident will
have to happen before people wake up to the danger.
Speaker 1 (34:32):
On that happy note, Stephen Witt, thank you.
Speaker 3 (34:35):
Let me say this too. There are a lot of very
positive outcomes here. There is a path where this just
turbocharges. Already, I have mostly experienced positive outcomes from AI.
I'm worried it's making me dumber, I must say; maybe
it's making me a worse writer and a worse thinker.
But it's an extraordinarily good resource for doing, like, fact
checking for The New Yorker, for example. You know, a
few years ago they hallucinated and you couldn't trust them.
(34:57):
But now you ask the AI to go, like, dig
up sources on the web, and it's really good at it.
It's better than Google, way better. It saves me a
ton of time. So I think this, self-driving cars,
all this stuff, medicine. AI pioneer Demis Hassabis believes we're
going to cure every disease with AI. Maybe it's true.
The capabilities are there. Of course, if you have the
capability to cure every disease, you also have the capability
(35:18):
to synthesize new and scary stuff. But if we can control it,
if we can bring it under control and use it
to create positive outcomes for humanity, we could be entering
an age of prosperity and wonder. It's possible.
Speaker 1 (35:30):
Well, thank you so much. Thank you for having me.
(35:55):
That's it for this week of Tech Stuff.
Speaker 2 (35:56):
I'm Cara Price, and
Speaker 1 (35:58):
I'm Oz Woloshyn. This episode was produced by Eliza
Dennis, Tyler Hill, and Melissa Slaughter. It was executive produced
by me, Cara Price, Julia Nutter, and Kate Osborne for
Kaleidoscope, and Katrina Norvell for iHeart Podcasts. Jack Insley mixed
this episode. Kyle Murdoch wrote our theme song.
Speaker 2 (36:15):
Join us on Friday for The Week in Tech, where we'll
run through the headlines you need to follow.
Speaker 1 (36:20):
And please do rate and review the show, and reach
out to us at tech Stuff podcast at gmail dot com.
We want to hear from you.