Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Cool Zone Media. Hello and welcome to Better Offline. I'm your
host, Ed Zitron. As ever, remember you can buy
Better Offline merchandise, links are in the episode notes. Today, I
(00:23):
am joined by Karen Hao, the author of the upcoming
book Empire of AI, which tells the story of OpenAI
and the arms race surrounding large language models. Karen,
thank you for joining me.
Speaker 2 (00:32):
Thank you so much for having me, Ed.
Speaker 1 (00:35):
So you describe the progress of these models and these
companies as a kind of colonialism. Can you get into
that for me?
Speaker 2 (00:43):
Yeah, So, if you think about the way that empires
of old operated during the very long history of European colonialism,
they were essentially taking resources that were not their own,
exploiting massive amounts of labor as in not paying them
or paying them extremely small amounts of money, and they
were doing this all under a civilizing mission, this idea
(01:05):
that they were bringing modernity and progress to all of humanity,
when in fact what was actually happening was they were
just fortifying themselves and the empire, and the people at
the top of the empire, and everyone else that kind
of lived in the world had to live in the thrall of what the people at the top decided, based on their whims, for what was part of their self-serving agenda. And that's essentially what we're seeing with empires
(01:28):
of AI today, where they are taking data that is
not their own, they're laying claim to it. They're taking land,
they're taking energy, they're taking water. They are exploiting massive
amounts of labor, both labor that goes into the inputs
for developing these AI models, but also exploiting labor in
the sense that they are ultimately creating labor automating technologies
(01:49):
that are eroding away people's labor rights as we speak, and they're doing it under this civilizing mission, that they are doing it for the benefit of all of humanity.
And what I say in the book is, empires of AI are not as overtly violent as empires of old.
And so maybe that can become confusing and people think, oh, well,
(02:09):
it can't be that bad. But the thing is, we've
had one hundred and fifty years of social and moral progress,
and so modern-day empires are going to look different from the way that empires of old operated. And
when you look just at like the actual parallels, there
are just so many extraordinary parallels between the kind of
(02:31):
basis of empire building back then and now that I
think it is fundamentally the only frame that I have
found to really help understand and grapple with the sheer scope and scale of what is actually happening here within the AI industry.
Speaker 1 (02:49):
One theme from the book I also noticed was that
despite all of the backs and forths between all the people,
very rarely did product come out. Like, it was interesting, there seemed to be all these conversations about research and all of these things they were saying, but it usually just ended with like some sort of release and then they kind of just moved on. Yeah, it almost
(03:10):
makes me wonder what they're working on half the time.
Speaker 2 (03:15):
Yeah, you know, I think it's a product of two different things, that you noticed that in the book.
One is that I finished writing the book before a
lot of the most recent product releases came out, Right,
That's just the nature of writing things on the timescale of.
Speaker 1 (03:31):
Books. It's not fun.
Speaker 2 (03:34):
Yeah. I froze the manuscript in like the early days
of January, right before DeepSeek, right before Stargate, right
before you know, a string of other releases. So that's
one, is that through most of OpenAI's history, it was really more focused on research conversations,
(03:56):
and it's only been in the last year or so
that it's really dramatically shifted much more to talking about product.
But the second reason is that I personally like that
is my expertise. I came up in AI reporting covering
the research, and so I wanted to focus on that
in the book and really unpack it, especially because there's
(04:18):
not as much reporting on the research these days, and
I wanted to kind of track that history and the
internal conversations that happened when people say that they're developing
so-called AGI.
Speaker 1 (04:30):
You talk about in the book that in twenty nineteen your rose-colored glasses got knocked off by a story.
What was it that really made you start being suspicious
of these companies?
Speaker 2 (04:41):
Yeah. So twenty nineteen was when I started covering OpenAI, and I embedded within the company for three
days in August of twenty nineteen to profile what had
then become a newly minted capped profit nested in a nonprofit.
And I think the thing that really started tipping me off was actually really small things. Initially,
(05:05):
the first thing was they publicly professed to be this
bastion of transparency and they were going to share all
their research with the world, and they had accumulated a significant amount of goodwill on the basis of this idea, and they were raising, not literally fundraising, but they had amassed a lot of capital on the basis of this idea.
(05:26):
And when I started embedding within the company, I realized
that they were incredibly secretive. They wouldn't allow me to see anything or talk to anyone
beyond very strictly sanctioned conversations. And even in those conversations,
I would notice that researchers were giving side eye to
the communications officer every other sentence because they were worried
(05:49):
about stepping into a lane that was considered proprietary. And
I was like, wait a minute, why are there things
that are proprietary, and why are people being secretive if all this is supposed to ultimately be shared with the public? But the other thing was, when I was
talking with executives, Like the very first interview that I
had was with Greg Brockman and Ilya Sutskever, the CTO
(06:12):
and chief scientist. And I just asked them very basic questions,
like why do you think we should spend this much
money on AGI and not on something else? And can you articulate for me, what does AGI look like? What would you even want AGI to do? And can you
articulate for me? You know, part of their origin story
(06:32):
as a company was they wanted to build AGI, good AGI first, before the bad people built bad AGI. So I was like, well, what would bad AGI look like
as well? Or like what are the harms that are
coming out of some of this rapid AI progress? And
they weren't able to answer any of those questions. And
(06:53):
that was when I thought, hold on a second, Like
I thought that this was a nonprofit meant to counter some of the ills of Silicon Valley, one of the ills being that most companies end up being thrown boatloads of cash without clearly articulated ideas about
what they're going to do with that cash. And here
(07:15):
I am in this meeting room trying to just ask
the most basic question, like the most boilerplate stuff that there should be some kind of answer to, and they can't even answer that. So it seems like
it is actually very much just an animal of Silicon Valley.
This is not actually something different from what we're seeing
(07:36):
with the rest of the tech industry.
Speaker 1 (07:38):
It felt as well, there was a comment, and forgive me for forgetting exactly where it was, where it was like all the secrets could be written on a grain of rice or something like that. Yeah, and I have to admit, as I read it, I got this weird feeling, like, does anyone actually have any IP? Because when you actually look at the conversations they're having, and you're likely privy
(07:59):
to more here, it felt like they wouldn't talk about what they were doing at all. And I say this as someone who's written a lot about the Valley. It feels like they'd say more, but no one wanted to say anything, not even secrets. It's like nobody really knew.
And you even described some of the managerial stuff in there,
like no one really knew what was going on anyway.
(08:19):
It just feels like a remarkably disorganized company considering the scale.
Speaker 2 (08:25):
Yeah, so I think early on OpenAI was completely disorganized in the sense that they had no idea. You know, they decided, okay, we're going to build this AGI thing, but then they were like, what does that even mean? We have no idea. And there weren't real managers at the company either,
because they had just gathered up a bunch of researchers
(08:46):
from academia, and they didn't really have much of a
sense of how to organize themselves other than a traditional
academic lab where there's a professor and grad students, and
I mean academia, you know, as it's has its function,
But that ultimately wasn't the right structure for trying to
(09:06):
move a group of people towards a similar goal. And
over time, OpenAI did start cleaning itself up a little bit. It did start restructuring itself. It started focusing more on GPT models because they hit on that around twenty eighteen, twenty nineteen. But similarly, still, just because there's no clarity about its mission and ultimately what
(09:31):
it is trying to build, you end up with just
a lot of rifts within the organization over this very fundamental question. People fundamentally disagree about what OpenAI is. They disagree about what AGI is, they disagree about what
it means to ensure that something should benefit all of humanity.
And I think because there was all this confusion or
(09:56):
there were all these different interpretations ultimately of these like
basic tenets of the organization, I think people also just wouldn't quite clearly articulate to one another
what they were doing. It wasn't necessarily that they were
trying to be secretive to one another. It was more
just that they weren't really on the same page. And
(10:18):
this eventually became sort of less and less true in
the sense that as Sam Altman installed himself as CEO and started really exerting a particular type of path for AI progress, then they started having, you know, research documents that explicitly articulated, we are a scaling lab, we are going after scale.
Speaker 1 (10:41):
How long did it take them to put those documents
together though? What year about?
Speaker 2 (10:47):
I think their first research roadmap was in twenty seventeen,
so it was one and a half years or so into the nonprofit. Yeah, yeah.
Speaker 1 (10:56):
So I will admit there is another colonial thing that
stood out. Well, two specifically. One, it definitely feels that there are a lot of unintelligent cousin types who were put in there because their mate was there, throughout the company. But two, it's this kind of religious view around AGI, this kind of nebulous justification for just about everything. I
(11:18):
was disappointed, and I understand why you mentioned it, that Yudkowsky was in there, I think the LessWrong people. This is a personal belief, just no need to mention them again for anyone. I think that Yudkowsky, anyone who writes a six hundred thousand word Harry Potter book, should be put in prison, including J.K. Rowling. But it feels
like there really is this belief system that's pushed throughout
(11:38):
this industry, which mirrors colonialism, mirrors the very Judeo-Christian push of the British and many other colonial entities.
Speaker 2 (11:47):
Yeah, absolutely. So one of the things that I was most surprised by when reporting the book is I had seen all the divisions around boomers and doomers, people saying, hey, AI can bring us to utopia, people saying AI can
kill us all. I really did think initially that it
was just rhetoric and that it was just a tool
for accruing more power. And the thing that surprised me
(12:11):
most was how many people I met that genuinely deeply
believed in both, especially the doomer ideology. Like, I was
interviewing people whose voices were quivering because they were talking
about their anxiety around the potential end of the world,
and that was a very sincere reaction. And I think
(12:31):
that is part of it. You're exactly right that it is a huge parallel with empire building in the past, that
empires need to have an ideology that convinces themselves why
they are ultimately doing something that is for the benefit
of the world. So in the past, when they had
the civilizing mission, we're bringing this to the world. It
(12:52):
also wasn't rhetoric. It was also a deep-seated religious
and spiritual and scientific belief that they were doing something
that was better off for everyone.
Speaker 1 (13:04):
I mean, the origins of the BBC in England were
religious indoctrination on some level. I admit I'm surprised to hear the quivering voices stuff, I think because, again, this is a personal opinion, Yudkowsky, I think, is full of it. I think a lot of those LessWrong guys are full of shit. I don't think they're doing it for the bit. But it's the
(13:27):
same kind of horse trading shit that people do around anything.
It's like, we don't have anything to believe in, so
let's all agree on this. But it's interesting to hear
that people are, I don't know how to put this,
actually believing this crap even though it doesn't feel like
there's any real evidence. You know.
Speaker 2 (13:46):
Yeah, well, I think the analogy that I've started using
is I really feel like OpenAI is Dune, where, you know, in Dune, there is a mythology that is created by a certain group of people with full understanding that they're creating a mythology, right, right. But then as they start
to embody and act out this mythology, not only do
(14:11):
many many people who didn't know that it was originally
created come to believe it, also the people who created
it come to believe it themselves. And I think this
is essentially exactly what is happening within AI with the ideologies,
is that maybe there was at some point someone who
was more aware that there was some kind of rhetorical
(14:36):
trick that they were playing around, really propagating this kind
of belief, but it is not We're not at that
point anymore. Like, there are lots and lots of people
who genuinely believe these things, and I think it's self
perpetuating because when you believe it, you look for signs
of it, and you research things that would suggest more
(14:58):
evidence for your belief and so they're kind of continuing
to reinforce their beliefs. And the more these A models
have progressed, the stronger these beliefs have become. Because whether
you believe AI will bring utopia or dystopia, there is
an abundance of evidence that you can point to now
to reinforce your own yeah, exactly, to reinforce your own
(15:21):
starting point. And so it's sort of like a microcosm
of society today where, you know, the average person no longer encounters information that can change their mind. It
just continues to entrench whatever they already believed before.
Speaker 1 (15:37):
Do you believe Sam Altman believes this shite? Do you
think he believes in the AGI? Is he part of it?
Speaker 2 (15:44):
It's really interesting because no matter who I interviewed, and no matter how long they worked with Sam Altman or how closely they worked with Sam Altman, not a single person was able to
fully articulate what his beliefs are. And I think that
is very much by design.
Speaker 1 (16:04):
In a way, it's beautiful.
Speaker 2 (16:07):
Yeah, and they would explicitly say this too. They would call out, I'm not actually sure what he believes. And this was
the most consistent thing that people said about him.
Speaker 1 (16:34):
I really noticed as well, if you read your book and you really look, you actually can't get much of an idea of who Sam Altman is at all. And in fact, you can't work out why he's brilliant at all. And I've read a lot of stuff about Sam Altman. The long and short of it, as far as I can understand, is that he's got good psychology and he's really charming. Everyone
(16:54):
talks about the psychology and the charm, and it's just really, it is so bizarre. He's such a bizarre man, like everything about him. The way that people talk about him is so strange.
Speaker 2 (17:08):
Yeah, So this is what I sort of concluded about
why he's able to pull off what he does. He
is a once-in-a-generation talent when it comes
to storytelling, and he has a loose relationship with the truth, yes,
which is a really powerful combination. And so when he's
talking to someone, he shines most when he's talking to
(17:32):
small groups of people in one-on-one meetings, and
what he says is more tightly correlated with what that
person needs to hear rather than what he believes, which
is part of the reason why people say, ultimately like
they don't really know what he believes because he doesn't
really indicate it. And so I think that is what
(17:54):
makes him incredibly persuasive. And he is really good at
understanding people and what they want, and
you know, he's well resourced, so he's able to then
deliver to them what they need and want. And what
I realized is with that kind of talent, you would
inherently be incredibly polarizing as a figure, because if the
(18:19):
person agrees with you, you're the best asset in the
world for what they want to achieve. You're incredibly persuasive, you're able to get the resources, you can do exactly what that person wants you to do. But if you disagree with this person,
that person becomes the greatest threat ever because they are
(18:42):
so persuasive. You have fear that they're going to be
able to carry out exactly what you don't want them
to carry out. And so that kind of boils down
to why he's just such an enigmatic and extremely polarizing person.
It really depends on whether or not someone agrees or disagrees with him.
Speaker 1 (19:00):
He also doesn't seem that smart. I don't know, he
seems quite good at talking to people, but when I
hear him talk, he doesn't seem that eloquent. And it
makes me wonder if perhaps Sam Altman is a symptom of a greater problem, that so much of our power structure and money is based on someone making decisions based on the last intelligent person or intelligent-seeming person they
talk to.
Speaker 2 (19:21):
Yeah, I mean, I think our society is also just, we still worship people that are wealthy. And so even if he's not
saying something that is convincing you in real time, he
has all of the kind of indicators that this person
(19:43):
has been remarkably successful and you should listen to what
he says because then that will make you successful too, right,
And so I think that is part of the kind of mythos around him, is that if
you can join up with him, it will greatly enrich you.
And you know, like, there's a lot of evidence to
suggest that too, that like, there have been plenty of
(20:05):
people that have allied themselves with Sam Altman and that have become much richer for it. And so whether or not people are joining up with him because they necessarily one hundred percent agree with his ideology or his actions or anything like that, or if it's
more because ultimately they get to benefit from that alliance,
(20:27):
I think is, yeah.
Speaker 1 (20:29):
What it most feels like is people connecting with other people to see how far they can get, far more than AI.
Because one of the other things I really noticed when
you were telling the story of the firing, Sam Altman getting fired in November twenty twenty three, as much as people wanted to pretend, they kept bringing up the tender offer. And to explain for the listeners, the tender offer was that
(20:50):
OpenAI had plans to let people sell their stock. It really felt like
that was more the primary concern than any loyalty to Altman.
Speaker 2 (21:00):
It was. I wouldn't say it was the primary concern,
but I mean, yeah, it really depended on who I was—
Speaker 1 (21:06):
Talking to?
Speaker 2 (21:07):
Talking to, yeah, exactly, like every employee sort of had
a different calculus that ultimately led them to revolt against
the board and want Altman back, And there were different
calculuses among Microsoft and investors. But one of the key
things that I think is necessary to understand, just
why there is so much seeming loyalty around Altman in general,
(21:29):
is he is very, very good at establishing relationships with money involved, where he is the linchpin to the other person accessing that money, yes. And so the tender offer is a perfect example of this, in that employees ultimately realized that Altman is just really good at fundraising, and whether or not an employee believes
(21:52):
in the AGI thing, they all agreed that OpenAI ultimately needs an enormous amount of capital, and also many of them are doing it in part because they can then guarantee their own financial future. And so with Altman gone, it became increasingly clear that OpenAI
wouldn't survive, and so that's not something that a lot
(22:12):
of employees wanted. It became clear that even if OpenAI did survive, they would be a lot more shortchanged in terms of the amount of capital that they
would be able to get because he would no longer
be their champion for that, and also the tender offer
could potentially go away and they would not be personally
enriched as well, and you know, many of them. The
(22:35):
thing to also understand is like it's very expensive to
live in the Bay Area, and so for the worker,
for the employees, in the moment, losing the tender offer
wasn't like oh no, I'm going to lose like my retirement.
It was also this sense of like, I'm literally going
to lose my financial security right now, like I already
(22:56):
tried to, I already bought, you know, a house based on that fact. I feel like you mentioned that, yeah, yeah.
Speaker 1 (23:02):
That was someone who put money.
Speaker 2 (23:04):
Yeah, who put plenty of money down, and that the
tender offer dissolving was a real financial stress. It was
a threat to their financial existence and their life.
Speaker 1 (23:16):
I imagine so. But the way it was framed in public was that this was some big loyalty thing where everyone was like, I love OpenAI and Sam Altman, and that just didn't feel like it was what was happening. People seemed angry at Ilya, but they just seemed angry because something changed, rather than—
Speaker 2 (23:36):
Yeah, yeah, I mean, I think there were certainly people within the company that did feel loyalty to Altman, and that was one of their primary motivating things. But by and large, when I was interviewing lots of employees to understand what ultimately led them to rally around Sam, there were actually more
(24:01):
practical concerns than just personal loyalty driving the thing, whether it was financial or whether it was just, I really believe in AGI and I don't want OpenAI to go away because it'll scrap all of the work that we've done. And of course, you know, the narrative, I mean, OpenAI themselves have been pushing again and again and again this idea that
(24:25):
all of the employees, or whatever, more than ninety percent of the employees ended up signing the petition, and they cite this number as just a show of solidarity and loyalty to Altman. But then, of course, if you look at the track record after the board crisis of how many people have subsequently left the company once
(24:45):
things have sort of stabilized and there isn't a
crisis situation, that is I think much more revealing of
how much loyalty people have to him.
Speaker 1 (24:55):
So tell me about Jack Clark. So Jack Clark is the, what is he, he's one of the co-founders of Anthropic now. Yeah, without putting you on the spot, it kind of feels like Jack Clark has got off a little easy, with everyone not even saying, you're one of the few people. Jack Clark worked at The Register, which is an extremely critical IT publication, and now he's out at conferences saying that AI agents will
(25:18):
control everything. He just feels like one of the weirdest
characters in this whole story.
Speaker 2 (25:26):
Yeah, yeah, it's interesting. Like when I went to profile OpenAI in twenty nineteen, actually the first person I reached out to was Jack, because I had spoken to him before and he had until recently been playing communications head for OpenAI, and then he had shifted into a policy role. And I remember when I was at the company, I was like, hey, Jack, like,
(25:50):
do you think you can actually give me more access to seeing the things that, like, the stuff that I'd like to see? Like, I was literally asking, because they wouldn't let me go beyond the first floor. There were three floors, and the second—
Speaker 1 (26:04):
And I'm so sorry, there's computers there? It's not like they have an AI machine. Come on.
Speaker 2 (26:09):
Yeah, And I was like, hey, like, can I just
literally just go up to the second? Can you like
take me up and just like let me walk around?
And he looked at me with this like deep, deep side eye of like, no, Karen, you absolutely cannot. And I was like, that's so funny, you're a former journalist. You know how this works. Like the—
Speaker 1 (26:32):
Jack wrote in twenty fourteen for The Register about the fall of Amazon's deflationary cloud, just as Jeff Bezos did to books and CDs, Amazon's rivals are now doing it. He used to write these like very grouchy El Reg style pieces. It's just so weird.
Speaker 2 (26:49):
Yeah, I mean, I think this is, like, I've—
Speaker 1 (26:52):
But it gets back to the thing you were saying
about the kind of doctrine.
Speaker 2 (26:55):
Yeah. So, like, because I started covering this company in twenty nineteen, I talked with people then that I then talked to for the book, and I was able to sort of have this unique opportunity to track how people's individual beliefs evolve when they are steeped in this world, and there
(27:15):
were people that I was talking to back then that
were like, I don't really believe in this AGI thing,
that by the time I was talking to them for
the book were like AGI all the way, like that
this is a genuine, true belief. And I think there's
a lot of reasons for this transformation, Like one is
that you are only talking to people who believe this,
(27:35):
so you're just constantly in this environment where you're not
talking to people who are challenging or testing that belief
and instead just like continuously being reinforced in this echo chamber.
But I think there's another thing that I kind of
came to realize while reporting on the book is like
(27:55):
people who really really believe that AGI is possible, that
we will actually be able to replicate human intelligence. It's
not a belief about what AI is capable of. It
is a belief about what human intelligence is. And a
lot of people in the AI world today have this
belief that human intelligence or everything in the world is
(28:17):
inherently computable and all you have to do is just
amass more data and more compute and eventually you
will get to that thing. You will be able to
replicate that thing. And when you are in this kind
of environment where you have people constantly arguing to you
that this is why AGI is possible, because everything is computable,
(28:37):
and then you see the rapid clip of your models
being able to do more and more functions that you
know other people outside in society previously would have suggested
were not possible, it's sort of a self-reinforcing belief machine, like it just manufactures, gives you, yeah, exactly. And so
(29:00):
I think. And one of the things that I also have, just as a general realization, not just with OpenAI but in general, when I'm covering tech companies, I
kind of have a policy for myself to do a
little bit of a detox after I spend a lot
of time talking with them, because it is really like,
when you're talking with all of these people that exist
(29:20):
in this world, you do adopt their worldview, and you
do adopt their talking points, and you do see things
through their eyes. And usually I then have to like
let myself just be in the actual world for a
little bit and remind myself of what the average person
thinks and what the average person values, and remind myself
(29:43):
that there are, you know, problems beyond Silicon Valley's borders that just look fundamentally different from what they conceive the world to be. And so I did that with the OpenAI profile. I profiled Facebook years ago, and I did that with Facebook. I did that with the book, where I would interview
lots of people in like these big batches and kind
(30:05):
of really do my best to try and occupy their
shoes for a couple of weeks or a month, and then I would spend my time explicitly not interviewing OpenAI people,
just interviewing other people that were out in the world
to just like reset my brain chemistry a little bit,
because it really does feel that way. It really does
(30:26):
feel like you kind of get absorbed into this singular
world view and then you have to kind of remind
yourself of the greater reality.
Speaker 1 (30:49):
I'm gonna ask this question without getting you in too
much trouble. Do you think that's what happened with Kevin Roose? Because, I know, I don't want to put you in a situation where we have to talk ill of someone, but that interview on Hard Fork was bizarre.
Speaker 2 (31:08):
So, I've been, I think, really lucky in that I've covered
the tech industry almost always not living in SF.
Speaker 1 (31:18):
I agree, that's a great thing.
Speaker 2 (31:19):
And, you know, I've been able to figure that out in my career, and that was an explicit decision, like I did not want to live in SF anymore, I had lived in SF, I wanted to get out. And I think this is a really hard balance for any journalist, is you
(31:40):
need to decide whether you're close to your subject and immersed in their world and therefore might be co-opted
by their world, or whether you exist outside of that
world and therefore you don't have as much access. You
don't get to go to the parties where you hear
tips all the time. And that's, just, it's been a tension in my career as well, like
(32:01):
I constantly feel like I'm missing things because I'm not in SF. But the thing that I think I have gained from not being in SF is just a continued connection to the non-SF world, you know. Like, I notice when I spend too much time with SF people that my vocabulary changes, like how
(32:24):
I talk about things changes, because people in SF
talk about things in a very particular way, you know,
like they are talking about like optimization hacking, and like
they have a particular utilitarian maximization mindset around how they
do things and why, and I have to then kind
(32:47):
of step away from that and reset my language, even when I then sit down to write a story
that's for the greater public. And so yeah, so I
think this is something that's just challenging in general. Is
like it's really hard to not get too close to
your sources and to not start adopting everything that they
(33:10):
say as your own, especially if you are literally
living with them.
Speaker 1 (33:16):
And yes, in some cases, right, it could be anyone. But it does feel like there is a kind of, almost, word contagion or thought contagion with this stuff, with AGI, that it pickles certain people. They hear about the idea of the autonomous computer and it drives them mad. And everything, to your point, they start chasing it even though there's
(33:38):
not really any evidence that we can do it.
Speaker 2 (33:41):
Yeah, I mean, like when I first started covering AI, I also was so enthralled by the premise. Like, before I covered AI, when I first started covering it at MIT Technology Review, I did not realize
(34:03):
before then that AI was actually trying to recreate human intelligence.
I thought it was just, you know, I mean, it is a marketing term.
Speaker 1 (34:12):
But even then, this sounds like it might be a
definition that people argue.
Speaker 2 (34:16):
Over, right, right, right. But I mean, like in the original, when AI was coined as a term in nineteen fifty six, John McCarthy, he did explicitly coin the term both to attract funders, so as a
marketing term, and because he was trying to describe what
he wanted the field to do, which was to recreate
(34:37):
human intelligence. And that is just, it's such an evocative thing, like, to think, wait a minute, could we actually do that? And what would that mean? And there's so much philosophical, it's just a philosophical minefield. And if you are someone that loves philosophizing, you
(34:58):
can just sit there for like days and days and
days and think, holy crap, like what would that be?
Speaker 1 (35:03):
What would that look like? How can we do times columns?
Speaker 2 (35:07):
And so I really got pulled into just the kind of sheer enigma of that, and yeah, also the power of that, of, oh wow, if we could do this. Like if, you know, I imagine being in the shoes of someone who's actually doing
(35:29):
the AI research and thinking to yourself, I might be contributing to the recreation of my own intelligence, sort of, of our collective intelligence. Like, that's intoxicating, you know.
Speaker 1 (35:40):
Know, feels like philosophy marketing though, because I I just
look at this stuff and I hear about this stuff,
and I always think, okay, but what are you doing today? And I look at what they're doing today, and I say, that doesn't seem anything like that. And I understand, I
actually don't think that there's anything harmful in discussing AGI.
What pisses me off is how many people don't seem
to be discussing AGI. They discuss the ramifications on the edges.
(36:04):
Because something that Casey Kagawa, friend of the show, has brought up a number of times with me is, like, no one seems to be discussing personhood. Like if we
make a conscious computer, do we give it a Social
Security number?
Speaker 2 (36:16):
That's actually really funny because I think there are too
many people discussing personhood.
Speaker 1 (36:21):
I don't see them. Well, perhaps they're not doing it in the media, because AGI gets brought up as this vague term and then they go, ah, what do you think? Yes, could be good, could be bad, millions, billions, it sounds fucking... And it's just so bizarre, because I've been covering, I personally with AI really only started looking at it hard in twenty
(36:42):
twenty three, which is my own fault. And I've looked, and perhaps that has also colored my belief system, because I kept looking for the thing, like the stuff, the thing that everyone was freaking out about. And you look and it's like we've extrapolated from large language models that AGI will come out. But actually that kind of leads me to another question. Sam Altman's a confusing person.
(37:05):
What about Dario Amodei? What do you think, does he believe in AGI? Do you think he's a true believer?
Speaker 2 (37:14):
I do think Dario is a true believer, yeah, And
I do think that he's a true doomer as well,
like he genuinely has a lot of anxiety around the
AGI creating the end of the world, whether or not,
and also like what does it mean to be a
true believer?
Speaker 1 (37:31):
You know, like, does he believe the bollocks he's saying
because he claims that AGI will be here by twenty twenty seven or quicker.
Speaker 2 (37:38):
Yeah, So that then is when he's just wearing his
CEO hat and he needs to say something.
Speaker 1 (37:43):
When you say wearing the CEO hat, can you be a
bit more specific?
Speaker 2 (37:47):
I think Dario is an interesting case in that he has a different background than Sam. You know, Sam is a VC, or an investor, that then became the CEO of an AI company, and his skill is storytelling, right? That's what all investors do. Dario was a scientist. He studied, I think, computational neuroscience,
(38:11):
and he had a kind of deep fascination with this idea of how do you figure out how the brain works and how do you replicate it. He didn't initially call himself an AI researcher in the early days of his academic career, but he was essentially studying a lot of the things that hardcore AI researchers study, the brain,
(38:32):
computer science, all of these things. And so
I think he has this fascination and I don't know
this for sure, but I would guess that he is
of the category of people that I described that believes
that everything is fundamentally computable in the world, and human
intelligence is computable, and so he does really believe that
(38:52):
if he can figure it out, AGI will happen.
But then he has to run a company, and a
company can't just do science. And actually, one of the
things that people mentioned to me about their criticisms of
Dario when he initially ran Anthropic was that he didn't
(39:12):
care about the business at all, Like he seemed to
have no interest in anything other than the science. And
there were people within the company that were like, this
is not going to work as a company if you
cannot literally do business, if you cannot raise money, And
so I think what happened I didn't actually report this out,
(39:33):
but my guess is what happened is Dario then had
to shift to not just being a scientist but also
being a businessman, and he had to learn how to storytell,
and I think honestly he tries to.
You know, Sam Altman is a really successful storyteller and
able to accrue a lot of capital. I think Dario
(39:55):
tries to match the stories that Altman tells in order
to try and accrue the same amount of capital and
try to take capital away, maybe because ultimately they are personal arch-nemeses and Anthropic and OpenAI are competitors.
Speaker 1 (40:11):
Why do they hate each other so much? Is it just because Sam Altman doesn't like that Dario walked off?
Speaker 2 (40:19):
I don't know that Sam... I can't figure out whether Sam genuinely ever hates anyone, but people certainly hate him,
and Dario hates him for sure. I think it goes
back to this idea of do you agree or do
you disagree with Sam about something fundamental and therefore do
(40:41):
you perceive him as the greatest asset ever or the
greatest threat ever? And from in Dario's case, he fundamentally
disagreed with Sam about certain key decisions around safety AAI safety,
the Dumer, the Dumer brand of AI safety, where they
Daria was the one that decided to blow up the
(41:06):
amount of computer chips that were being used to train
a single model. So he did that. From GPT two
to GPT three, they went from a couple dozen chips
to ten thousand chips all at once to train GPT
three, and Dario wanted to do this because he wanted
to create an internal lead in order to then have
some time to do research on this model that would
(41:28):
emerge from ten thousand chips. And Altman does this thing where he will convince, he will ally with people, so he was like, oh, ten thousand chips, that's just a
brilliant idea. We should totally do that. But then once
it was done, he sort of shifted to, Okay, now
we should release it, or now we should give it
to Microsoft because we have this deal with Microsoft. We
(41:49):
need to make them happy. We need to give them some kind of really exciting deliverable to justify the first one billion dollars they gave us so that they can then give us more money. And so it was actually both Altman and Amodei together that I would credit as being responsible
(42:09):
for dramatically accelerating the AI race, because Amodei was the
one that decided we need to blow it up to
ten thousand chips, and then Altman was the one that
persuaded him, yes, you should do it because I agree
with you, and then kind of flipped to, Okay, now
we need to get this out in the world as
quickly as possible, and amoday I think feels like his
(42:34):
intelligent Like Altman, as a politically savvy person, was able
to use his intelligence against him to achieve exactly the
opposite of what he ultimately wanted, which was to slow Yeah,
to slow things down rather than accelerate it.
Speaker 1 (42:53):
This sounds like colonial Britain. It's just white guys getting angry at each other over tiny grievances from years ago. Here's a weird question. Well, first of all, do any of them seem happy in any way? Do any of them seem to enjoy anything? I ask this seriously, genuinely, they seem miserable. That is the consistent theme from all of them, Jack Clark included.
(43:16):
They all seem pissed off, scared, paranoid, weird. It's like they're being driven mad by this.
Speaker 2 (43:23):
Yeah, yeah, yeah, I think that is an entirely accurate description.
I think you cannot be not driven mad in this
world where you have convinced yourself that the stakes are
the future of humanity. You know, like, how do you
not buckle under that pressure?
Speaker 1 (43:44):
I mean, skill issue. I think I'd be fine, give me one billion dollars. But it does make me think, and Bloomberg came out with the headline just as we're recording this, that SoftBank's Stargate is hitting snags over tariffs, they can't seem to raise the money. I wonder if we're going to
(44:04):
see new levels of paranoia and anxiety with all of these
people as the AI trade starts to collapse a bit.
Speaker 2 (44:13):
Yeah, this has been an interesting theme that I've picked
up on with the way that Altman operates, is when he starts sounding incredibly optimistic in public about the future of OpenAI, the future of AGI, the future of
all these things, it means that something is going wrong.
(44:35):
Like it's become the opposite signal, because he will roll
out the most grandiose language when he needs to cover
up something that is really stressing him out. And so
we're seeing, you know, like this happening again more recently
where I mean, in the beginning of the year, he
(44:57):
had this post where he was like, we are no
no longer just building AGI, we are now on our
path to building superintelligence. Like he was sort of upping the ante, saying, okay, continue to hold on, continue to stay with the program, because we're about to supercharge, turbocharge this like ten times more. And
(45:22):
it was like at the time when OpenAI was
starting to really feel weak because it had just lost
a string of executives, including some of its most important ones, Ilya Sutskever and Mira Murati, and it was under
just a massive amount of scrutiny and it wasn't making
the clip of research progress that it needed to kind
of solve what it itself defined as the key challenges
(45:45):
to reaching AGI. And so, yeah, I think the more it sort of becomes clear that people are no longer really buying into this AI future that they've painted, the more they're going to roll out this rhetoric.
Speaker 1 (46:03):
You mentioned this because there was a tweet from April
fifteenth where he said, the OpenAI team is executing
just ridiculously well at so many things right now. The
coming months and years should be amazing. So I'm going
to guess things were bad. Yeah, yeah, I mean, like, go read Sam's tweets.
Speaker 2 (46:19):
Yeah, this was like a thing that, just consistently, every time I was reporting on things that were going really badly, sure enough Altman would roll out some really crazy statement in public. So that tweet's actually a perfect example, because he says things will be awesome
(46:40):
in the coming months and years. It's always like, hold on, stick with me, yeah, stick with me. Things might look a little bit weird now, but oh boy, just you wait for what I'm seeing inside, like you need to just have patience. You know, it's always—
Speaker 1 (46:57):
There's that May seventh picture: it's great to see progress on the first Stargate in Abilene with our partners at Oracle today. Will be the biggest AI training facility in the world. The scale, speed and skill of the people building this is awesome. And then this story comes out a week later, bloody hell. So, final question, how do you feel, what do you think about this Fidji Simo? Forgive me if I messed up the
(47:19):
name there. What do you think about her becoming CEO
of Applications and Sam Altman doing something else? Yeah?
Speaker 2 (47:26):
So I haven't actually done reporting on this myself, but my sense of what's happening is Altman's not a good manager. He's a fundraising CEO. He's not someone that can run the company. And I think probably what happened is that after Mira Murati left, she was the one that
(47:49):
was actually doing the day-to-day operations and the running of the show. After she left, he then made a big show of, I'm going to be much closer to the work now, I'm going to do the day-to-day running. And probably his time is up doing that, because in my book, I talk a lot about how he's not
(48:12):
good at that, like he's not good at making decisions. He's very conflict-averse. So what
he does is he'll just say he'll agree with every
single team even when they're disagreeing with one another. And
it causes chaos, and it causes rifts where the person at the top is not able to make
a decision and say we are all going to go
this way now and some of you are going to
(48:33):
be unhappy. Like he does not do that, and so
it just leads to a lot of tumult and chaos. Part of the reason why OpenAI has had so many product releases and features and things like that, I think,
is actually also a product of this in that he
doesn't want to tell any team, like all of these
product releases and features are different teams working on these things,
(48:53):
and he doesn't want to tell any team like we're
going to have this person release first and have their
moment in the sun, and then we're gonna like work
a little bit more and then you get your moment
in the sun, you know, a year later. He's like
everyone gets their moment in the sun. Like we're gonna
do releases. We're gonna do like twelve days of Shipmas. We're gonna just release. That was insane, twelve
(49:15):
days, twelve—
Speaker 1 (49:17):
Days of Shipmas, for the listeners that don't remember, that was when they claimed they were going to release twelve new products.
Speaker 2 (49:23):
Twelve new products over the twelve days.
Speaker 1 (49:25):
And it wasn't twelve new. It was like four new products,
and like some of them were like an API for
an API. It's just so strange. It feels like, while you're describing an empire, you're also describing this kind of very petty underbelly. It really does mirror British colonialism, right?
You've got a guy who doesn't want to rule, who
(49:45):
wants the power of a ruler and all the assets,
but someone else, ideally in another country, should take responsibility. Yeah,
truly awful.
Speaker 2 (49:55):
I mean, this is the paradox of empire. It's like, it feels inevitable because it feels so strong, and it also feels so weak when you start to look underneath the surface.
Speaker 1 (50:07):
It was a really great book, and I really
appreciate your time. Where can people find you?
Speaker 2 (50:13):
I am on LinkedIn and Bluesky these days, and also on my website karendhao dot com, and yeah, reach out. I have a contact form there and I try
to respond to as many people as possible.
Speaker 1 (50:24):
Wonderful. Thank you so much for joining us. I'm of course Ed Zitron. You'll now get a thing I recorded over a year ago that people still complain about, about where you can find stuff. Thank you for listening. Thank
you for listening to Better Offline. The editor and composer
(50:45):
of the Better Offline theme song is Mattosowski. You can check out more of his music and audio projects at mattosowski dot com, M A T T O S O W S K I dot com. You can email me at easy at betteroffline dot com or visit betteroffline
dot com to find more podcast links and of course
my newsletter. I also really recommend you go to chat
(51:06):
dot wheresyoured dot at to visit the Discord, and go to r slash betteroffline to check out our Reddit.
Thank you so much for listening.
Speaker 3 (51:15):
Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website coolzonemedia
dot com, or check us out on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.