
December 5, 2023 47 mins

OpenAI, which you may have heard a lot about lately, is the company that developed ChatGPT, a wildly popular AI bot which you most certainly have heard of. OpenAI’s board of directors recently purged the company’s CEO, Sam Altman, and various stakeholders – employees, investors, Microsoft – saw to it that Altman was reinstated. The board itself then faced a purge. This particular collision has it all: Silicon Valley innovation and Silicon Valley hubris, money, managerial snafus, ugly battles, promising outcomes, and, of course, artificial intelligence. AI is set to transform the world, we’re told. Ingenuity and upheaval at OpenAI offer a way for us to consider all of that. Parmy Olson and Dave Lee are both Bloomberg Opinion technology columnists.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome to Crash Course, a podcast about business, political, and
social disruption and what we can learn from it. I'm
Tim O'Brien. Today's Crash Course: OpenAI versus Sam Altman.
OpenAI, which you may have heard a lot about lately,
is the company that developed ChatGPT, a wildly popular

(00:23):
AI bot which you most certainly have heard of. OpenAI's
board of directors recently purged the company's CEO,
Sam Altman. High drama ensued, and various stakeholders (employees, investors,
Microsoft) saw to it that Altman was reinstated. The board
itself then faced a purge. Oh my. This particular collision

(00:45):
has it all: Silicon Valley innovation and Silicon Valley hubris, money,
managerial snafus, ugly battles, promising outcomes, and of course, artificial
intelligence. AI is set to transform the world, we're told.
Ingenuity and upheaval at OpenAI offer a way for

(01:06):
us to consider all of that. So I've invited Parmy
Olson and Dave Lee onto Crash Course to help us
outline the lessons of this particular tale. They are both
Bloomberg Opinion technology columnists. Parmy is based in London and
Dave is in New York, and they bring a wealth
of experience and insight to today's show. Welcome, my friends.

(01:26):
Thanks, Tim. Thank you. Let's start with Sam Altman himself.
Eric Schmidt, Google's former CEO, recently compared Altman to Steve
Jobs and said OpenAI's board was wrong to can
him, and I quote Schmidt here: "These founder CEO types
are unusual, they're incredibly valuable, and they change the world."

(01:47):
Schmidt said that at a recent Axios conference. Dave, what
do you think of that observation about founder CEOs, and
the sort of idea that there's a unicorn executive out
there who is indispensable?

Speaker 2 (01:59):
I think, look, that's always been the Silicon Valley view of,
you know, great founders and what they can do, and
this idea that you should follow a founder's vision rather
than, say, put it to committee. And you know, Steve
Jobs is obviously the most famous example of that. Kicked out
of Apple, returned again around a decade later, and of course,
you know, the rest is history, and what happened with
Apple surely wouldn't have happened without Steve Jobs coming back.

(02:22):
And we've seen this since again and again, whether it's,
you know, Mark Zuckerberg or Elon Musk or any sort
of tech founder, even people like Travis Kalanick, you know,
who founded Uber. There's this sense that there's this founder skill,
this magic, this uniqueness that should really be sort of
listened to, and although it can be questioned, you know,
there's a sort of bias towards always backing the founder.

(02:45):
Speaking of Sam Altman, of the many comments made about
Altman over the past week or so, I think, you know,
Paul Graham, who founded Y Combinator, where Sam Altman sort
of made his name, said a quote that kind of
rang out around everyone. He said, you could parachute
Sam into an island full of cannibals, come back in
five years, and Sam Altman would come out as king.
And I think that's kind of the view of him
in particular, and it does sum up, as you say,

(03:07):
this culture of revering founders.

Speaker 1 (03:10):
I don't know actually how to interpret the idea that
you're going to parachute him onto an island full of
cannibals and he'll come out the winner, because that could
speak to actually just ambition and being extremely, as they
say in Silicon Valley, goal-oriented, as opposed to necessarily having
innovative genius. And I think, you know, ideally you want
a combination of both if things are going to get done.

Speaker 3 (03:30):
But you know, if I can just jump in. It's
interesting you mentioned Paul Graham, Dave, because Paul Graham, I think,
is part of the reason why this mentality, this almost
doctrine, exists in Silicon Valley. When he started Y Combinator,
he was like a guru among startups, and his teaching,
as it were, was that the founder is all that matters.

Speaker 2 (03:51):
In a way.

Speaker 3 (03:52):
The technology doesn't matter. It doesn't matter what technology a startup
has under the hood. What matters is if the founder
is a visionary hacker type who wants to build
an empire, and if you, as an investor, can find
that kind of person, then you should give them free rein.
The board should give them free rein, and they should
be able to do whatever they need to do in

(04:12):
order to build the empire. And that's why, in most cases,
someone like Mark Zuckerberg has a majority voting share; the board
does whatever he wants, pretty much, even through the most
controversial times. Google's founders also had a lot of control
of the company, outsized control. So I think a lot
of it comes down to this kind of ideology that
Paul kind of created and Sam perpetuated when he took

(04:35):
over as Y Combinator head.

Speaker 1 (04:38):
Steve Jobs famously had Bill Campbell as a management coach,
and Bill Campbell was sort of another one of these
executives who, in a different era, was a guru to
various Silicon Valley titans. But I sort of wonder if
Bill Campbell's managerial advice in the Steve Jobs
era was qualitatively different than the advice that Paul is
giving Sam Altman right now. Is that the case,

(05:00):
Parmy? Has there been an evolution in the kind of
advice that nascent Silicon Valley titans are hearing from their advisors?

Speaker 3 (05:09):
I have to admit I'm not familiar with the advice
that Steve Jobs got directly, but I think maybe there
is a questioning, more and more, of whether these founder
types should have all that control. That may still be
the case at Facebook, but I mean, this is what
made OpenAI so unique, right? It had
a board that didn't just have teeth, it actually used them,

(05:32):
and it used them to fire him. So I don't
know if that's necessarily changing, but it certainly was very,
very different at OpenAI.

Speaker 1 (05:39):
Talk a little bit, Parmy, about Sam Altman's foundational experiences
prior to coming to OpenAI. He had been an entrepreneur,
a young computer whiz, and then he lands essentially at Y
Combinator and ends up running it. Give us his bona
fides before he came to OpenAI.

Speaker 3 (05:58):
Well, he paid his dues as a startup founder who
created a failed startup, as so many startup founders do.
He created a startup called Loopt, worked on it for
a number of years, devoted his life to it, and
then eventually sold it to another company, which you might
see as a success, but in some ways it wasn't
seen that way. And then he went on to become
an advisor to Y Combinator, which is basically one

(06:20):
of the world's most successful startup accelerators of
all time. Companies like Reddit and Stripe and
Airbnb all came through it. And you know, he became
known for having this really ambitious view and pushing startups
to be more ambitious. He famously said to the
founders of Airbnb, you should change the numbers in your

(06:41):
pitch deck to investors from millions to billions. And then
he kind of went into this stage where he got
really into futuristic technology, and he was kind of gazing
into the future with things like mind uploading, kind
of uploading his consciousness into cloud computers and exploring the
universe, which he's talked about publicly. And he then started
investing in very futuristic technology like nuclear fusion and life extension technology,

(07:06):
and he pushed Y Combinator in that direction. But his
heart really was in AI. Apparently he
had this kind of realization at some point in his
late twenties that humans aren't really all that unique and special,
and our brains can be simulated by machines. And if
that's the case, we can build a machine that is

(07:27):
as unique and powerful as the human brain. And that's
what he set out to do. But with this extra
kind of level of idealism around spreading all the benefits
of that to humanity.

Speaker 1 (07:38):
Well, we could go down so many roads here. This
reminds me of the Silicon Valley chatter around the Singularity
that the Google guys were so fascinated with. It's interesting
to me that if you get enough technological success, the
next step in your thinking is how do you perpetuate
yourself forever inside of a machine. But maybe that's a
conversation for another day. Dave, you know, the idea of

(07:59):
highly talented, motivated people being given free rein so they
can be creative isn't limited to Silicon Valley, obviously. There's
a long history of it in the business world, you know,
going back to the Rockefellers and Henry Ford and in
completely different industries. And it's in the arts, right? We
know that film directors and painters and others are sort

(08:20):
of seen as masters or mistresses of their own destinies,
and people shouldn't intrude on that. Does all of that
fairly accrue to Sam Altman in this case? Is he
someone presiding over a company who needs a board to
support him but not get in his way?

Speaker 2 (08:38):
Well, I think it's interesting, you know, thinking about Altman's upbringing,
I guess, through Y Combinator, because what that process did,
I think, is gain him a lot of allies and
a reputation as being, you know, this real sort of
network guy in tech, and I think that has allowed
him to move into these future roles with a certain
level of trust. I thought it was really remarkable. When

(08:58):
we were first hearing about him being fired, one of the
people to immediately sort of go on to Twitter was
Brian Chesky, the CEO of Airbnb, which was a Y
Combinator company, immediately jumping to Sam Altman's defense and almost
kind of reporting on the situation himself, which was bizarre.
So I think, you know, although these founders do get
this kind of reverence, it does have to be earned,

(09:18):
and I think the way that Sam Altman has earned
that is not so much by being seen as this
kind of visionary genius necessarily, but as this kind of smart networker,
this smart sort of bringer-together of people. And the
thing that made Y Combinator stand out was this really
fast acceleration. When companies would join Y Combinator, they had
to set a particular metric that they were going to grow,

(09:39):
and every two weeks they'd check that it was, you know,
two x, four x, and so forth. And I think
that's the ethos that Sam Altman has brought. Now the
question is, you know, was that something that needs a
board to rein in, or is that something where a
board needs to sort of sit back and say, okay,
off you go? And you know, I think Sam Altman
must have been mindful of the reputation of founders, particularly
people like Mark Zuckerberg, who, you know, the board of

(10:01):
Facebook could never fire Mark Zuckerberg; they don't have the
power to do that. I think he was mindful of
how that is seen, and I think he was trying
to make a statement by saying that not only did
he not have any financial stake in OpenAI, but
he was serving at the behest of the board. As
it turned out, his reputation ended up being more influential

(10:21):
than the votes of the board at OpenAI. So even
once they made that decision, the rallying and support around
Sam Altman, thanks to his reputation within the tech industry
built over the past several years, meant that was a
bigger protection for him than having board votes would have
been anyway.

Speaker 1 (10:37):
Did you want to jump in there, Parmy? Yeah.

Speaker 3 (10:39):
And I think that's a great point about the amount
of protection that he has. Silicon Valley is such a bubble,
and if you're a founder, if you're successful and you've
made a very valuable company, and you're very wealthy and
you're a billionaire, you have to behave so badly
to fall from Silicon Valley's graces, and Elon Musk is
a prime example of that. Sam Altman's return as

(11:01):
CEO is a prime example of that too. But I just
wanted to go back to one point you mentioned, Tim.
You were kind of casting our minds back to the
Rockefellers and people in the arts who have this kind
of power as founders and as leaders. But I really
think that in this situation, someone like Sam Altman and
the leaders of big tech companies are so different to

(11:21):
any of that, just because of the scale and wealth
and resources that these companies have, which are unprecedented in history.
I mean, Google today reaches something like four billion users worldwide.
That's half the global population. And although ChatGPT has
something like two hundred million regular users, I think Microsoft services,
which are going to use more and more OpenAI technology,

(11:44):
touch more than a billion people around the world. So
we're talking about a few people, these founders, with an
incredible amount of free rein, who have an incredible
global influence at the same time, more so than any
empire in history.

Speaker 1 (11:58):
And now, of course, the product they have in hand
is a revolutionary and transformative product. With Google, you know,
in its pre-AI phase, it was a ubiquitous search
engine that sort of opened the world up to its users.
No one really worried about it taking the world over.
You know, it was an information provider. It could be,
you know, a cesspool of disinformation too. And we'll

(12:20):
get into this more, I think, in our conversation. So I
think this particular product, AI technology, is shaping people's
views in different ways as well. But Dave, tell me
what you think.

Speaker 2 (12:32):
Well, I think it's also worth stressing, in terms of
Altman's value to OpenAI, you know, this is a
company that people are talking about being valued at around
eighty-six billion dollars, and, you know, a lot of
this is because of the personality of Sam Altman. The
value of OpenAI comes from the fact that for
the past year, every week or so, it seemed like
Sam Altman was shaking hands with a world leader. He

(12:53):
was sitting there in front of Congress talking about AI,
talking about regulation. And so there is value, there's clear
value, in what Sam Altman is to OpenAI. And
I think its reputation, and the deal with Microsoft, and
the fact that it was and should still be positioned
as the front runner here, is down to the fact
that he has this ability to communicate what the company

(13:13):
wants to do clearly, and that doesn't necessarily come all
that naturally to other people in Silicon Valley. So there
is an intrinsic value in him himself.

Speaker 1 (13:21):
You know, with all the Steve Jobs comparisons: Steve Jobs
was not an engineer. Steve Jobs wasn't an inventor of technology.
That was Steve Wozniak, his early partner. Jobs was
the person who was capable of articulating the kind of
nirvana vision of a laptop on every desk and then
a mobile phone in every hand. And he was a
good manager, and he was good at ultimately at sort

(13:42):
of harnessing creative people, and that is a skill as well.
But what still feels very mysterious to me, you guys,
is why he got pushed out to begin with. Correct
me on this if I'm not up to date, but
at this point my understanding is the only kind of
public reason that's been given for why the board pushed
him out is that the board, in their own words more
or less, and I'm synopsizing, felt that he wasn't being

(14:04):
candid with them in his communications with the board. And
there is now, I think, an Altman-commissioned investigation into
what led up to that. It's being led by Larry
Summers and some other board members, which also kind of
speaks to whether or not that can actually be an independent investigation.
But anyway, how did we get here? Why did the
board end up feeling concerned about this wunderkind in their midst?

Speaker 3 (14:29):
Well, I think, first of all, no one still knows,
apart from, it seems, the people who were on the board,
and possibly Sam, because in a couple of his first
public interviews, he would not answer questions about why he
was fired. But I think the reason, because it's all
speculation now, but it seems like the reason kind
of goes back to what we were just discussing about

(14:50):
the outsized influence and power of these tech founders. It
goes unchecked in most cases, and from what people have
been saying and reporting, the board was uneasy, essentially, with Sam's
behavior outside of OpenAI and some of these ventures
that he was pursuing. So, for example, the iris-scanning
company Worldcoin, which aimed to scan the eyes of

(15:12):
billions of people around the world in order to identify
them when the Internet becomes flooded with bots, and also
to help distribute the trillions of dollars of wealth that
Altman believed would come about through the attainment of AGI.
He was also pursuing, according to reports, creating a so-called
"iPhone for AI" with Jony Ive, who was

(15:34):
the former lead designer at Apple. And on top of
that, he was also reportedly talking to Middle Eastern sovereign
wealth funds to raise something like ten billion dollars to
build a chip company. This is a lot of fingers
to have in different pies, and I think it sounds
to me like the board was just overall kind of
uneasy with where he was going with all these different

(15:55):
ventures and whether he was going to use OpenAI's
technology for any of that. So when they say he
wasn't candid, it seems like they were kind of caught
on the back foot with some of this stuff, and
they didn't feel like they were performing their fiduciary duty
to humanity, which was actually their mission as the board.

Speaker 1 (16:12):
Yeah, I want to get into some of this in
more depth, you know, specifically this particular board's unusual mission.
But one of the other things that's been out there, Dave,
is it's been reported, and then dismissed by others, that
there was this mysterious Project Q* being hatched inside OpenAI
that maybe ran afoul of this idea of
AI being harnessed ultimately for the good. It's unclear to

(16:35):
me whether the board had concerns about this project.
Others have said, actually, there were no letters written about it.
So the reporting has been very murky around this, and
I don't know that we should rely on it, but
it gets to this issue of the board being concerned about,
you know, the idea of bots controlling the world and
OpenAI losing its mission along the way.

Speaker 2 (16:56):
Right, right. I mean, the ultimate endgame concern here is
that an AI is created that poses a real risk to
the human race, and any step along that journey makes
people pretty, pretty nervous. Now, the question around whether this
was a factor in Altman's firing, as you say, that's
still murky at this point, and I think actually
it sort of stresses this issue, which is that had

(17:16):
the board that forced Altman out given reasoning
and public explanations for why it did that, we could
have avoided what's now going to be, you know, this
endless filling of the void with possible conspiracies or other,
you know, sorts of hyperbole around what OpenAI is developing. Now,
Q*, this sort of rumored project that supposedly can

(17:37):
solve basic math problems. I mean, for starters, calling it
Q anything seems like a mistake. It's very Bond-like,
I mean, it's remarkable. And just days later, Amazon
named one of its AIs Q as well, which I
just find staggering, that these companies would be so sort
of mindless in doing that. But that's another, that's another
matter altogether.

Speaker 1 (17:55):
It's like naming something X, right?

Speaker 2 (17:58):
Yeah, I mean, you might as well call it Killer AI.
I mean, it's just going to make people sort of
nervous or just conspiratorial, which, you know, we have enough
of that right now in the world, of course. But look,
the issue is, you know, it's sometimes very hard
for people who aren't experts in AI
to draw this line between something that might be able
to do basic math and something that might destroy the universe.
But supposedly there was some concern within OpenAI that

(18:20):
this was, you know, a significant breakthrough, a significant step.
And, you know, speaking of X, Elon Musk was in
New York recently talking about the firing of Sam Altman,
and he made what I thought was, unusually for him lately, a pretty
reasonable point. He was saying, look, either the board needs
to be clear about these concerns and allow the public
to know what they were, or, if there wasn't a
good reason, well, they have

(18:42):
stood down now, because then, you know, they needed to
be accountable for that error. But I think either way
there needs to be a filling of this vacuum around
what exactly the concerns may have been. Because still, one
of the big unknowns in all this is, you know,
one of OpenAI's co-founders, Ilya Sutskever, who was
the chief scientist at OpenAI and was on the board,
and he was the sort of pivotal changed mind on

(19:03):
the board that meant the board had a majority to
fire Altman. We don't quite know what it was that
concerned him so much or why he changed his mind.
I mean, he subsequently came along and said he regretted
the decision, and Sam Altman, in part of his statement
about returning to the company, said that he
harbors no ill will towards him. And, you know, I've
never heard that phrase said with much sincerity. So I'm

(19:26):
kind of keen to see where he lands. Yeah, I
mean, it's like a football owner saying they back the manager.

Speaker 3 (19:31):
You know.

Speaker 2 (19:31):
It's the sort of ominous sign that things might not be
so well. And so this is one of these details
that we need to sort of hear more about. And
I think OpenAI, if it wants to be at
the front of talking to governments and being this hugely
influential company, the trade-off has to be that
it is more public, perhaps, than tech companies would typically
be in the past. It doesn't get to hide behind

(19:53):
this super secrecy that Silicon Valley is more comfortable with.
It has to tell us what was going on here.

Speaker 1 (19:59):
We should also note, as Elon Musk says, he was
an early board member at OpenAI along with Sam Altman.
Before Sam became CEO, he was on the board, and,
as Parmy noted earlier, Elon has run roughshod over
his own board. We're going to take a break in
a sec. But this idea that AI is going to
take over the world and that OpenAI is sitting

(20:19):
on top of a Pandora's box, is that warranted, Parmy?

Speaker 3 (20:23):
First of all, I would say, in answer to that
question, that I think OpenAI secretly loves that people
think that, because, however much they keep it under wraps,
it makes their technology seem that much more powerful.
You know, for people to fear it
doesn't mean they don't want to pay for it, necessarily.
Companies kind of see that almost as a kind of

(20:46):
nice, attractive feature of software. But I would
say that with something like this new development with Q*,
for example, that OpenAI is working on, this ability
to do grade school math, this means that
AI can reason, because when you do math, you have
to figure out steps to solve a problem, and it's

(21:06):
a little bit like solving problems in real life. And
so right now, ChatGPT is amazing because it can generate language,
it can generate text, and a lot of that is
just statistically based predictions. But if a model can do
things much more strategically, if it can plan, if it
can solve problems, many people say in the AI field

(21:28):
that this is a step towards more general intelligence. And
the reason why I'm saying that in response to your
question is because, although I don't think AGI is necessarily
going to take over the world, I personally don't subscribe
to the rogue AI theory, certainly not anytime soon,
this idea of humans outsourcing more than just tasks, but

(21:49):
actual responsibility, to AI is certainly going to be happening
in the next couple of years with these kinds of
developments like Q*. And Google's new model called Gemini,
which hasn't been released yet, reportedly also has this expertise in strategic planning,
so it's kind of similar to what OpenAI is
working on, and I think that's going to be really
working on, and I think that's going to be really

(22:10):
interesting to see how business leaders, how anyone, uses these
kinds of systems not just to generate text or to
ask for advice, but to actually carry out some of our
day-to-day work tasks and responsibilities.

Speaker 1 (22:24):
On that note, we're going to take a quick break
to hear from a sponsor and then we'll come right back.
We're back with Parmy Olson and Dave Lee, and we're
discussing all of the recent upheaval surrounding OpenAI and
its impact on its product, ChatGPT. Dave, Sam Altman

(22:45):
was the CEO and on the board at OpenAI. He's
now just the CEO. How do we interpret that? Did
he win the power struggle that ensued after he was
forced out and brought back. Does he have more influence now?
Does he have less influence? Can we even know?

Speaker 2 (23:03):
Yeah. I mean, look, I think a lot depends on
a couple of things, one of them being
what the rest of this board looks like in
the coming weeks and months. You know, the interim board
of three is going to expand, possibly to as many
as nine, though they haven't quite confirmed the shape
of that, and there are going to be lots of
questions about the makeup of that board.
I mean, already the board has lost

(23:24):
its only women; it's now an all-male interim board,
so there's going to be pressure to reflect diversity there.
But then also there are going to have to be people
who are incredibly versed in safety around AI, in policy
and AI, people that are going to be at least
seen as potentially being able to hold someone like Sam
Altman to account, or at least sort of rein in,
perhaps, some of those commercial interests that might sort of

(23:46):
provide conflicts, or just kind of have him going full
steam ahead. So I think we'll know more about
Sam Altman's power once we know more about that board.

Speaker 1 (23:54):
Parmy, you were mentioning earlier how Sam Altman was allowed
to pursue all these other ventures outside of OpenAI,
including companies that possibly would have ended up as competitors,
getting funding for those rather than possibly getting funding for
OpenAI, just all of the myriad financial and professional conflicts.
One of the other things that I find very strange
One of the other things that I find very strange

(24:15):
in the story is that OpenAI is a nonprofit
company housing a for-profit subsidiary that offers a product,
ChatGPT, for free, unless you want extra ChatGPT features,
and then, as a consumer, you have to pay extra
to get those. This confuses me. Even in that structure,

(24:36):
at one point the company had a valuation of about eighty-six
billion dollars. Isn't this structure in and of itself
bound to cause problems?

Speaker 3 (24:44):
One thing I underestimated about this whole story when it
happened was how much power this board had. I
underestimated, and didn't fully appreciate, the fact that they really
could fire Sam Altman. But the idea of trying to
combine for-profit and nonprofit is not completely unusual in
Silicon Valley. Mozilla is connected with the Mozilla Foundation, which

(25:06):
is a nonprofit. Signal, the encrypted messaging app,
is also a nonprofit. And even just recently, two
of the leading AI firms in Silicon Valley, one
called Inflection, the other Anthropic, the latter of
which split away from OpenAI a couple of years ago,
are also trying to thread this needle between being

(25:29):
businesses that make money but also building AI that is
safe and beneficial to humanity, and so they are also
tinkering with these kind of unusual corporate structures, like being
a public benefit corporation. I think in the case of Inflection,
they're structured in such a way where they have to
prioritize the environment and consumers on the exact same level

(25:53):
as their investors. So profit is not the number one
priority for them. It has to be upheld with these
other things. Now, this is clearly very difficult to
pull off. OpenAI, it doesn't seem to have worked
for them. DeepMind, which is the big AI lab
that's part of Google, also tried to do this. They
tried to spin out of Google for several years. They

(26:14):
tried to create a governance structure called a global interest company.
They wanted to be like a nonprofit-style company. It
totally failed. Google nixed it. So there's this history of
AI builders trying to do this, because they know this
technology is so transformative and they don't necessarily feel comfortable
with it being controlled by monopolistic corporations, essentially. But

(26:37):
it's been very difficult to figure out how to make
it work.

Speaker 2 (26:40):
I think that's spot on. Isn't it the case
that this model of a nonprofit running a technology product
comes under a lot more strain given the sheer
amounts of money necessary to pull it off, with AI
absolutely eating power? Microsoft has paid, I think, up
to thirteen billion dollars to have OpenAI use its
servers to do the crunching that makes AI possible. So,

(27:01):
you know, Mozilla is a great example there of an
organization that runs a browser, but creating and running a
browser isn't in the same league as having to build
cutting edge AI. And even in the case of Mozilla,
you know, much of their funding comes from companies like
Google and others. So it's always been a bit uneasy
this tech model with nonprofits, and it's been particularly strained

(27:23):
when it comes to AI just because of the magnitude
of what needs to be done well.

Speaker 1 (27:27):
And you also have, you know, a chief executive whose
mission is to grow the company, increase profitability, and
expand the market share of the product it's selling, and then
a board which is invested with responsibilities for helping to
achieve that, but also for being kind of a self-regulatory
device that tries to stand in the way of abuses,
all of it, as Parmy was saying. And you know, I

(27:49):
think this is the historical tension anyway between boards and executives.
It happens in lots of industries. Boards came into being
to make sure, especially at publicly traded companies, that
investors' interests were being looked after and that you didn't
have rogue CEOs. And there are a lot of situations, even
outside of tech, where you can have a rogue CEO.
But of course, and again, as Parmy mentioned earlier, we're

(28:11):
talking about a product and a technology at play here
that scares people. You know, the board, Parmy, continues to
interest me because, as you've noted in one of your columns,
OpenAI had a chance to get more women on
its board and it just drove right past that; it
now has an all-male board. One of the people
on the board that was reconstituted, Helen Toner, interested me

(28:34):
because Helen Toner is an academic. She has done a
lot of work herself on AI and the uses of technology,
and it has been reported that she had worked on
a paper that raised some questions about whether or
not OpenAI's own products could be problematic, and that
Sam Altman confronted her about that and they had a dispute.

(28:56):
She was a pivotal person in his departure based on
the report we've seen. Tell me about those two things,
about the fact that we have an all male board
there now and what happened to Helen Toner.

Speaker 3 (29:07):
Well, I'll start with Toner, and you're absolutely right. She did
write that research paper. You can see it; it's public
and was written in association with Georgetown University, and she's
scathing about OpenAI. She referred to frantic corner-cutting
in the process of building ChatGPT, and then she
compared OpenAI with its rival Anthropic and said that Anthropic,

(29:30):
which was the group of people at OpenAI that
broke away to make safer AI, had done a better
job of more slowly deploying a product, making sure that
it was safe before they put it out into the wild.
And Sam Altman apparently was livid about that. He didn't like
the fact that she was putting that into the public
domain when, you know, OpenAI is being investigated

(29:51):
by the Federal Trade Commission for potentially infringing on people's privacy.
He told her, according to reports, that it was compromising
the company, and so, yes, there was a lot of
tension there, but I think tim, I mean, you could
look at it both ways. Yes, what she was doing
was compromising open AI's reputation, but she was an academic.
They knew that when they brought her on the board.

(30:13):
And as you also mentioned, this is kind of what
boards are supposed to do, push back a little bit,
and boards typically have a fiduciary duty to shareholders. This
was not the case with open Ai. The board had
a fiduciary duty to the open ai mission to benefit humanity.
It's kind of, if you think about it, quite a
bizarre phrase. Feels weird saying that, but that is literally

(30:34):
how it was written and signed and agreed by everyone.
But Toner was the one that lost her seat, and
so was Tasha McCauley.

Speaker 1 (30:42):
It smacks of Sergey Brin and Larry Page's do
no harm in their early Google iteration.

Speaker 3 (30:48):
Absolutely yeah. And Google also famously said in the very beginning,
we don't want advertising, we hate advertising. And now they're
the world's biggest ad giant.

Speaker 1 (30:56):
So there you go, Ah, the children, but we need
them among us. Whatever the merits are of how the
board intervened here, and I have sympathy for a lot
of them, it also had quite the clown show aspects
to the whole thing. You sort of wondered whether this
could have been choreographed or sorted in a more private way,
but maybe that was impossible here. Is there another way

(31:17):
this could have played out at open AI?

Speaker 2 (31:19):
I mean yes. And the way it could have played
out, the phrase would be bringing your receipts. If
the board had the feeling that there were issues with
how Sam Altman had been communicating with them, it needed examples.
It needed a way to sort of explain its thinking
in a way it just hasn't done. It kind of
reminds me of when you have a sort of argument
with a spouse and you kind of well, give me

(31:40):
one example, and you go, well, I can't think of
one right now, but you never quite get anywhere, do you?
And nobody gives any concession on either side, and that's
kind of where we're at with Open AI. It was
being described as a coup, and I think that's kind
of accurate, and that there was a sort of swing
for the king and you've got to have everything in
a row and if you swing, you better not miss,
and that's what they did. They swung and
they missed, because they didn't have the reason to back

(32:02):
it up. And you know, as we've now discovered as well,
they were fighting the forces of the commercial interests in
OpenAI, so you could argue they perhaps never stood
a chance. But I spoke to a business professor
recently who made the point that talking about this sort
of nonprofit management and so forth, we shouldn't necessarily write
off this model because what this could have just been

(32:22):
was incompetence in this board in particular when it
came to carrying out this coup or whatever we want
to describe it as.

Speaker 3 (32:29):
I completely agree with that, and I think that's what's
such a shame about the whole thing, because it does
sound like they did come across as quite incompetent, and
if you look at them they didn't have a lot
of experience on other boards. So if they had just
executed it a little bit more professionally, then I think
people could maybe consider this kind of model as potentially working.
It's not a terrible model. I mean the idea behind

(32:50):
it is quite noble when you think about it. It
just didn't work.

Speaker 2 (32:54):
That's the thing, you know, talking about the paper from
Helen Toner. I mean, I think it's worth remembering that
that paper, like you say, was written, it was
public, and Helen Toner was still on the board of OpenAI.
So that was an indication that that dynamic did work, right?
I mean, it created tension, but ultimately the governance was
still in place. It was just this sort of follow-up
move that caused all this chaos.

Speaker 1 (33:13):
Let's take another quick break, my friends, and then we'll
come right back and get into our last act here. Today,
we're back and we're chatting about Sam Altman, OpenAI, ChatGPT
and the AI revolution with Parmy Olson and Dave
Lee. Dave, it's not clear to me that OpenAI's

(33:35):
management problems are solved and they're not operating in a
protected market as it were. They have a head start
they have a product that is early out of the
gates and much beloved, but they have to deal with
a lot of competitors who are in this market too,
And do you think that their management problems are reined
in enough that this episode won't affect their longevity?

Speaker 2 (33:58):
Well, it's interesting. I was at an event the other night, and there
just happened to be a fairly senior person from OpenAI there,
and they said to me, you know, that prior week,
when all this was up in the air, yes, it
was disruptive. They didn't know whether they were going to
be working for Microsoft in a week, or any of
these other sort of different outcomes that we could have seen.
But as of sort of Monday and Tuesday, the week after,

(34:18):
things were basically back to normal and people were coding again,
people were planning again, and it's almost as if nothing
had ever happened. Now, of course, a big
thing had happened, and we've yet to see how that
fully shakes out. But I think given where they were
at on that Friday evening when this announcement was made
and the weekend of complete madness, I actually think it's
been a pretty impressive sort of pulling back into just

(34:41):
sort of normality, So we'll see whether that has any
long-standing impact. I think, you know, those other companies,
Google in particular. Parmy mentioned Gemini earlier, one of
their AI projects that's delayed for various reasons. So a
company like Google's probably thinking, oh, wouldn't it be good
if OpenAI was slowed down a little bit, because
we could do with catching up here. So I think
competitors were mildly delighted to see that. But I still

(35:03):
think OpenAI is going to be considered to be
the leader here still, and that perhaps wouldn't have been
the case had they successfully axed Sam Altman.

Speaker 1 (35:12):
It's been almost exactly a year since ChatGPT was released publicly.
What has OpenAI done since then to protect and
expand its franchise? I think ChatGPT can accept more queries.
It's not only text-based. Its database is more up
to date. But as you've noted also, mistakes abound

(35:33):
still when you use the product. So how do you
see the evolution of the company over the last year?

Speaker 3 (35:41):
Oh, over the last year? I think really the success
of ChatGPT, first of all, took them by surprise. They
were taking bets internally about how many people would use
it within the first week, and the top bet was
one hundred thousand users and it ended up being more
like a million. So I think, really the past year
has just been open ai scrambling to keep up with
viral success that it absolutely did not anticipate. Having said that,

(36:05):
it has from a business perspective, done incredibly well to
keep up with and satisfy the demand for people who
want to use this, whether you're a consumer who wants
to spend twenty dollars a month on ChatGPT Plus,
or a company who wants to get access to open
AI's software and technology, or a company that wants to
access it through Microsoft. And I think that's really where

(36:28):
open ai has real stability is through that partnership with Microsoft.
Because the thing about Microsoft is that its big product,
through which it dispenses OpenAI's technology, is the cloud. It's
known as the Azure platform, and it has something like
eighteen thousand customers, and they're big names, you know, like Rolls-Royce, Adobe,

(36:51):
the Seattle Seahawks, just all sorts of random big companies,
And the great thing for Microsoft is that these customers
are locked in to this product because it's actually really
expensive to extricate all your systems from one cloud
provider like Microsoft and move on to another cloud provider
like Amazon. The customers and IT professionals hate this. They

(37:14):
really don't like the fact that they can't shop around.
But this is really good for Microsoft and also very
very good for OpenAI. So this past year, kind
of integrating themselves more and more with, and serving, their top investor, Microsoft,
which owns forty-nine percent of OpenAI, has really
stood them in good stead, and it certainly did over
this past weekend because part of me wonders would Altman

(37:36):
have been able to be reinstated if he didn't have
Satya Nadella pushing so hard for him.

Speaker 1 (37:40):
Well, and that brings us, Dave, to the question
I wanted to ask you about Microsoft. Given that Satya Nadella,
Microsoft's CEO, was very enmeshed in the open Ai upheaval,
he almost got Sam Altman and many of his employees
for a song. It probably would have been one of
the most epic, sort of cheap takeovers in corporate history

(38:01):
if that had panned out. Nadella has referred to Microsoft
as being the copilot company now, like the company
that makes products that sort of assist you in having
a better corporate life or a better personal life. And obviously
chat GPT fits right into that. Microsoft is a player here,

(38:22):
and it's a heavily influential player both in this drama
and where I think open ai is going to go correct.

Speaker 2 (38:29):
Yes, I mean it's not just a player, you know,
it's the player externally. I mean, there are other investors
in OpenAI, but it's only Microsoft that really has
the power here. I think, you know, Parmy was touching
on a really important point there, and one that I
think's worth remembering is that the winner in AI isn't
necessarily going to be the company that has the best

(38:50):
AI or at least the most sophisticated AI. It's going
to be the company that can put that AI into
applications that consumers are either already using or that they
want to use in future. And so when you have
Microsoft on board, that is incredibly powerful. Because people have
been using Word for years, people have been frustrated at
PowerPoint for years, Excel, you know all these things that

(39:11):
people do, you know, search changes. And people laugh about
Microsoft and Clippy, the little mascot that it used
to have, and in that sense, you know where we're
sort of heading. I'd forgotten all about Clippy. I
loved Clippy, he was underrated. But this is where we're
going again. And I think that's why this deal is
so valuable to both open I and to Microsoft, and

(39:33):
it's why Satya Nadella leapt into action, to step in
to stop his investment in OpenAI going up
in flames. When Microsoft said, you know, we will
take on and we will salary-match as many OpenAI
employees as are interested in joining us, I mean, that's
a huge thing. I saw one estimate
that it was going to cost around a billion dollars to

(39:54):
do that. Now, that's arguably getting an eighty-billion-dollar
company at a huge sort of Black Friday deal,
but that's a huge amount to add to your headcount. They
weren't even sure where they were going to put them.
They were clearing out desks at an old office that
used to be used for LinkedIn, which is one of the
companies that Microsoft owns. And so this would have all
been very, very difficult for Microsoft. What they've

(40:16):
come to instead, I think, is their sort of preferred option:
Sam Altman back in the job, and also, on this
new board that's being made up, they have an observer seat,
so they can't vote, can't have a say in the decisions,
but can observe the board. Now, at the
time we're recording this, we're still learning about what exactly
that means. I asked Microsoft; they weren't sharing any more

(40:37):
about what exactly they'd be able to observe and what
input they could have, or indeed who at Microsoft might
take that position. But Microsoft has gone from a situation
where it had a sort of very much outsider perspective
on open Ai, even though it invested more than ten
billion dollars, to having a situation now where it's got
a much closer relationship and we'll get a heads up

(40:58):
on any crazy decisions like this in future. One of the
things that was gobsmacking about how the Sam Altman firing
went down was that Microsoft learned just about the same
time I did, when the blog post went up, there
it is. And that, given the reliance that Microsoft has
on OpenAI and its importance to Microsoft's strategy.

Speaker 1 (41:16):
I mean, it's a major investor, for one thing.

Speaker 2 (41:19):
Yes, absolutely. And typically, for your huge investors, part of that
investment would mean getting a board seat, getting two board
seats maybe, but such was the peculiarity of OpenAI.
And also worth stressing: one of the reasons why
Microsoft was something of an outsider is because it worried
that being too close to OpenAI would raise some level
of scrutiny from competition regulators, sort of wondering, you know,

(41:42):
is Microsoft going to flex its power to stop
others getting access to good AI? So there's a lot
of factors at play, but in short, I think Microsoft
has found a scenario that suits it very very nicely.

Speaker 3 (41:53):
Indeed. Yeah, and I would just add that if it
had been a subsidiary of Microsoft, if Sam Altman had
been part of this advanced AI group, that's not what
Nadella wanted at all, because that would mean Microsoft takes
on all of the reputational risk, all of the legal
risk, for all the crazy stuff that OpenAI regularly does,
like putting ChatGPT out into the world. Microsoft would never

(42:15):
do that, because of copyright issues for a start, and crazy
things that this bot could say. That's why big tech
companies don't do the kinds of things that open ai does.
So it's much better to be an investor: take on
all the glow of OpenAI and none of the liability.

Speaker 2 (42:30):
We saw a sign of that, didn't we, Parmy, a
few years ago, which I'm sure you remember, when
Microsoft had an AI bot that it kind of released
onto Twitter. I believe it was called Tay, the bot. Yeah,
and almost immediately people realized that if they said certain
things to it, it would repeat them back to them,
and it became sweary, it became racist. And it was
just, this was just a tiny, tiny experiment, but it

(42:51):
caused Microsoft no end of reputational issues and things to
be accountable for. So you can totally see why they
want to be outside to some degree.

Speaker 3 (43:00):
Oh, people still cringe at that memory.

Speaker 1 (43:02):
And I think Microsoft is still licking its federal antitrust investigation
wounds from the nineteen nineties. That makes it a little,
as I put it earlier, hesitant to appear to
be the titan in this nascent industry.

Speaker 3 (43:15):
And the people who were dealing with that in the nineties,
like Brad Smith, the legal counsel, are still there, like
they all remember this stuff. So yeah, and Satya Nadella too.

Speaker 1 (43:25):
I want to ask you each about lessons that you've learned
from this dramatic little moment in Silicon Valley. I like
to ask people that at the end of the show.
So let's start with you, Parmie. What is something you've
learned watching this recent debacle that you didn't know before
it happened.

Speaker 3 (43:41):
Hmmm, it's a good question. I guess I would just
go back to my answer previously about the fact that
a board like this could actually topple someone like Sam Altman.
But I suppose the other thing is that this isn't
so much something that I've learned, but just a suspicion
the cynic in me has had confirmed: that Silicon Valley will

(44:03):
Silicon Valley, And even if you set up a board
and a governance structure that looks good to everyone, that
looks like it's there to help humanity, it will still
ultimately serve the purpose of its investors. And that is
precisely what happened after this dramatic weekend where Sam was
ousted and brought right back in. So a little bit

(44:25):
of hypocrisy there, and also just Silicon Valley being Silicon Valley.

Speaker 1 (44:30):
Dave, I don't know, can you still learn things? Both
of you know so much that I sometimes wonder.

Speaker 2 (44:35):
Learning every day. I've learned to listen to Parmy on
this podcast, and I'm glad you let her go first
so I can have a moment to think about this. It's
interesting, because on some levels it feels utterly predictable that
this happened. And I think one thing I've learned or
was surprised by, was I think we always knew that
at some point there would be some friction around the
direction of one of these companies. I'm surprised it's come

(44:59):
so soon for OpenAI. I thought there'd be more
sort of tangible threats or fallout or crisis that
we'd all sort of know about that would lead to
a moment like this. I'm surprised it's come sort of
from what seems to be more just sort of human
personality clashes than anything else. So that surprised me. The
rapidity is surprising. But I think that the lesson, if
we can take one so far, and I think it's yeah,

(45:21):
we're still learning the lessons. I think the lesson is
that OpenAI is a tech company. It's a tech
company like Facebook's a tech company, like Google's a tech company.
And I think I'm going to constantly remind myself of
this after this week, and remember that ultimately it
was the needs of the tech industry and the money
of the tech industry that essentially had the final say.

(45:43):
And so I think OpenAI, although we can talk
about governance and nonprofit status, I think ultimately it's still
a tech company, and we should cover it as such
and think about it as such. So that's my lesson
from this.

Speaker 1 (45:54):
I think we are out of time. Parmy and Dave,
thank you so much for coming on today.

Speaker 3 (46:00):
Thank you.

Speaker 2 (46:00):
Thanks, Tim.

Speaker 1 (46:01):
Parmy Olson and Dave Lee are both columnists with
Bloomberg Opinion. You can find their work on the Bloomberg
Opinion website and on the Bloomberg terminal. Here at crash Course,
we believe that collisions can be messy, impressive, challenging, surprising,
and always instructive. In today's crash Course, I learned that
no matter how talented or how smart you are, no

(46:22):
matter what industry you're working in, even if it's in
Silicon Valley, humans can be humans and they can do crazy,
crazy things. What did you learn? We'd love to hear
from you. You can tweet at the Bloomberg Opinion handle
at Opinion or me at Tim O'Brien using the hashtag
Bloomberg Crash Course. You can also subscribe to our show

(46:44):
wherever you're listening right now, and please leave us a review.
It helps more people find the show. This episode was
produced by the indispensable and decidedly non crazy Adam Azarakus
and me. Our supervising producer is Magnus Hendrickson, and we
had editing help from Sage Bauman, Jeff Grocock, Mike
Nitze and Christine Vanden Bilart. Blake Maples does our sound

(47:06):
engineering and our original theme song was composed by Luis Gara.
I'm Tim O'Brien. We'll be back next week with another
Crash Course.