
November 20, 2023 51 mins

In a dramatic turn of events, OpenAI's board of directors fired CEO and co-founder Sam Altman. Then they tried to hire him back. Then they announced a former Twitch CEO will lead the company. What the what?



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeart Podcasts, and how the
tech are ya? Now, normally I save tech news items

(00:24):
for Tuesdays and Thursdays, but over this past weekend a
pretty big sequence of events happened and it really merits
a deeper discussion. I'm sure most of you have at
least heard something about this story. The short version is
Sam Altman, the CEO of open Ai, received his walking

(00:46):
papers from the company's board of directors. Then the board
kind of flipped out and begged him to come back
to the company, and he ultimately decided, nah, I'm good,
y'all can do this on your own. And last I heard,
he has now joined Microsoft's Advanced AI department. Which is
a heck of a weekend. So today I thought we

(01:10):
would talk a bit about Altman, we'd talk a bit
about OpenAI, we'd chat about what went down behind
closed doors this past Friday, why the board of directors
fired Altman, why they then switched gears so quickly, and
what it all means going forward. Now. I did an
episode titled the Story of Open AI at the beginning

(01:32):
of this year, which published in January. I am going
to retread a lot of that same ground here in
a slightly different context, because it's necessary to really unravel
what was going on over the last weekend. So first up,
who is Sam Altman. Well, he grew up in Saint Louis, Missouri,
and as a kid he became really interested in programming.

(01:54):
According to The New Yorker, he was programming as early
as age eight, and he went so far as to
take apart a Macintosh computer in order to learn
how it worked. He also challenged social restrictions and taboos.
When a Christian group announced a boycott for an assembly
that was supposed to be focused on sexuality, Altman came
out to his community as gay, and he challenged them

(02:17):
to adopt an open attitude toward different ideas. And he
was just a teenager at the time, so very much
someone who is curious, motivated, and by most accounts I've read, fearless.
Altman attended Stanford, but he was only there for two years.
He studied computer science while he was there. In fact,

(02:39):
he studied artificial intelligence under some of the leading thinkers
in the discipline at Stanford, but he dropped out of
college in order to work on an app and a
business idea. So in many ways, it was the stereotypical
founder story of Silicon Valley. Right. You go to Stanford,
and when you're there, you're really there to make connections.
You don't bother completing your studies, You drop out of school,

(03:02):
you make a company, and then you get rich. It's
kind of like, you know, the whole idea of step one,
go to Stanford, step two, drop out, step three profit.
I guess you could argue that if you can make
a successful tech business, there's no real point to completing
your studies. I mean, if your studies are all about
learning the technology and it turns out you already have

(03:25):
a good mastery of that and you can make a
profitable business, why would you continue to spend money going
to school, unless, of course, maybe you would grow more
as a person and develop a deeper appreciation and understanding
of things that could perhaps help you when you make
decisions in the future. But don't listen to me. I

(03:45):
graduated with a degree in the humanities, so I have
these wacky ideas about how the experience of college is
about more than just learning a subject. But that's beside
the point. Let's get back to Sam Altman. So the
app he was working on with a few friends was
called Loopt, and it was meant to let users

(04:07):
share their location data with selected other users. So
you can make friends with people on the app, and
then you could share your precise location with that person,
kind of a shorthand way of saying here I am.
And it could facilitate stuff like real world meetups. Like
imagine that you're heading to a concert venue and it's

(04:27):
a big venue. There's a lot of different entrances and stuff.
You get there, you're going to meet up with your friends.
You use this app to say this is specifically where
I am, so that you can find each other. That's
kind of a use case. The Loopt team applied to
the Y Combinator accelerator to become part of its program.
So let's talk about Y Combinator for a moment. It is,

(04:50):
as I said, a startup accelerator organization. So Y Combinator's
purpose is to provide early funding to promising startup
ideas in order for them to start to get off
the ground. So it's an early investment in a
startup so that it can at least get a chance
to mature into an actual business. Now, in return for

(05:14):
this early investment, Y Combinator takes a small percentage of
ownership in the startup. So let's say that Y Combinator
provides you know, a fairly modest sum in the early days,
Like it was a lot of money, don't get me wrong,
Like maybe like one hundred thousand dollars or maybe one
hundred and twenty thousand dollars. That's a lot of money,

(05:35):
but it's a tiny amount when you think about what
a company needs to actually run. So this is really
just to get a startup to go from idea to
something slightly more, you know, coherent. But in return, Y
Combinator gets, like, you know, seven percent ownership of that startup. Now,
let's say that startup is something like Dropbox, and then

(05:58):
years down the road it's worth you know, more than
ten billion dollars. Well, that becomes a heck of a
return on investment. Right, you can have a huge profit
as this startup accelerator, even if just a few of
the startups really hit it big. Ideally, you want them
all to hit big and then you get huge payouts
down the line, and you become an important part of

(06:22):
the whole tech startup ecosystem, which is exactly what Y
Combinator set out to do. Now, Y Combinator launched in
two thousand and five, and that was the same year that
Loopt would become part of its inaugural class of startups, essentially. Now,
not all startups make it, obviously. Loopt at least appeared
to do well initially, at least on paper. Altman and

(06:44):
his colleagues secured a couple of major rounds of investment funding,
Series A and Series B. The company's valuation hit more
than one hundred and seventy million dollars. But they were
running into a tiny little problem. They had developed this app,
and they couldn't convince people to use it. They were
all thinking that, you know, Loopt was going to be

(07:06):
this really useful and popular tool. Everyone was going to
download it. But turns out the general public didn't seem
to agree with that, and so in twenty twelve, the
Loopt team accepted an offer from the company Green Dot,
and they sold Loopt for around forty-three million dollars.
That's also a lot of money, but it did not

(07:28):
cover the amount of money that venture capitalists had invested
into Loopt, so it was a negative return for investors.
You know, sometimes you bet on the ponies and you lose.
But Altman walked away with around five million bucks, so
it was a pretty decent return for him, even though
the app that he had worked on for several years

(07:49):
never really gained traction. Now, in the wake of this disappointment,
Altman founded a venture capital company of his own, and
it was called Hydrazine Capital. So he sunk most of
his personal wealth that he had earned from this sale
into this new venture capital company, and he also raised
millions more from other investors. He focused on investing in

(08:10):
companies that were in the Y Combinator program, and he
was largely successful in this. He was picking some really
good startups, and he was backing ones that would become
a big deal in the tech space a few years
down the road, and so he was seeing big returns
on those investments. Within just a few years, his venture
capital firm increased in value by an order of magnitude.

(08:33):
But Altman wasn't super happy doing this work. He didn't
find it rewarding on a personal level. Financially sure, but
on a personal level, he didn't really like the work,
so he then extricated himself from the venture capital company
to try and do something else. Now, around this same time,
the folks who were behind why Combinator were looking to

(08:56):
hand off the whole accelerator program to someone else to
lead it, and that someone else ended up being Sam Altman.
The guy had gone through the process with Loopt and
now he would be running it, and he reportedly agreed
without hesitation. He was eager to do this job. It
was something that he didn't even necessarily know he wanted
to do, but once he was offered it, he was

(09:18):
really enthusiastic about doing it. So he really ramped up
Y Combinator, and Altman began recruiting, you know, startups that
were focused on science and technology, so we're talking about
bleeding edge stuff like quantum computing or nuclear power or
AI that kind of stuff. Around this time, he started

(09:39):
to be part of a group that included Elon Musk,
and this group ostensibly wanted to develop artificial intelligence in
a responsible, accountable way while being extra careful not to
do anything foolish, in order to make safe artificial intelligence. You know,
they didn't want to go down the wrong path and

(10:01):
do something like accidentally unleash Skynet and terminators all over
the place, so this group would become open ai. Now,
the original open ai was a non profit organization, and
the whole idea was to help in this effort to
foster the development of artificial intelligence in a responsible and

(10:22):
safe way. It wasn't some for profit company pushing a
generative AI chatbot at that time, and it was not
yet a partner to massive companies like Microsoft. So Altman
was running Y Combinator, and he also began to work
with the open ai folks and tried to recruit various

(10:44):
leaders in artificial intelligence to join open ai. In twenty fifteen,
Sam Altman published a two part blog post about machine
intelligence and quote why you should fear it end quote,
And it starts off with the comforting sentence, and I
quote, development of superhuman machine intelligence (SMI) is probably the

(11:08):
greatest threat to the continued existence of humanity end quote.
Now Altman allows that other massive threats, like say, an
asteroid hitting the Earth, are possibly more likely to happen
than superhumanly intelligent machines run amok, but he also
points out that a lot of the other threats that

(11:30):
we think of, like supervolcanoes and the climate crisis, might
end up having a massive impact on the human population,
but probably wouldn't wipe out humans in totality.
But superhuman machine intelligence, he argued, did have that potential,

(11:51):
and that this is why he thought of it as
being the most important or perhaps most dangerous threat. Altman
goes on in those blog posts to point out that
SMI doesn't have to be malevolent to be a threat.
Just setting an SMI to complete a task like trying
to manage resources could end up causing massive human harm.

(12:13):
The SMI might determine that the biggest cause of resource
depletion is the human race, So presto, you get rid
of the people, and now you don't have to worry
about these resources running out anymore. Now that's an oversimplification,
but you get the idea. Altman's point is if you
don't develop artificial intelligence in a way that is safe,

(12:34):
you can get terrible consequences, whether that was your goal
or otherwise. And of course there are malicious ways to
use AI. Right You could develop AI in an effort
to try and come up with new biological weapons. For example,
that's often a scenario that's cited by concerned critics of
artificial intelligence. That's certainly something that could potentially happen. So

(12:58):
again Altman saying, well, you need to have the right
team responsible to develop AI in a way that is
most likely to benefit people and to protect people from
malicious or badly designed AI. So Altman makes an argument
that machine intelligence could hit an inflection point once recursive

(13:19):
self improvement becomes a real possibility. That means, if we
get to the point where we can create machines that
are smart enough to reprogram themselves, and to reprogram themselves
in a way that is better than what humans could do,
that is, to program these machines at a higher than human

(13:41):
level of capability, a superhuman capability, if you will, then
machines suddenly engage in self improvement and can do so
at increasingly shorter intervals. They get better at doing the
thing that they're doing, so they get better at improving themselves,
and they improve themselves over and over, and this becomes
a version of the singularity, which is a moment where

(14:03):
change is so sudden and it's happening all the time
that effectively it becomes impossible to even describe the present.
Everything will change and continue to change at a rate
that's beyond our ability to describe. Altman says, we might
be creeping toward that now, and maybe we're creeping toward
it at a rate that's just impossible for us to

(14:23):
notice because it's so gradual. That makes it really tricky,
because it could be that it goes from it's happening
so slowly that we can't notice it, to it's happening
so quickly that we are unable to describe it, with
no point in the middle where we can say, wait
a second.
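To make that compounding idea concrete, here is a toy numerical sketch of my own; it is not from Altman's posts, and the specific numbers (a 1.5x capability gain per cycle, each cycle finishing 30 percent faster than the last) are arbitrary assumptions chosen purely for illustration. The point is just that when each improvement cycle also shortens the next one, the cycle times form a shrinking series, so the total elapsed time stays bounded while capability keeps multiplying.

```python
# Toy model of recursive self-improvement (an illustration only, not from
# Altman's blog post): each cycle multiplies capability by a fixed factor,
# and each cycle takes less time than the one before it.

def takeoff(cycles=20, capability=1.0, gain=1.5,
            cycle_time=12.0, speedup=0.7):
    """Assumed parameters: each cycle multiplies capability by `gain` and
    shrinks the next cycle's duration (in months) by `speedup`."""
    elapsed = 0.0
    for i in range(1, cycles + 1):
        elapsed += cycle_time   # time spent on this improvement cycle
        capability *= gain      # the system gets better at improving itself
        cycle_time *= speedup   # ...so the next cycle finishes sooner
        print(f"cycle {i:2d}: capability x{capability:10.1f} "
              f"after {elapsed:5.1f} months")

takeoff()
# The elapsed time converges toward 12 / (1 - 0.7) = 40 months, while the
# capability column keeps multiplying: slow and easy to miss early on, then
# changing faster than any single snapshot can meaningfully describe.
```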

(14:45):
So in part two of his blog posts, Altman makes a clear argument. He says, quote, the US
government and all other governments should regulate the development of SMI.
In an ideal world, regulation would slow down the bad
guys and speed up the good guys. It seems like
what happens with the first SMI to be developed will
be very important end quote. Essentially, what Altman is arguing

(15:08):
here is that if ethical researchers develop a superhuman machine
intelligence first, they can employ that SMI to prevent the
development or deployment of malevolent or poorly built SMIs. So
we unleash our good guy Superman against their bad guy
General Zod, or, you know, whichever superhero and supervillain pairing

(15:31):
you happen to like. Interestingly, this is going to come
back again when we talk about Altman and his appearances
around the world while talking about the potential for AI regulations.
Before we dive any further into this, let's take a
quick break to thank our sponsors and we'll be right back.

(15:56):
So Altman went so far in his blog posts as
to say that he thinks, generally speaking, that tech is often overregulated,
but on the flip side, he doesn't want to live
in a world that has no regulation at all. In
some cases, you can see regulation as a necessary evil
that maybe it does slow down innovation or it has

(16:17):
unintended consequences, but in the absence of regulation you can
have some really poorly thought out deployments that can cause
a lot of harm. From twenty fifteen to twenty eighteen,
Open Ai operated as a nonprofit organization. The organization championed
the open part of its name, claiming that it would

(16:39):
freely share research and its patents with AI researchers all
around the world, all in an effort to ensure safety
in AI development. Greg Brockman, one of the co founders,
identified a short list of top AI researchers, and the
organization as a whole began to recruit several of them
to join open ai as the first employees of the organization.

(17:03):
The talent helped attract more talent. Some folks said they
actually joined open ai because it was where you could
work on really exciting research with the most brilliant and
talented people in the discipline, even though it would mean
you wouldn't be making as much money there as you
could somewhere else. Even high paid individuals at companies like

(17:24):
Google found themselves switching jobs for the chance to work
on something that they saw as important and challenging and
potentially critical to the survival of humanity. One of those
people who would also be listed as a co founder
of OpenAI was Ilya Sutskever, who would become chief

(17:44):
Scientist at open ai and would join the board of directors.
Musk reportedly played a critical role in recruiting Sutskever over
to OpenAI. Like, it went back and forth between
OpenAI and Google, which really wanted to hold on
to him, and reportedly Musk was a big reason
why Sutskever eventually moved over to OpenAI, and also

(18:05):
Ilya Sutskever is one of the people who would ultimately
be part of the decision making group that fired Sam
Altman this past weekend. Anyway, we're up to twenty eighteen,
and behind the scenes there was drama a-brewing, and
much of it was in the cauldron known as Elon Musk.
Not a big surprise there, right, because Elon Musk is

(18:27):
kind of a magnet for drama in the tech sphere
and the business sector. So Musk was on the board
of directors for open Ai, but in twenty eighteen he
left open ai entirely, and the official story was that
Musk chose to step down because of a potential conflict
of interest because there he was on the board of
directors for an organization working on artificial intelligence, but he

(18:51):
also was CEO of Tesla, a car company that was
pushing hard to develop and deploy autonomous driving capabilities to
the market, and autonomous driving is of course a subset
of artificial intelligence, so stepping down was the responsible thing
to do because of this potential conflict of interest between
the two companies. There was, however, more to his decision

(19:15):
than just that. So, according to Business Insider, Musk was
not happy with OpenAI's progress. He compared it negatively
to Google. He was saying Google is spending huge amounts
of money and is getting ahead in artificial intelligence research,
and Musk argued that Google, and he specifically

(19:37):
targeted Larry Page in this criticism, was not paying any
attention to safety, that safety was not a factor when
it came to Google's approach to artificial intelligence. And so
that was one of the things. He also was
critical of OpenAI, saying you're not doing enough,
and he was kind of pointing at Sam Altman as

(19:59):
the reason for that, that Altman's leadership was the reason
why open ai was lagging behind. So Musk then reportedly
went to other co founders of open ai, including Sam Altman,
and essentially he said, I want to run open ai,
and he was told in no uncertain terms that this

(20:21):
would not happen, and so, again, according to Business Insider,
Musk decided to take his ball and leave. His ball
also included a sizable investment or donation to open ai,
so when he left, he left with a whole bunch
of money that otherwise was going to go to the
organization and didn't. Musk would later say he disagreed with

(20:42):
the direction of open ai and that the company wasn't
nearly as open as its name would suggest. That last
criticism happened after OpenAI created a for profit
company in twenty nineteen. Musk actually leveled that lack of
openness critique at OpenAI around twenty twenty. Musk

(21:03):
also was founding his own ai research organization and would
occasionally throw shade at open Ai and Sam Altman. And
I am not an Elon Musk fan. Most of y'all
know this. I'm not a huge fan of Elon Musk. However,
at least some of the criticisms he had toward open
Ai I actually agree with, or at least I think

(21:23):
they were true, like the fact that open ai was
becoming less open. I think that criticism has merit. Meanwhile,
open Ai was in a pretty tough position because, as
it turns out, artificial intelligence research is expensive, so you
need access to a whole lot of compute power and
that's not cheap, and then you also need to have

(21:45):
the money to attract the best talent, especially if your
goal is to be the first to develop superhuman machine
intelligence that is ethically sound. Like, if that's your goal
and you need to outpace everybody else who's also working
on developing superhuman machine intelligence, you got to spend the

(22:08):
big bucks to get the top of the class to
come over to your organization. And a nonprofit organization is
just not the fastest way to gather huge amounts of
money needed to fund research and operations. It would be
way easier if you could get investors to pour money in,
but investors want a return. Meanwhile, a nonprofit is a

(22:29):
place where you donate money. You're not expecting return on
your donation. It's not an investment. This is what led
to the decision to create a for profit arm of
open ai, which in turn would generate money that could
be theoretically at least used by the nonprofit part of
open ai to further the original organization's goals and mission.

(22:52):
So the result was OpenAI LP, which OpenAI
called a capped profit company. So what the heck is
a capped profit company? That's actually a really good question,
because I've found two somewhat conflicting answers from various sources.
Like they lay it out in two different ways that

(23:13):
are similar but distinct. So I'm going to give you
both of the ways that it has been explained in
various sources, because I'm gonna be honest with y'all. I'm
not a business person. Despite the fact that I have
hosted a business podcast in the past, I'm not really
a business person, so I can't pretend like I have
a firm grip on this. And also, open ai was

(23:34):
kind of charting new territory while they were announcing this.
But here are the two ways that it is frequently described.
So version number one means that open ai would accept
investments from venture capitalists and that would pay out returns
on those investments from profits, but only up to a

(23:54):
certain amount. So in open AI's case, the early backers,
the people who first poured money in to open ai,
would have a cap of one hundred times their initial investment.
So let's take a very simple scenario. Let's say that
some kids in your neighborhood want to start a lemonade
stand and you invest one dollar into their lemonade stand. Now,

(24:16):
let's say the kids running the stand turn out to
be business geniuses and your dollar investment helps lead that
stand into making tens of thousands of dollars in profits, Like,
even after the expenses, these kids are raking in tens
of thousands of dollars. However, when you invested, you did

(24:37):
so knowing there was a one hundred time cap on returns,
So that means the most you're ever going to get
from the stand is one hundred dollars. It's a one
hundred times return on your investment. Meanwhile, those snot nosed
kids who never could have made the stand without your
dollar are pocketing thousands of bucks and they're franchising across
the town, those rotten kids. Anyway, that's one version of

(25:01):
how the capped profit structure works: investors can
make a return, but only up to a certain amount,
the early backers being one hundred times whatever they put in,
and that means if they put in ten million dollars,
they could potentially make as much as a billion dollars
in returns if open ai profited that much. So you
know it does add up. However, there is a second

(25:25):
explanation for capped profit that, like I said, is slightly different.
So in this version, investors would pour money into open
ai and open ai would hold back on distributing any
returns on profits until those profits reached at least one
hundred times the investments that had been made. So using

(25:47):
our lemonade stand example, you've invested one dollar in the
lemonade stand business; you would not see a return on
that investment until the lemonade stand made at least one
hundred dollars in profit. At that point you could start
to receive returns. And a few explanations kind of combine
the first version I mentioned with this version, and frankly,

(26:10):
just to be transparent, this kind of confuses me. So
for example, Time, at time dot com, uses the second
explanation, right, that you don't get any returns until the
profits reach one hundred times whatever your investment level was,
but then includes the phrase, quote, anything above that, that being

(26:30):
the one hundred times profit, would be donated back to
the nonprofit. So if that's the case, it means you
wouldn't get a return until the profits hit that one
hundred times your investment, and then anything over one hundred
times your investment would go to the nonprofit as,
in effect, a donation, which means

(26:53):
I guess you would be limited to one hundred times.
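Just to pin down the arithmetic of those two readings, here is a rough sketch. This is my own illustration of the two interpretations described above, not OpenAI LP's actual legal terms, which are more complicated and have not been published in full; the dollar figures are simply the lemonade stand numbers.

```python
# Two readings of a "capped profit" structure, sketched as simple payout
# functions. Illustrative assumptions only, not OpenAI LP's actual terms.

CAP_MULTIPLE = 100  # early backers were reportedly capped at 100x their money

def payout_reading_one(investment, profit_share):
    """Reading one: the investor collects returns as profits come in, but
    never more than 100x the investment; the excess goes to the nonprofit."""
    to_investor = min(profit_share, CAP_MULTIPLE * investment)
    to_nonprofit = profit_share - to_investor
    return to_investor, to_nonprofit

def payout_reading_two(investment, profit_share):
    """Reading two: nothing is distributed until profits reach 100x the
    investment; only past that threshold do returns start flowing (the
    reporting doesn't spell out how much, so this just reports the overage)."""
    threshold = CAP_MULTIPLE * investment
    return max(profit_share - threshold, 0)

# The lemonade stand example: a $1 investment against $40,000 in profits.
print(payout_reading_one(1, 40_000))  # (100, 39900): $100 max, rest to nonprofit
print(payout_reading_two(1, 40_000))  # 39900: returns only past the $100 mark
```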
I don't know, like maybe it's a combination of these two,
but it's just been poorly reported in various places. It
just it seems a little confusing to me, and it
also seems like it'd be confusing from an investor standpoint
of whether or not it would even make sense to
pour money into this. I think a lot of reporting

(27:16):
around the capped profit nature is just incomplete, and that's
the problem: there's just a lack
of good explanations of this. And also, I mean, I'm dense,
so that's the other part of the problem. But anyway,
however you frame the context of a capped profit company,
the structure would give open ai the chance to court

(27:38):
investors and to hold a whole bunch of money that
they could then pour into research and recruiting. A one
hundred times factor is pretty darn big, And arguably you
could say this was necessary because while open ai had
this noble mission, the truth is you still had massive
companies like Amazon and Google and Meta. These companies have

(28:03):
really deep pockets and a desire to invest in AI research,
and if you didn't do something, there was just no way,
no matter how noble the cause you were going to
keep up with these companies. So those were kind of
the decision making factors that drove OpenAI to launch
this for profit arm of the organization, and that didn't

(28:28):
make everybody happy. In fact, it was controversial, to put
it lightly. There were critics who were asking if it
would even be possible for open ai to continue to
pursue its mission of ethical AI development while also operating
a commercial business that was profiting off of artificial intelligence development.

(28:50):
They argued that these two things could not be in alignment, and
that ultimately OpenAI would not be able
to achieve its mission. Complicating matters was that OpenAI
began to back away from that whole open part of
the philosophy, which again Elon Musk would criticize. In twenty twenty,
OpenAI cited a concern that malicious developers might

(29:12):
take the information the research being shared from open ai
and use that information to develop nasty and harmful applications,
or at least poorly designed ones. So now they're saying,
you know, our knowledge is dangerous. So I know, we
said we were going to share it so that we
could benefit humanity, but now we're scared that if we

(29:33):
share it people will misuse it, so we're not going
to do that anymore. So again, like Musk was saying,
it was no longer being as open as the name
had implied. So yeah, his criticisms had weight. Open ai
really was moving away from an unassailable nonprofit status and
also was getting less open in the process. Sure, the

(29:55):
folks that open ai had explanations for why they were
doing this, but it didn't change the fact that the
open Ai of twenty nineteen was fundamentally different than the
organization that had started in twenty fifteen. All Right, we're
going to take another break. When we come back, we'll
talk more about what happened in the following years that

(30:16):
then led to the situation that we saw unfold this
past weekend. But first, let's thank our sponsors. Okay, So

(30:37):
we left off in around twenty nineteen. We're gonna skip
ahead a few years. So the battle for AI talent
was a constant one in the tech space, but the
world at large remained pretty much oblivious to open ai
and the folks who were involved in that company. Open
ai just wasn't a name that your average person was

(30:57):
aware of. But that would change in November twenty twenty two.
That's when OpenAI introduced the chatbot called ChatGPT. This
chatbot drew on a large language model, the GPT model,
which had gone through a couple of iterations, and it
would use that language model to generate responses to queries

(31:20):
and input. The responses often seemed like a human being
had actually written it. It didn't come across as your
typical AI generated text. It seemed more natural than that.
It also seemed like it was a really smart person
who wrote the response, someone who appeared to be an
authority on whatever the subject was. Like any subject you

(31:41):
could think of, you could put into this thing, and
chat gpt would generate a response that seemed to be
pretty definitive. And there were limitations on chat GPT's expertise.
OpenAI announced that ChatGPT really only had
access to information leading up to September twenty twenty one,
and if you asked it to explain anything that happened

(32:03):
after September twenty twenty one, you'd be out of luck
because chat GPT wouldn't have access to that information. But
right out of the gate, chat gpt seemed incredible. Now
over the following weeks after its introduction, we would start
to see various critics and skeptics raise concerns about generative

(32:24):
AI in general, really and chat GPT in particular. Now,
some of these conversations had already started because there were
already text to image generative AI tools out there that
had prompted some concern. Chat GPT created new discussions about
how generative AI could make misinformation, how it could engage in plagiarism,

(32:48):
it could slander someone, or it could just produce the
wrong response due to something that the AI field calls hallucinations.
Sometimes they call it confabulations instead. So this is when
an AI model fabricates an answer for whatever reason. Like
one reason that AI might just make something up is
that it doesn't have access to relevant information that relates

(33:12):
to the query. So instead the chatbot produces an answer that,
from a linguistic perspective, is statistically relevant. In other words,
it's creating sentences that are linguistically correct but factually incorrect
because it doesn't know the difference, and it's just trying
to provide a response to the question that was asked

(33:34):
of it. Now, this meant that sometimes you could ask
ChatGPT to solve a problem for you, and the
response you would get would sound authoritative and sound like
it's correct, but in fact it was entirely wrong. And
following these criticisms came concerns from lawmakers who began to
ask the very same questions that open AI was intended

(33:56):
to address when folks first got together back in twenty
fifteen to create it in the first place. Now, as
we all know, the law trails behind technological development, sometimes
by years. It takes time to make laws, and then
it takes time to approve them and to pass them
into law. If you rush, you're likely to make problems worse,

(34:18):
or at the very least, you're likely to complicate matters
so that it becomes very difficult to comply with the
laws you've written. So Altman, who had already made his
philosophy around regulation known in that blog post from twenty fifteen,
began to meet with various officials all around the world. Now,
the idea was that Sam Altman would help legislators understand

(34:41):
the potential risks of artificial intelligence and presumably create the
most responsible approach to regulations to ensure safety. But skeptics
were worried that what Sam Altman was actually doing was
just stacking the deck to favor OpenAI over
other AI companies. You see, Altman had long held this

(35:04):
position that it's really important for an ethical group of
researchers to beat everybody else to the punch to develop
that superhuman machine intelligence in order to prevent catastrophe. That
if you don't do that, you're essentially sealing your own doom.
And of course Altman viewed open AI as that ethical group.

(35:25):
It is the group that's dedicated to creating ethical, safe AI,
and Altman felt that regulations could help mitigate the risk
of bad actors or inept creators making dangerous machine intelligence.
And so these skeptics were arguing Altman's position was that
regulations should really hold back everybody else and then favor

(35:50):
open AI and allow it to move forward towards this
goal of creating benign, protective machine intelligence. So in other words, yeah,
the AI field needs rules, but more importantly, those regulations
need to stop everyone other than OpenAI. That
was how the skeptics saw Sam Altman's position as he
met with all these different leaders, and in fact we

(36:12):
saw proposals for putting AI research on hold for half
a year. In fact, Elon Musk argued for this. Now,
the skeptics came out again and said, well, I see
the need to kind of pump the brakes so that
we make sure that all this work in AI isn't
going to cause enormous harm in the future. But of

(36:35):
course Musk is arguing for this because he wants to
create his own AI research division. And if you force
the industry as a whole to hold off for six
months on moving forward, it would give Musk the opportunity
to start to build out the foundations for his own
AI division while not losing more ground to competitors like

(36:57):
open Ai. It's a cutthroat world out in that AI field, y'all. Now.
In January of this year, in twenty twenty three, news
broke that Microsoft was investing around ten billion with a
B dollars in open Ai. Microsoft had already invested billions
in open Ai over the previous years, in twenty nineteen

(37:18):
and twenty twenty one, specifically, so this was seen as
Microsoft's effort to catch up to rivals like Google and Amazon,
which had already been spending their own billions in AI research.
The relationship between Microsoft and open Ai would manifest in
lots of different ways, including in Microsoft incorporating chat GPT

(37:38):
into its search feature in Bing. For a long time,
Microsoft has been pushing Bing and Edge toward becoming more
important in the market, but has come up time and
again up against the brick wall that is Google. In August,
OpenAI announced an enterprise version of ChatGPT,

(38:00):
and then in September, open Ai allowed the chatbot to
access information on the Internet for the first time. So
now that restriction where it could only access information up
to September twenty twenty one, had been lifted. Now it
could access real time information around the world. On the
political side, lawmakers around the world, particularly in the United

(38:23):
States and in the European Union, began to grow more
concerned about AI and its possible uses and the risks
associated with it. So more pressure was building on the
artificial intelligence discipline in general and open ai in particular,
because open ai was seen as sort of the leading
authority in artificial intelligence. ChatGPT had really captured a

(38:46):
lot of interest around the world, and at the same
time you had some people within open ai who were
really clinging onto the ideals of the original nonprofit organization
and who had growing concerns about where open ai was
headed because of the for profit arm of the company.
So similar to Elon Musk, there were people, high level

(39:09):
people in open ai who were starting to feel uncomfortable
with where the organization was going. Now, just a couple
of weeks ago, open ai held its first developer conference.
Sam Altman took the stage on November sixth, so not
long ago, and he took the stage as CEO of
OpenAI, and he listed off some pretty incredible statistics

(39:30):
like the fact that open ai can count more than
ninety percent of Fortune five hundred companies as customers. That's incredible.
It shows how influential open ai is in this field.
Open AI's partnership with Microsoft also played a huge part
in Sam Altman's presentation. But again, behind the scenes, things

(39:54):
were far from hunky-dory. You had OpenAI doing
gangbusters on a business level, but again some of the
scientists who were part of the board of directors were
growing increasingly concerned that the company was guilty of the
very behaviors that open Ai was meant to head off.
They worried that OpenAI was developing and deploying tools without putting

(40:15):
in appropriate safeguards or considering the consequences of unleashing these
tools, and that OpenAI had become more about monetizing technologies
and innovation and less about ethical development of artificial intelligence.
There were also some personal tensions that were growing between
Altman and other members of the board, such as Ilya Sutskever. So,

(40:39):
according to Time, Altman reduced Ilya's role in the company,
while Ilya worried that Altman was launching side projects that
would benefit from OpenAI's work but also not be
accountable to OpenAI. So, in other words, Ilya was
worried about this sort of conflict of interest that Altman

(40:59):
was going to end up pursuing some developments of artificial
intelligence that were not governed by the board of open
ai and thus not constrained by these ethical concerns. So
this really boiled over. During the developer conference, Altman made
several announcements that Ilya reportedly objected to, including the unveiling

(41:21):
of a customizable version of chat GPT that, in theory,
could run autonomously once it was told what tasks it
was supposed to handle. So the critics on the board
outnumbered Altman's supporters. You had essentially two camps. You had
the people who thought Altman was in the right, including
Altman and Brockman, and then you had other members who

(41:42):
were more concerned. And last Friday, that's when the board
decided it was time to fire Sam Altman. They saw
Altman as being too reckless, not nearly cautious enough, and
despite open AI's market performance, which was incredible, they felt
the company was moving in the wrong direction and that
it needed new leadership as a result, so they decided

(42:04):
they had to fire Sam Altman as CEO. Now. Reportedly,
Altman learned of his fate in a Zoom meeting. Ilya
was the one who told him that he had to
go to the Zoom meeting, and it happened shortly before
the board announced their decision publicly. Greg Brockman, co-founder
and president of OpenAI, was not told of this meeting,
and in fact, he found out about Sam Altman getting

(42:26):
fired just shortly before open Ai released the news to
the public. Similarly, Satya Nadella, the CEO of Microsoft, also
found out essentially when the news got released to the public,
and this set off a metaphorical explosion in the tech world. First,
open AI's board didn't exactly have a real good transition

(42:50):
plan in place, to handle this, nor did it seem
to really comprehend the extent of the fallout this decision
would have, particularly in the way they did it, where
they did not consult with Microsoft, a partner that was
going to invest ten billion dollars into the company, or
talk it over with the other executives before making the decision.

(43:14):
So even the folks who agreed that Altman was perhaps
not being cautious enough felt that the board's move was
poorly thought out and even more poorly executed. It's really
hard to argue against that. Like, even if you feel
that Sam Altman was absolutely leading open Ai in the
wrong way the wrong direction, you can also say that

(43:36):
the way the board handled this ultimately was disastrous. So
Greg Brockman announced that he was leaving open Ai once
the news went public that Sam Altman had been fired,
So open Ai would see both its CEO and its
president leave the company in one day. Some members of

(43:58):
the board, like Ilya, expressed regret for having supported the
measure to fire Altman, saying later, in effect, I
kind of wish we hadn't done that. Ilya Sutskever would go
on to sign an open letter saying as much, and
even threatened to leave OpenAI along with more
than five hundred other staff members over this decision. Just

(44:19):
a big old whoopsie, right, so the board found itself
in extremely hot water. They had done the classy thing
of waiting until a Friday to announce a massive decision.
My guess is this was probably in an effort to
take at least some of the sting out of the
news cycle, the idea of being like, well, if it's
on a Friday afternoon, no one's going to pay attention

(44:42):
because we're going into the weekend. By the time it
comes around to Monday, things are going to cool off
a bit. Plus here in the United States, we're going
into a holiday week with Thanksgiving, so there won't be
a whole lot of opportunity to bring a whole lot
of attention to this, and we'll be able to get
away relatively unscathed. That is not how it turned out, however. Instead,

(45:02):
the news media went bonkers with this decision, and how
could you not? ChatGPT and OpenAI had
been the center of so many headlines throughout the whole year.
Of course, this was going to get a lot of attention,
and so while they were hoping that they could get
away with this without it being too painful, immediately investors

(45:25):
started to freak out about this change. A bunch of
them essentially indicated that they would pull out of open
ai and they would back whatever Altman chose to do next.
So if Altman launched his own competing artificial intelligence company,
they were going to back Altman, not open Ai. There
were hints that Microsoft could potentially even do the same,
and that's ten billion dollars. Plus, you had the general

(45:49):
staff who felt that this was the wrong move, and
they felt that this was a terrible mistake, and they
were threatening a mass walkout of the company. It was
pretty much the worst reaction you could expect from a
big announcement. So it did not take very long for
news to break that the board was trying hard to

(46:10):
take back what it had done and to try and
convince Altman and Brockman to return to the company, but
by then the damage had been done. Altman was not
interested in coming back. Specifically, he said unless the board
stepped down, he would not come back, and that would
become a real sticking point. Mira Murati, the chief technology

(46:34):
officer for OpenAI, would serve as interim CEO for
about two whole days. Murati reportedly was the person who
actually reached out to Altman to try and convince him
to come back to OpenAI. But while Altman did
return to OpenAI's headquarters to negotiate, and he said
it was the first and only time he would ever
have a guest badge at OpenAI, those negotiations didn't

(46:58):
really go very far, so the board decided on a
new interim CEO, perhaps because of a perception that Murati
was maybe a bit too pro-Altman and they needed
to get someone who would be more in their pocket.
They chose the former CEO of Twitch, Emmett Shear, who
doesn't have any experience with artificial intelligence. By this time,

(47:21):
the board of directors consisted of just four people, people who
had been pressured to step down but refused to do so.
Thus Altman did not come back to the company. Altman
and Brockman, meanwhile, weren't exactly on the job market for
very long, because Microsoft swiftly hired both of them to
head up a new advanced AI research team within Microsoft.

(47:43):
Nadella also said Microsoft remains committed to supporting OpenAI.
Most of those ten billion dollars have
not made their way to OpenAI yet; OpenAI
has received just a fraction of that ten billion dollars,
but the hint is that that money will continue to
go to OpenAI, that Microsoft is not backing out of

(48:05):
that agreement, but Altman is going to end up working
directly with Microsoft and will have the title of CEO
for whatever this Advanced AI part of Microsoft ends up
being called. A few other prominent executives and scientists from
open Ai are apparently moving over to Microsoft as well,
so there are already other defections from open Ai to Microsoft. Meanwhile,

(48:31):
back at open Ai, a lot of folks who work
within the company have been posting their support for Altman
on platforms like X, so there's a concern that there's
going to be a mass walkout and resignation following this move.
Certainly other companies like Microsoft, Google, Amazon, and Meta would
be eager to get hold of some of that talent,

(48:52):
and it's entirely possible that the board of open Ai,
in a move made out of concern for the company's
safety and humanity's safety, may have actually doomed the organization entirely.
I'm not sure hiring a former CEO of Twitch is
going to be enough to prevent disaster. Now, all that
being said, open ai is in an incredible position. A

(49:12):
recent valuation placed the company at around eighty-six billion dollars.
Microsoft says it is committed to this ongoing relationship with
OpenAI. ChatGPT is still an incredibly important tool
in the tech space, particularly with the introduction of the
enterprise version of ChatGPT. So could OpenAI just

(49:33):
be too big to fail? Maybe? I think this monumental
misstep will test that hypothesis. I do not know how
it's all going to shake out. From a business perspective,
I would say open ai is in a really strong position.
But then if the organization suddenly sees a mass defection
from its researchers and staff, that could very well change.

(49:55):
So we'll have to see. And of course, now we're
on a holiday week, so it might be another week
before we start getting answers. But yeah, that's kind
of an update on what went down this past weekend
and why it happened, Like all those different factors that
built up to this big explosion of activity. Now you
have a bit more background as to what was going on.

(50:17):
As to whose side I'm on, I don't know. I
do think that Altman's leadership was not always the best
as far as trying to achieve the goal of ethical AI.
It was almost like engaging
in a necessary evil kind of thing, but I'm not
sure that the evil is really necessary. But what do

(50:40):
I know. I know that AI is very hard and
very expensive, and I don't know how you get the
money to do it the right way and still beat
out all the companies that don't have those restrictions on them.
So I don't know. I just know that it's a mess.
But now it's a mess we can put behind us
until we see what happens next. I hope you are

(51:02):
all well, and I'll talk to you again really soon.
TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio,
visit the iHeartRadio app, Apple Podcasts, or wherever you listen
to your favorite shows.
