
May 16, 2025 • 23 mins

Dario Amodei is navigating a tricky transition from academic to CEO of a $61 billion startup. By Shirin Ghaffary



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Can Anthropic win the AI race without losing its soul?
Dario Amodei is navigating a tricky transition from academic to
CEO of a sixty one billion dollar startup, by
Shirin Ghaffary, read aloud by Mark Leydorf. Anthropic Chief Executive Officer
Dario Amodei received a message on Slack one day in

(00:23):
mid February. Senior members of his company's safety team were
concerned that, without the right safeguards, the artificial intelligence model
they were about to release to the public could be used
to help create bioweapons. This startling revelation came at a
time when pressure was already ratcheting up on Amodei. The
model in question, Claude 3.7 Sonnet, was only days

(00:45):
away from release, as Anthropic sprinted to keep pace with
competitors who were rushing their own models to market. At
the same time, the forty two year old Amodei,
a bespectacled, ringlet haired man who'd spent the early years
of his career in academic labs carefully extracting the
eyeballs of dead salamanders, was in the process of closing
a multibillion dollar investment round valuing Anthropic at more

(01:08):
than sixty billion dollars. It was hardly an opportune time
to tap the brakes, but that's effectively what Amodei had
promised to do when he helped start Anthropic four years earlier.
More than most other leaders in the AI industry, Amodei
has argued that the technology he's building comes with significant risks.
At the time, a group of Anthropic staffers known as

(01:29):
the Frontier Red Team were at a security conference in
Santa Cruz, California. They holed up in a hotel room
to deal with the issue, along with outside experts from
biosecurity consulting company Gryphon Scientific who were also attending the event.
With Amodei participating via Google Meet, the group ran the
model through a series of tests. The Anthropic staffers told

(01:50):
him what most bosses in this situation would presumably want
to hear: they'd pull an all nighter or two to
assess the issue and stick to the release schedule. They
were saying, we can stay up all night, we can
get this done in time, we can get no sleep
and do this in seventy two hours, Amodei says
at Anthropic's headquarters in San Francisco. I'm like, if you

(02:10):
do that, you will not do a good job, he says.
He told them to take the time to test the
model more rigorously. Anticipating a moment like this, Anthropic had
built a framework called the Responsible Scaling Policy, loosely modeled
after the US government's biosafety lab standards, to determine how
to handle the risks associated with increasingly advanced AI. As

(02:32):
long as its models stayed below a certain level, which
it called AI Safety Level 2, it was business as usual.
An ASL 2 system might have the ability to give
instructions on how to build a biological weapon, but not
reliably useful ones, or in any more detail than what's
available through a search engine. An ASL 3 system could
significantly help a user, particularly one with some basic technical knowledge,

(02:56):
actually create or deploy such a weapon. The models Anthropic
had released to that point were all at ASL 2
or lower. If one were to reach ASL 3,
Anthropic's internal guidelines require that the company increase its safeguards.
These actions could include hardening defenses so malicious actors couldn't
steal the code or trick the system into giving away

(03:18):
dangerous information. Until Anthropic implemented these advanced measures, it would
have to take interim steps, such as intentionally weakening the model,
blocking certain responses, or not releasing it at all. After
almost a week of work, the team determined that the
model wasn't as powerful as Amodei's staff had feared after all.
Anthropic released it a little later than expected, and

(03:39):
so far, at least, it hasn't led to the collapse
of human civilization. Amodei says the brief delay was painful
given the competitive pressures, but if Anthropic succeeds in building
technology as powerful as it says it plans to, there
are even more uncomfortable decisions ahead. Amodei is convinced that
AI is going to transform the world by creating a

(04:01):
country of geniuses in a data center. On the bright side,
this AI could cure cancer, but it might also cause
most of the world's population to lose their livelihoods. Also,
the technology that will cause this massive reordering of society
is coming as soon as next year, according to Amodei,
and almost certainly not after twenty thirty. It's almost an

(04:22):
abdication of our moral responsibility not to try to describe
in clear terms and as often as possible exactly what
is happening, he says. Anthropic was founded to usher in
this transformation in the most responsible way possible, but customers
are also beginning to pay real money for access to
its technology. As of April, Anthropic was on track to

(04:43):
generate two billion dollars in annual revenue, double the rate
from four months earlier. Anthropic says it's not profitable now
because of the enormous cost of training AI systems. Amodei
has said it could eventually cost as much as one
hundred billion dollars to train a cutting edge model. Customers
are almost certainly going to keep wanting more powerful AI.

(05:04):
Anthropic fully expects to hit ASL 3 soon,
perhaps imminently, and has already begun beefing up its safeguards
in anticipation. In recent years, the company has hired at
least a half dozen prominent researchers from OpenAI, some
of whom have criticized their previous employer for moving away
from its stated commitments to safety. Presumably they won't

(05:25):
stand by quietly if Anthropic tries to sidestep its own
commitments when the time comes. When that reckoning might arrive
remains a matter of debate. Like Google, Meta Platforms and
OpenAI, Anthropic has fallen behind on its projected timelines
for releasing new versions of its most costly, powerful
line of AI models. Skeptics question whether all the talk

(05:46):
about the dangers of AI is intended to make the
technology appear more powerful than it actually is. People who
worry about AI safety, meanwhile, say the market pressure to
build as quickly as possible could push companies into
irresponsible decisions. Those investors didn't give Anthropic fourteen billion dollars,
after all, to lose to OpenAI, DeepSeek or Meta,

(06:08):
and deciding to ignore commercial incentives stopped being an option
once it took all that money. You can't really fight
the market head on, Amodei acknowledges, but he also says
he can create what he describes as a race to
the top, where Anthropic pulls the entire AI industry along
by demonstrating how to build world changing AI without destroying

(06:28):
the world in the process. Amodei is a native San
Franciscan who's never considered himself a tech person. He grew
up in the Mission District before tech money transformed it.
The Amodeis were working class. His late father, who'd grown
up an orphan in Italy, worked as a leather craftsman
before chronic health issues forced him to stop when Amodei
was a child. He passed away when Amodei was a teenager.

(06:51):
Amodei's mother was a library project manager. Amodei's sister, Daniela Amodei,
also a co founder of Anthropic and the company's president,
remembers her brother being an unusually gifted child, particularly in
math and science. As a toddler, he would declare counting
days and count as high as he could. It'd be
like a whole day, she says. What kind of three

(07:13):
year old has that attention span? He started taking classes
at the University of California at Berkeley while still in
high school, before studying physics at the California Institute of
Technology for two years, then transferring to Stanford.
Daniela remembers Dario first getting interested in AI after reading
Ray Kurzweil's The Singularity Is Near: When Humans Transcend Biology,

(07:36):
which predicted that AI would reach human intelligence by twenty
twenty nine and that people would merge with machines by
twenty forty five. Amodei got his undergraduate degree in two
thousand and six, then switched focus to the neurological and
biological applications of physics. For his graduate work, he moved
east to pursue a doctorate in biophysics at Princeton. His

(07:57):
research involved studying the neural structures found in the ganglion
cells of amphibians, which is how he found himself slicing
up salamanders to examine their retinas. I wasn't thrilled by
the animal rights implications of that, says Amodei, who's been
a vegetarian since childhood. He makes an exception for shrimp
and other invertebrates. But, he adds, I was a scientist.

(08:19):
I wanted to solve the problems of biology and human health.
The thing that actually got to Amodei wasn't the ethics
of laboratory life, but the pace. While he was struggling
through the drudgery of his day job, Amodei saw things
moving much faster in another attempt to uncover the essence
of intelligence: the development of artificial neural networks. So called
deep learning had fallen out of fashion among computer scientists,

(08:43):
but the field was starting to pick up around twenty twelve,
and Amodei was impressed by the advances researchers were making
using the technology to improve computer vision. I was like, wow,
this really works, Amodei remembers thinking. In twenty fourteen, Andrew Ng,
a professor in the computer science department at Stanford, where
Amodei had done postdoctoral research, recruited him to work on

(09:05):
AI at a unit he was running for the Chinese
tech company Baidu. He jumped at the opportunity. Amodei
ended up spending a year at Baidu, then another at
Google Brain, an AI focused research team within Google, where
he started thinking about the ethical considerations of AI's rapid progress.
In twenty sixteen, he published a well regarded paper called

(09:26):
Concrete Problems in AI Safety, outlining five key areas
where AI could cause unintended and harmful behavior. Although Google
was probably the best place a rising AI researcher could work
in the early twenty tens, a new nonprofit lab called
OpenAI seemed to align better with Amodei's interests.
He joined in twenty sixteen as the safety research lead.

(09:49):
He lived in a shared house in San Francisco's Glen
Park neighborhood with several roommates, including three other OpenAI
colleagues who would later become co founders of Anthropic. One
of them was Daniela. At the time, both Amodei
siblings ran in social circles affiliated with effective altruism, a
philosophy that emphasizes rational thinking as the most efficient way

(10:10):
to improve the world, which was popular among people interested
in AI safety. The movement fell out of favor after
one of its most prominent leaders, the crypto mogul Sam Bankman-Fried,
whose company was an Anthropic investor before selling its stake
during bankruptcy proceedings, was convicted of defrauding his investors. Amodei's
most substantial research contribution at OpenAI was developing the concept

(10:33):
of scaling laws, the idea that you can make fundamental
improvements to a neural network simply by increasing the model
size and adding more data and computing power. For much
of the history of computer science, the assumption was that
such breakthroughs would come primarily from designing ever better algorithms.
By helping to pioneer the bigger is better strategy, Amodei

(10:54):
played a key role in the rise of the large
language models that dominate the current AI boom. This earned
him a place of prominence within OpenAI and the
broader industry. Amodei's feelings of responsibility about AI weighed on him,
and over time he soured on OpenAI. In twenty twenty,
he and six OpenAI colleagues left to start Anthropic,

(11:16):
promising to build a more responsible AI lab. It was
easy for the gang to get along: they'd already worked
with one another, three of them had lived together, and
two were related. The defection remains a subject of intrigue
within Silicon Valley, which loves messy startup drama, especially when
the companies involved are some of the most valuable unicorns
of all time. Amodei remains vague about the subject, but

(11:39):
talks about losing confidence in OpenAI's leadership. I don't
think there was any one specific turning point. It was
just a realization over many years that we wanted to
operate in a different way, he says. We wanted to
work with people we trusted. At the time, Anthropic's prospects
seemed iffy at best, given OpenAI's access to
immense capital and its head start in building the actual models.

(12:03):
The doctrine a few years ago was Anthropic would not
be able to scale because it wouldn't be able to
raise the money, says Eric Schmidt, former Google CEO and
an early investor in the startup. Yet Anthropic has developed
into a serious rival with comparable tech and a growing
roster of paying customers in finance, pharmaceuticals, software development, and

(12:23):
other industries. It also makes a publicly available AI powered chatbot, Claude,
but is less focused than OpenAI on the consumer market.
Schmidt remembers making a twenty eighteen visit to Amodei and
his partner, Camilla Clark, now his wife, in the starter
apartment they lived in near the freeway in San Francisco.
Amodei was still at OpenAI then, but Schmidt was

(12:46):
impressed and ended up investing in Anthropic later. Schmidt was
dubious of Amodei's plan to run Anthropic as a public
benefit corporation, a type of for profit organization dedicated to
pursuing a public mission. When Schmidt urged Amodei to
establish it as a traditional startup, Amodei refused. Such
debates were common in Anthropic's early days. There was a

(13:07):
lot of discussion around: are we just going to be
philanthropically funded? Are we primarily focused on purely doing research
on safety? How much funding do you need? says Jared Kaplan,
a friend of Amodei's from graduate school who
became Anthropic's co founder and chief science officer. There were
some folks who were thinking we should be a nonprofit.
I think Dario and I both kind of thought that

(13:29):
that was probably not a good idea. We should kind
of keep our options open. Anthropic now seems well positioned
to be among the handful of winners to emerge from
the current AI boom, says Hemant Taneja, CEO of investment
firm General Catalyst, which backed Anthropic in its most recent
funding round. This is a company that probably has the
right things going for it to be one of those

(13:50):
that's going to matter in the end, he says. But
I have never written a check from GC this big,
with this much uncertainty, I will tell you that. We'll
be right back with Can Anthropic Win the AI Race
Without Losing Its Soul? Welcome back to Can Anthropic Win
the AI Race Without Losing Its Soul? Even as it

(14:14):
barrels ahead, Anthropic has cultivated a reputation for taking issues
such as safety and responsibility more seriously than the company
from which it sprung. OpenAI's brief twenty twenty
three ouster of Sam Altman, whom its board accused of
being not consistently candid, has been followed by persistent questions
about the company's integrity and commitment to its initial mission.

(14:36):
Amodei generally avoids direct criticism of his former employer, but
he and his company aren't above taking some thinly veiled shots.
Anthropic has paid for billboards around San Francisco with taglines
that read AI that you can trust and the one
without all the drama. Unlike other tech executives, including Altman,

(14:56):
Amodei has made little attempt to ingratiate himself with
the Trump administration, saying his message is the same now
as it was when Joe Biden was president. He refers
to a number of players in the industry who, by contrast,
say whatever to the party in power in an attempt
to curry political favor. You can tell that it's very unprincipled,
he says. This January, Amodei made his first trip to

(15:20):
the World Economic Forum in Davos, Switzerland, where he put
on a pinstriped suit and engaged in some thought leadership
and high stakes deal making. On the same day that
he gave a talk touching on DeepSeek and AI powered
healthcare at Bloomberg House Davos, he spent the better part
of an hour huddling with AIG CEO Peter Zaffino. Anthropic

(15:40):
walked away with a multiyear contract to help analyze
customer data during the insurance underwriting process. AIG says the
deal came out of an eighteen month pilot during which
Anthropic helped speed up that work by eight to ten times.
Zaffino says he picked Anthropic because of its specific focus
on trustworthiness and accuracy in citing specific data sources, especially

(16:02):
in the highly regulated industry of insurance. Zaffino says he
was impressed at what a quick study Amodei was. For
whatever Dario lacks in business experience, the algorithm in his
brain moves really fast, Zaffino says. He is able to
apply what he's learning and what we're talking about in
terms of what the business objective is. When the day's

(16:23):
work was done in Davos, he skipped the evening parties,
retreating instead to his hotel room to write an essay
about how DeepSeek highlighted the need for stronger export
controls on semiconductors. Anthropic's recent fundraising round has made him
a billionaire, and he often travels with a security detail,
but he also still lives in a rental house in

(16:43):
a suburb south of San Francisco, raising chickens in his yard.
He has committed to donating the vast majority of his
wealth to charitable causes. When he hosts a Bloomberg Businessweek
reporter at Anthropic's office in March, the pinstripes are nowhere
to be seen. Amodei is noticeably content to be back
in what Clark calls his cozies: stretchy gray sweatpants and

(17:04):
a similarly comfortable looking, also gray, T shirt. The outfit,
he says, helps me think. Amodei's way of thinking, rooted
in his years in academia, drives the culture at Anthropic.
Every two weeks, Anthropic employees, who call themselves Ants, gather
to listen to Amodei deliver roughly hour long lectures known

(17:25):
internally as Dario Vision Quests. Accompanying documents are distributed in advance
for employees to read before the meeting. Under Amodei's leadership,
Anthropic also researches subjects that aren't immediately monetizable, such as
mechanistic interpretability, the study of how opaque algorithms make decisions,
and AI welfare, the ethics of interacting with computers if

(17:48):
they ever do achieve sentience. Through this track record, Anthropic
has cultivated a reputation for being genuinely serious about responsible
AI development at a time when others in tech can
appear to be just paying lip service to the idea,
or in some cases expressing hostility to the suggestion that
an ethical framework is even a reasonable goal if it

(18:08):
slows down development. Dario and the whole team deserve credit
and confidence for acting in good faith on safety, says
Matthew Yglesias, a prominent writer on economics and policy with
whom Amodei has consulted on his writing. But it's not
clear if that changes the structural situation. If you're racing,
it's hard to be safe, even if you're acting in

(18:29):
perfect good faith. Anthropic aims to build machines that can
do almost anything, but there's one thing its AI already
does particularly well: write computer code. The company recently released
an app for coders, Claude Code, and its tech also
powers popular independent coding apps, including Cursor. Anthropic's own February

(18:51):
Economic Index report showed that thirty seven percent of all
job related interactions with Claude were for coding, the highest
of any category. Arts and media came in second, at
around ten percent. Amodei says automated coding has probably been
the fastest growing part of its business in recent months.
AI generated coding doesn't have the same emotional resonance as

(19:14):
computer generated music or painting. Unlike with a song, consumers
don't much care if the code underlying the app they're
using comes from a real person. Coders themselves have also
largely accepted that AI is part of the job. A
survey by GitHub last year of two thousand technical staff
found that almost all, ninety seven percent, had used coding

(19:35):
tools at some point in their work. But looming job
losses in the field also feel less hypothetical than AI
safety issues such as computers making dirty bombs. Anthropic has
found that seventy nine percent of programmers who use Claude Code
do so to automate rather than augment tasks. The Economic
Index report itself was built in part by Claude Code.

(19:58):
According to Anthropic's head of policy, co founder,
and former Bloomberg News reporter Jack Clark, this is the
area where Amodei is already confronting the harsh realities of
his company's work. In a March tenth talk in Washington,
DC, hosted by the Council on Foreign Relations, Amodei
predicted that AI could be writing almost all computer code

(20:19):
within a year. A clip of the comments went viral,
sparking a mixture of fear and skepticism. Amodei says the
remark was taken out of context. At the event, he
also said humans will still be involved in the overall
coding process, such as specifying what kind of app to
make or how it should integrate with other systems. At

(20:40):
the end of the day, these models are going to
be better than all of us at everything, he says.
We have to deal with that at a societal level.
Everyone's got to deal with that. My goal is not
to create an underclass in the period before then. Awkwardly
for Anthropic, the affected workers in this technological shift could
be the people in its offices today building those tools

(21:02):
of automation. The vibe is, oh, it's getting real, Clark says.
Amodei made the subject the focus of a recent Vision
Quest talk, in which he told employees that Anthropic's technology
is leading to substantial changes in the way the company
organizes its work. We may slow down our hiring because
of Claude, and we'll do that because we don't want

(21:23):
to fire anyone because of Claude, he says, recounting what
he told his staff. He added that the company will
help coders adapt to their evolving roles. In an internal
memo that accompanied the meeting, Amodei wrote that there was
a seventy percent chance that sometime this year, AI's ability
to perform key technical tasks, such as writing code, debugging,

(21:44):
and proposing and managing experiments, will go from a helpful
tool to something absolutely indispensable that does the majority of
these technical tasks, doubling Anthropic's execution speed. The majority of
the contribution to AI progress will come from AI itself,
Amodei wrote, with the caveat that humans will still play

(22:04):
a very central role, probably for a while, due to
comparative advantage. The role of humans could be gradually whittled away,
though, until AI begins to create new AI in a
kind of recursive loop. This ability, if it's indeed developed,
would send Anthropic's models shooting up its danger scale. At
ASL 4, an AI would have the ability to fully

(22:27):
automate the work of an entry level, remote only researcher
at Anthropic. There's an ASL 5 when the AI has the
ability to improve itself with increased acceleration. In Machines
of Loving Grace, a widely read essay Amodei
first published internally and then publicly on his personal blog
last October, he laid out what the endgame looks like

(22:49):
if everything goes right with AI. Drawing on his expertise
in biology, he says AI will speed up scientific discoveries
at ten times the current rate, helping cure almost all
infectious diseases, most cancers, and Alzheimer's, and ultimately doubling the
human life span. Anthropic now prints pocket sized bound copies

(23:11):
of the Machines of Loving Grace essay to give to employees.
The tone shifts in the part about AI's relationship to
work and meaning. He says this issue is particularly difficult
because it is fuzzier and harder to predict in advance.
Amodei anticipates that AI could eventually replace most human labor,

(23:31):
leaving people to live off a universal basic income or
other redistribution method unless they find some yet to be
determined way to continue to be economically valuable. We will
likely have to fight to get a good outcome here.
Exploitative or dystopian directions are clearly also possible and have
to be prevented, Amodei wrote. Much more could be written

(23:51):
about these questions, and I hope to do so at
some later time.

© 2025 iHeartMedia, Inc.