Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Bloomberg Audio Studios: podcasts, radio, news. Bloomberg reporter Riley Griffin
has a pretty unique beat.
Speaker 2 (00:12):
I am a healthcare reporter and I'm based here in Washington, DC,
and I have a particular interest in where health and
national security meet.
Speaker 1 (00:20):
And there's been no shortage of stories at that intersection
of health and national security since the COVID-19 pandemic.
Speaker 2 (00:27):
We were all incredibly humbled by that moment. And I think,
you know, prior I had been really focused on the
pipelines of pharmaceutical companies and how they were going to
ensure a profitable and booming business. I was not thinking
about healthcare as something that threatened our economy or threatened
(00:52):
our ability to act in everyday normal ways.
Speaker 1 (00:55):
Newly alert to the possibility that it could be a threat,
Riley was intrigued when she got a tip.
Speaker 2 (01:01):
I heard this from government officials who witnessed the briefing
by a guy who had walked in with a black box.
Speaker 1 (01:09):
A black box, and that briefing wasn't held just anywhere.
Speaker 2 (01:14):
He brought that black box in with him to the
Eisenhower Executive Office Building, the EEOB. That's the building right
next to the West Wing. Really, it's where a lot
of White House staffers are.
Speaker 1 (01:26):
Riley says she talked with several people who were in
that meeting, and they told her that inside that small
plastic container
Speaker 2 (01:34):
were test tubes with synthetic DNA.
Speaker 1 (01:36):
Synthetic DNA that included ingredients that could be used to
make deadly diseases. That was frightening enough, but Riley's sources
told her what really startled them was when they found
out how that guy, a former UN weapons inspector, had
figured out what to put into those test tubes. It
was with the help of an AI chatbot.
Speaker 2 (01:56):
He had his team basically pretending to be a bioterrorist.
If a bioterrorist wanted better instructions on how to
create a biological weapon, or help with the process of
actually making that in a laboratory, what would it ask?
What would it need to know?
Speaker 1 (02:10):
They typed in questions and they got answers. But it
wasn't just recipes for what to do with that synthetic DNA.
Riley says those government officials heard about how that chatbot
recommended where to put pathogens and how to deploy them
to do the most damage, which raised an alarming question.
Speaker 2 (02:28):
If a bad actor could acquire the synthetic DNA and
the instructions to put it together in a lab, what
would that mean?
Speaker 1 (02:44):
I'm David Gura, and this is The Big Take from
Bloomberg News. Today on the show: how information that could
be used to make bioweapons slipped through AI safeguards, and
what that tells us about the challenges of regulating a
still pretty brand-new technology. Of course, information about how
(03:07):
to create deadly weapons of all kinds is out there,
and it has been for a while. Information that could
help weaponize diseases is buried in books, and Riley Griffin
says it's online if you know where to look.
Speaker 2 (03:20):
There's a lot of information on the Internet about viruses, pathogens,
respiratory illnesses, fungal diseases, right? And the reality is there's
a lot of information on Google that is dangerous when
it comes to the field of biology.
Speaker 1 (03:36):
I asked Riley, what's different now? What could a bad actor,
a bioterrorist do with an AI chatbot that he couldn't
do with a standard search engine.
Speaker 2 (03:45):
One thing that really struck me through the interview process
was just hearing some of the researchers explain how good
the chatbot was in the brainstorming phase, in offering up
ideas that the team hadn't necessarily thought of in the first place.
Speaker 1 (04:01):
AI companies know that their transformative technology has a lot
of potential perils, and they're also aware of the damage
it could do, not just to their business models. So
many of them have tried to get ahead of a
lot of this stuff, and Anthropic is one of them.
It was founded by people who had worked at OpenAI.
Speaker 2 (04:21):
Anthropic was created really with a bent towards safety. They
wanted to put in checks and balances, test these things
out before they reached the public, and commercialize at a
little bit of a slower pace at the time.
Speaker 1 (04:34):
It was Anthropic that hired that former UN weapons inspector
who brought that black box to the heart of Washington.
His name is Rocco Casagrande, and Anthropic hired him
to put its AI chatbot, Claude, through its paces.
Speaker 2 (04:49):
So that's the origin of this story. He had tested
it out with a team for more than one hundred
and fifty hours. He had virologists, microbiologists really examining what
it could and couldn't do, and so they went
through that process, and what they found scared Casagrande.
Speaker 1 (05:06):
He was concerned by how simple it was to get
pretty accurate instructions for how to make bioweapons and how
to maximize harm. But Riley says something else made Casagrande
even more worried.
Speaker 2 (05:19):
Another notable piece that was shared with me was the
chatbot could advise on how to acquire the materials, where
to go to purchase them, and how to evade scrutiny when
doing so. Some companies that sell the synthetic DNA, this
man-made DNA, you know, look to know their customer.
They want to see who is purchasing this. Is this
(05:39):
a researcher at the CDC? Is this an academic who
studies Ebola? If not, you know, a red flag is raised.
Speaker 1 (05:48):
But Riley says not all companies that sell man-made
DNA have or follow strict protocols.
Speaker 2 (05:55):
Like with all companies, there's a varying level of commitment
to that cause, and the chatbot could recommend where to
look that might be better at evading such scrutiny.
Speaker 1 (06:06):
That motivated Casagrande to put together that black box
and to carry it with him into one of the
most secure office buildings in the world. He called it
a stunt, but he wanted to make a point, and
Riley says it resonated with the people Casagrande met with.
Speaker 2 (06:23):
He really struck the imagination of Washington with that black box.
Speaker 1 (06:29):
Coming up after the break: how government officials responded to
Rocco Casagrande's black box warning, and what that could
mean for everything from AI safety to our ability to
treat and cure diseases. Just a few months after Rocco
(06:53):
Casagrande's briefing, in October of last year, President Joe
Biden called for greater oversight of government-funded research and
new safety protocols for tech companies and AI developers.
Speaker 3 (07:05):
To realize the promise of AI and avoid the risk, we
need to govern this technology. There's no other
way around it. In my view, it must be governed.
Speaker 1 (07:14):
He laid out a new way for AI companies to
think about the threats posed by the technologies they're creating.
Speaker 3 (07:21):
I'm about to sign an executive order, an executive order
that is the most significant action any government anywhere in the
world has ever taken on AI safety, security, and trust.
Speaker 1 (07:31):
Since then, Bloomberg's Riley Griffin says, the debate over
AI regulation in Washington has gotten louder and more urgent.
Speaker 2 (07:41):
There's really begun in earnest a serious conversation about regulation,
and what steps can be taken without hard-and-fast
rules, just to limit the risks. So this is
top of mind. Every agency is thinking about it: the Department
of Homeland Security, the Commerce Department, the Department of Defense. I mean,
(08:04):
it's a full government conversation.
Speaker 1 (08:06):
Late last year, Vice President Kamala Harris announced the establishment
of the US Artificial Intelligence Safety Institute, and in her remarks,
Harris specifically singled out AI-made bioweapons as a threat.
Speaker 4 (08:18):
From AI-enabled cyberattacks at a scale beyond anything
we've seen before, to AI-formulated bioweapons that could endanger
the lives of millions of people. These threats are often
referred to as the existential threats of AI, because of course,
(08:39):
they could endanger the very existence of humanity. These threats,
without question, are profound and they demand global action.
Speaker 1 (08:51):
But at the same time as the government has put
up new guardrails, some researchers are saying not so fast.
Speaker 2 (08:58):
Nobody wants to stop, or certainly many, including in the
scientific world, do not want to stop, the innovation that
AI is contributing to around new treatments for diseases and
ways of diagnosing healthcare conditions. There are a lot of
benefits that scientists point to that they fear could be
(09:21):
hindered by regulation.
Speaker 1 (09:23):
One hundred and seventy-four academics have signed a letter
pledging to use AI in their research responsibly. They argue
that when it comes to AI, its benefits, quote, "far
outweigh the potential for harm." Riley says they're advocating for
a more measured approach to regulation.
Speaker 2 (09:41):
You know, a focus of the letter is: we recognize
there are some risks, but hold up, we don't want
to limit the progress that can be achieved. We can
do good things with these tools, and that is a priority,
and we promise to move forward with that work while
practicing safe behaviors, for example, only purchasing synthetic
(10:02):
DNA from kind of reliable providers. Which, again, isn't just
an AI question. It's kind of a broader conversation about
security in the field of biology as technology advances.
Speaker 1 (10:15):
Some critics decry what's being called AI doomerism: what they
see as a misplaced worry that generative AI poses an
existential threat to humanity.
Speaker 2 (10:26):
Some people think the conversation about biological threats and AI
is a
Speaker 1 (10:30):
distraction. And Riley notes that while Casagrande and his
team were able to do something with that chatbot that
is objectively scary, the test they ran led to a
positive outcome.
Speaker 2 (10:40):
My reporting suggests that these initial briefings were really the
beginning point of a massive undertaking by government to think
about biological risk in a new way in the wake
of a pandemic, but also in the wake of a
fast-moving emerging technology.
Speaker 1 (10:57):
And those meetings also led to calls for a broader
conversation so that it's not just the largest AI companies
like Anthropic and OpenAI that are briefing government officials.
Speaker 2 (11:08):
You know, one interesting comment that I've heard is many
would like to see smaller AI companies more present in
the room. It's the biggest players that are engaged with
government on this subject, but some of the smaller companies,
particularly those tailored to biological data, may not have as
active of a voice.
Speaker 1 (11:29):
The debate continues, and according to Riley, the trend is
looking better than it was last year when she got
that tip about that black box.
Speaker 2 (11:38):
Hopefully we're at a place where it's harder to get
that information from the chatbots.
Speaker 1 (11:43):
One of Anthropic's founders told Riley the company has made
changes to address vulnerabilities Casagrande and his team identified,
but more work is needed.
Speaker 4 (11:53):
Well.
Speaker 1 (11:53):
Riley says she doesn't want the takeaway from this story
to be one of fear, but rather one about how
we think about what we do when there is so
much innovation happening in so many places.
Speaker 2 (12:05):
You know, what I want the takeaway from the story
to be is not that we are in a doomsday
scenario where the worst is here and present and going
to impact us tomorrow. It's more around the nuance of
a debate about how to regulate a fast-moving technology.
And I think everybody knows that there's a question around
(12:25):
regulation of AI because AI is accessible, but the field
of biology is not so present in that conversation. And
when we talk about synthetic biology, or this kind of revolution
in the field of biology, it is moving just as quickly.
So that will be kind of my note: what
happens when two fast-moving revolutionary technologies crash into each other?
Speaker 1 (12:56):
This is The Big Take from Bloomberg News. I'm David Gura.
This episode was produced by Thomas Lu. It was fact-checked
by Alex Segura. It was mixed by Robert Williams.
Our senior producers are Naomi Shavin and Kim Gittleson, who
also edited this episode along with Becca Greenfield. Our senior
editor is Elizabeth Ponzo. Nicole Beemsterboer is our executive producer.
(13:18):
Sage Bauman is Bloomberg's head of podcasts. Thanks so much
for listening. Please follow and review The Big Take wherever
you get your podcasts. It helps new listeners find the show.
We'll be back on Monday.