
October 20, 2025 11 mins

The global battle among the three dominant digital powers―the United States, China, and the European Union―is intensifying. All three regimes are racing to regulate tech companies, with each advancing a competing vision for the digital economy while attempting to expand its sphere of influence in the digital world. Across the globe, people dependent on digital technologies have become increasingly alarmed that their rapid adoption and transformation have ushered in an exceedingly concentrated economy where a few powerful companies control vast economic wealth and political power, undermine data privacy, and widen the gap between economic winners and losers.

Anu Bradford, Henry L. Moses Professor of Law and International Organization at Columbia Law School, explores a rivalry that will shape the world in the decades to come and breaks down her highly acclaimed book, "Digital Empires: The Global Battle to Regulate Technology." Professor Bradford speaks with Tim Stenovec and Emily Graffeo on Bloomberg Businessweek Daily.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Bloomberg Audio Studios, Podcasts, radio news. This is Bloomberg Businessweek
with Carol Massar and Tim Stenovec on Bloomberg Radio.

Speaker 2 (00:14):
My AI detector is better than my colleagues because they
were fooled by a video a couple days ago, and
I was like, that's AI.

Speaker 3 (00:21):
Okay, Well, that's exactly where I want to go with
our next guest. Anu Bradford is Henry L. Moses Professor
of Law and International Organization at Columbia Law School. She
was last on with us just over two years ago
when the Will Smith eating spaghetti test looked not realistic
at all, just to show how far things have come.
She's the author of several books, including most recently, Digital Empires,

(00:43):
The Global Battle to Regulate Technology. This was published back
in twenty twenty three. And that's exactly where I want
to start with you, Professor Bradford.
I got an invite today to try out Sora from
a friend of mine. I think he's a paying subscriber
of OpenAI's ChatGPT and that's why he has an invitation.
And I was watching some of the videos that he

(01:03):
made and I'm thinking to myself, we are so cooked.
Are we?

Speaker 4 (01:12):
So, thanks so much for having me, Tim and Emily.
We may well be cooked. But the question is what
are we most worried about. I think there are many
exciting developments. I think we're certainly more entertained in many ways.
But at the same time, I think those who warned
us early on that this is not the kind of
AI revolution that we should be just leaving for the

(01:34):
tech companies to govern, to manage. We do need governments involved.
We need some guardrails, we need some regulation to make
sure that these fast advances that we are witnessing are
moving to the direction that we're comfortable with.

Speaker 3 (01:47):
It seems like it's too late, though.

Speaker 4 (01:51):
I don't think it's too late. I think we certainly
are not at the point where we can say that
AI is done. I think we will continue to see
massive developments in coming years. And we already have AI
governance regulation on the radar of many lawmakers. And obviously
the Europeans have been most proactive, as they usually are,

(02:14):
and they have the AI Act, a comprehensive piece of
legislation that is already in force. And now it's a
matter of implementing it effectively, and then we also
need to see what happens in the United States at
the state level. China is definitely interested in governing AI.
We have many other jurisdictions, so I think there is

(02:35):
a lot that is happening and a lot more that
needs to be done.

Speaker 2 (02:39):
In your new book Digital Empires, you break apart how
basically different regimes across the globe are governing and regulating
AI differently. So there's a US, there's Europe, there's China.
In your research, have you found one that is kind

(03:00):
of doing that balancing act the best so far in
terms of balancing out They don't want to stifle innovation,
but they also want to protect users of AI, consumers
of AI, companies that are getting involved with AI. Who's
doing the balancing act the best in your view?

Speaker 4 (03:19):
So I think any regulator really needs to think about
this balancing to make sure that we harness these tremendous
benefits that are associated with AI, but also really safeguard
our citizens and societies from various risks. And in many ways,
there is a perception that the Europeans are erring on

(03:39):
the side of preemptively protecting against these risks and maybe
then foregoing some of the innovation benefits, whereas the Americans
would be erring on the side of maybe being very
techno optimists and not thinking about all those potential downsides.
I think in many ways I do like and endorse

(04:01):
the European model in the sense that, in my view,
that best safeguards the public interest and really takes seriously
the fundamental rights of individuals and democratic structures of the society.
But I really reject this notion that
this comes at the cost of innovation. There definitely is

(04:22):
a gap where the Americans are doing much better in
generating AI innovations compared to the Europeans. But the reason
is not that the Europeans are so keen on regulating.
I think there are many other reasons that explain why.
There are just fundamental pillars of the tech ecosystem in
the US that are much stronger and the Europeans have
fallen short in replicating that. So regulation as such, the

(04:46):
protection of those rights is not a choice that needs
to come at the cost of making beneficial progress in
this space.

Speaker 3 (04:56):
So, you know, going back to our conversation that I
had with you two years ago, because the world has
changed so much since then. Two years ago, Joe Biden
was president, there were a lot of folks who didn't
think that Donald Trump would win another term. Fast forward
two years, Donald Trump is the president, David Sachs is

(05:19):
crypto and AI czar. The regime thinks about this completely
differently than, I think it's fair to say, the Biden administration did.
What do you think the US needs to be doing
right now to regulate this technology? What would you like
to see David Sachs do?

Speaker 4 (05:38):
Yeah, so you're so right. There has been a complete
U turn in many ways. Towards the end of the
Biden administration, there was closer alignment between the traditional Transatlantic allies,
where the US was really moving closer to the European
view that technology like AI needs guardrails, and there was
a genuine attempt to join forces among the world's techno

(06:01):
democracies in order to halt the advances of Chinese digital
authoritarian views of governing technology. So I really saw this
potential for the US and the EU to join forces
to bring about a very beneficial change in this space.
But now the US is doing I think two things.
So it's first of all, giving a lot more power

(06:22):
to the tech companies, walking away from regulation, embracing this
deregulatory zeal that really reflects a very strong form of
techno-libertarian, techno-optimist worldview. But in many ways,
the US is also playing Beijing's game and becoming very
state driven. We see massive state investment in some

(06:47):
of these leading tech companies. We see export controls,
investment restrictions, we see subsidies. So the US, to
me, is losing some of its own identity. And if you
think about how that will also impact the US's adamant
goal of being a leader in AI, what is happening
in the space of immigration, I think that is really

(07:10):
counterproductive if you think about where all those AI
innovations come from in the US. So what would the US
need to do? First, the US would need to regulate
this space. We need to make sure that fundamental rights
are protected, we need to make sure that those societal
risks are under control, and we need to also at

(07:31):
the same time make sure that we will continue to
invest in the development of AI by retaining the
world's best talent, which often is immigrant talent, including then
Chinese data scientists who have been contributing to advances in
the space in the US.

Speaker 2 (07:46):
Is there any specific regulation that comes to mind that
you would want to see in the US that would
address this kind of thing Tim mentioned at the
beginning of the segment, the spaghetti test. It
sounds silly, the Will Smith spaghetti test, but it gets

(08:07):
at the heart of the concern that people have that suddenly
the Internet is going to be, you know, filled with
these videos.

Speaker 3 (08:16):
We're not and it's the end.

Speaker 2 (08:21):
I'm sorry, Well, maybe we can have a professor help.

Speaker 4 (08:25):
She said.

Speaker 3 (08:26):
She said it's not the end already, which is great, right? It's.

Speaker 1 (08:28):
Not the end.

Speaker 2 (08:29):
Is there a specific piece of regulation that comes
to mind? Is it about, I don't know, digital privacy,
people needing disclaimers on top of every video that you
see on the Internet.

Speaker 4 (08:41):
So I think there are many aspects, and there's no
easy way to say that you just need to do
one thing in order to then address this multitude of
different harms. But one thing: it does start from the
protection of privacy and our agency and our ability to
be able to tell what is fiction and what is not,
and our ability to engage in conversation based on real

(09:04):
information that is not manipulated by AI. And disinformation
obviously existed even without ChatGPT-type tools, but
it is now fueled with this AI-driven ability to
manipulate our sense of reality. So in many ways, I

(09:26):
think it does need labeling. It does need the kind
of transparency and accountability that we have a sense of
how these AI systems are built and how we engage
with them. But then there are also risks around
protecting content creators. We need to take copyright
seriously and the idea of how we actually train these

(09:47):
models with the data that has been generated by individual authors,
by journalists, and that needs to be also compensated well
so that we still have the incentive to engage in
that kind of content production. But privacy is obviously very
high on my list. Disinformation is very high on
my list. Then there are questions that are more about

(10:08):
existential risks, more about systemic risks, and even if it's
hard to sometimes know the probabilities of some of the
most severe risks and how likely they are to materialize,
we still need to be prepared to also as a
society to confront that kind of reality when AI advances

(10:29):
really fast and we reach the point when we find
it even harder to govern that technology. So I think
there are all these layers and we are not even
really having, at least at the federal level, a real
conversation about how we go about regulating this space.

Speaker 3 (10:44):
Professor Anu Bradford, Henry L. Moses Professor of Law
and International Organization at Columbia Law School. She's the author
of several books, including her most recent, Digital Empires: The
Global Battle to Regulate Technology, published back in twenty twenty three,
but as relevant right now as it was two years ago.