Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
On this episode of Newt's World: Vice President J.D.
Vance attended an AI summit meeting in France on Tuesday,
hosted by France and India. In his opening address, he
described his vision of a coming era of American technological domination.
He said, quote, the Trump administration will ensure that the
most powerful AI systems are built in the US with
(00:26):
American-designed and manufactured chips. Here to discuss the AI Summit,
I'm really pleased to welcome back my guest, Neil Chilson.
He's a lawyer, computer scientist, and author of the book
Getting Out of Control: Emergent Leadership in a Complex Age.
As head of AI policy at the Abundance Institute,
Chilson works to create a policy and cultural environment where
(00:50):
emerging technologies, including artificial intelligence, can develop and thrive. Neil,
welcome, and thank you for joining me again on Newt's World.
Speaker 2 (01:12):
It's great to be back.
Speaker 1 (01:13):
Before we get into AI, tell us just for a
minute about the Abundance Institute.
Speaker 2 (01:18):
Yeah. So, the Abundance Institute is a mission-driven nonprofit.
We're focused on the policy and the culture around emerging
technologies and making sure that the policy and cultural environments
let emerging technologies get full-throated market tests to try
to solve problems the way that we know technology historically
has, to create widespread human prosperity. So we
(01:41):
do that in lots of different ways, including policy at
the federal and the state level in the US.
Speaker 1 (01:47):
When you think about emerging technologies beyond artificial intelligence, what
else do you think about?
Speaker 2 (01:53):
There's so much. We have quantum computing, which is another
new model for computing that could be super fast and
super exciting. It's still in the very early days. There's
biotechnology. There is so much in that space
about designing new drugs customized to individual bodies,
and even what a drug looks like
(02:15):
in the future could be quite different. And so
those are some of the new ones that are coming up.
There are all these new areas in energy that are really
exciting. Abundance Institute right now is very active in the
policy space around small modular reactors, which are small, safe
nuclear reactors that can power, essentially, a data center, to
get back to the AI topic, or other small uses.
(02:36):
So we're very excited about all of those.
Speaker 1 (02:38):
Do you also look at robotics?
Speaker 2 (02:39):
Robotics is a key part of moving from the world
of bits to the world of atoms, and robotics has
played a big role already in manufacturing, and with these
new generative AI models and deep machine learning models, the
things that robotics can do are becoming much more expansive.
Very excited about robotics as well.
Speaker 1 (03:00):
Well, specifically on the AI topic: on February
tenth and eleventh, France hosted the Artificial Intelligence Action Summit,
where notable world leaders, including US Vice President J.D. Vance, showed up.
Can you give us an overview of the France AI conference,
which I noted was also co-hosted by India. What
(03:22):
was its main purpose and what were some of the
key takeaways?
Speaker 2 (03:25):
So this is the third in a series of international
summits that have happened in the wake of the popularization
of ChatGPT and all the innovation that it has
kicked off, and this one shifted a little bit. The
two previous ones were called AI safety summits, and they
really focused on how do we make sure this technology
(03:45):
is safe? And they were driven largely by this fear
of catastrophic risk from AI systems. This is sort of
the Terminator scenarios that people think about. And there were
some agreements that came out of those in the UK
and Seoul, Korea that were voluntary pledges from some of
the companies, the leading companies, but then signed onto by
(04:06):
a bunch of the different countries as well. It seems
like that storyline that fear, that concern has shifted as
people have seen the rise of US companies, the dominance
that the US has had in this space, and just
the excitement and the potential for this technology. I think
a lot of the other countries of the world are
less worried about safety, and they're a lot more worried
(04:28):
about getting left behind. How do they build a space
that has this type of innovation, how do they make
sure that their citizens benefit from it? And so the
emphasis I think at this summit was ideas around inclusion
or widespread availability of the technology, and J.D. Vance certainly
brought a very different perspective to this than the Biden
(04:48):
administration had brought to the previous two. It was really a
barnburner of a speech, a pro-American one, that really emphasized the
fact that the path to AI innovation is not through
heavy regulation but through deregulation, and he urged the
European Union to take that path if they wanted to
(05:09):
be part of the future.
Speaker 1 (05:11):
He picks up on this in his speech, where we
have really in the United States emphasized innovation, the Europeans
on a whole range of issues, have emphasized regulation. And
part of the result is, I think, with the exception
maybe of two Chinese companies, virtually every trillion-dollar company
in the world is American. There are no competitors in Europe
(05:34):
for whole sections of this, and efforts by the Europeans
to punish us by creating fines and other kinds of
things are just going to backfire, particularly with the Trump administration,
which will respond, I think, very aggressively to that kind
of narrow-minded policy making.
Speaker 2 (05:51):
And they certainly did. J.D. Vance specifically called out fines
and regulations that had been unfairly applied, almost unfairly created
specifically to target large American companies, and he said the
US wasn't going to tolerate that anymore. And he also
made a strong case that it's not good for the
European Union either, and for their economy as well. So
(06:13):
it was a very strident speech. The Biden administration had
in many cases worked with the European Union to enforce
European laws against American companies, and that was certainly a
different tack from the one the Trump administration is taking.
Speaker 1 (06:27):
The long tide of history is sort of leaving Europe behind.
I'm a European historian by training. I've lived in France, Germany, Belgium,
and Italy, and I love the continent, but I'm watching
it sort of commit technological suicide.
Speaker 2 (06:44):
The moment that gelled this for me the most in the
AI space was the cheery signing of the European Union
AI Act, which was signed about a year ago now
and is starting to go into effect. That was their
big moment. They really celebrated the fact that they had
passed this very comprehensive regulation that is now looking quite
unworkable because the technology has already left behind a bunch
(07:07):
of it. For example, they had been working on this
pre-ChatGPT, and they had to sort of
retrofit the regulation of generative AI technologies like
ChatGPT into this big regulatory structure they'd already designed,
and the world has already moved on even since that
final text was adopted, and I think they're figuring out,
(07:27):
and there's a lot of discussion at this Paris summit
about how they might dial back some of this
regulatory pressure in order to get their own national champions.
Speaker 1 (07:39):
The so-called Statement on Inclusive and Sustainable Artificial Intelligence
for People and the Planet, which apparently was signed by
about sixty countries, but both the United States and the
United Kingdom refused to sign it. Now, what was that
all about?
Speaker 2 (07:55):
There was a series of statements. Like I said, at
the previous summits there were statements that were similar; those
ones were much more focused on safety issues. And I
think there are a few reasons. First of all, like all
of these international things, they don't say a ton, right?
A lot of it is sort of very positive words about,
you know, where we want to go, not really operational
(08:16):
in how we're going to get there. But there was
a definite shift in emphasis for this one, and I
think it left nobody happy.
Speaker 1 (08:23):
Really.
Speaker 2 (08:23):
The people who are worried about existential risk hated this
statement because they thought it didn't address their ideals. People
from the Trump administration, I think, are rightly skeptical of
this language of inclusion, to the extent that it
might drive some sort of international or national mandates about
how the technology is deployed. And I think the people
(08:44):
who signed it are largely the people who have not
been building these overwhelmingly great products, and so it's not
clear what effect it would have on the other countries.
And so, I shouldn't say this without
knowing it for sure, but I had
heard that China did sign it; I should double-check that.
And so for them, it may be a way to
help bolster their relations with Europe. I
think they probably see that if the Europeans are going to
(09:08):
continue to suppress American companies, perhaps it's a new market
for China to get their cheap AI models into.
Speaker 1 (09:14):
A nice thing about China is they can sign anything,
but they never intend to enforce it. They get to be
at the press conference and get their picture taken, and
then they go about doing whatever they wanted to in
the first place. There's this entire emerging pattern where folks
go from fancy hotel to fancy hotel to have big
meetings talking about big ideas with big words, none of
(09:37):
which relates to reality except they keep inventing more and
more multinational organizations which have to hire their friends and relatives,
so they can have meetings where they talk about things
that have no connection with reality, and it's almost like
an industry.
Speaker 2 (09:53):
Oh, it's definitely an industry. I think there's a lot
of academic conferences that are basically exactly what you described
over and over. It certainly is an industry, and in
the AI space this is the hot topic. But we've
seen international exercises like this around privacy, et cetera.
You know, sometimes it's good to get on the same
page with everybody and figure out like how people can
(10:15):
secure human rights across borders. But I don't think that
this particular document really moves the ball very much towards
anything that's productive.
Speaker 1 (10:39):
I think what J.D. did there was a really important mark
in terms of his own evolution as a national leader,
and I thought it was worth listening to just for a bit.
Speaker 3 (10:47):
Now, our administration, the Trump administration, believes that AI will
have countless revolutionary applications in economic innovation, job creation, national security, healthcare,
free expression, and beyond, and that to restrict its development now
will not only unfairly benefit incumbents in the space, it
would mean paralyzing one of the most promising technologies we
(11:10):
have seen in generations. Now, with that in mind, I'd
like to make four main points today. Number One, this
administration will ensure that American AI technology continues to be
the gold standard worldwide, and we are the partner of
choice for others, foreign countries and certainly businesses, as they
(11:31):
expand their own use of AI. Number two, we believe
that excessive regulation of the AI sector could kill a
transformative industry just as it's taking off, and we'll make
every effort to encourage pro-growth AI policies. And I'd
like to see that deregulatory flavor making its way into
(11:52):
a lot of the conversations at this conference. Number three, we
feel very strongly that AI must remain free from ideological
bias and that American AI will not be co-opted
into a tool for authoritarian censorship. And finally, number four,
the Trump administration will maintain a pro-worker growth path
(12:14):
for AI so it can be a potent tool for
job creation in the United States. And I appreciate Prime
Minister Modi's point. AI, I really believe, will facilitate and
make people more productive. It is not going to replace
human beings. It will never replace human beings, and I
think too many of the leaders in the AI industry,
when they talk about this fear of replacing workers, I
(12:37):
think they really miss the point. AI, we believe, is
going to make us more productive, more prosperous, and more free.
The United States of America is the leader in AI,
and our administration plans to keep it that way.
Speaker 1 (12:51):
Now, you recently tweeted about Vice President J.D. Vance's speech at
the AI summit in Paris, calling it, quote, one of
the most pro-innovation speeches you have heard from an
elected politician. What stood out to you in his remarks?
Speaker 2 (13:05):
What stood out is it was the first time that
I can remember that the US has stood up to
European regulators and said, hey, you know, this is not
the path towards innovation and progress. That really stood out.
He also pointed out the connection, for example, between AI
and energy and how linked those two topics are, and
(13:27):
that Europe has been kneecapping itself in energy policy by
essentially de-industrializing in some ways, which has grown their
reliance on providers such as Russia, and he pointed to
that as a big problem. The other thing that I
loved about his speech was he pointed out we often
think about AI and software as being part of the
(13:49):
laptop class, right, that the people who build these things
are people who sit behind laptops and type into computers,
and that it's in the world of bits. And he
pointed out that AI and computation takes large data centers,
which are enormous construction projects that involve lots of hard
hat workers who are plumbers and electricians and skilled labor
(14:11):
and so I really liked his pointing out that this
is good for jobs, not just in the tech space,
but this is good for blue collar jobs all over.
The types of investments that US private companies
have been pouring into building data centers in order to
meet the demands of the software space have been really
big construction investments, infrastructure investments. And I really liked
(14:34):
him pointing that out.
Speaker 1 (14:35):
When you talk about artificial intelligence, when you're watching, how
do you think it's going to manifest itself in terms
of the average small company or the average individual.
Speaker 2 (14:47):
So artificial intelligence is a general purpose technology, and I
think sometimes the very name of it kind of conceals
more than it reveals. I like to think of it
just as advanced computing. But the way I think that
a lot of these new tools are going to manifest
themselves is by making the boring parts of the types
of content creation or analysis that people do a lot easier.
(15:09):
So say you're tracking all of your accounting for taxes
and you're not quite sure how to categorize different things.
This is the type of tool that can easily categorize
them just by your asking. And so whereas right
now you have to learn the interface of any software
program by figuring out like which buttons to click and
what commands to type in, I think this is going
to add a much more conversational layer, so you can
(15:30):
just ask the software, in natural language, how do I
do this thing I'm trying to do. And it
will help you navigate things like that. So I think
those are some of the very direct ways. Less
directly, more behind the scenes, I think everybody's going
to be impacted by how this affects medicine. These tools
are already showing remarkable ability to diagnose, say, CAT scans
(15:54):
or mammograms with extremely good accuracy, well in advance of
where humans are able to do that, and they can
spot problems early, and that's going to make people's lives
healthier and safer.
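To make the "just ask the software" idea concrete, here is a minimal sketch of what a bookkeeping tool handing categorization off to a language model might look like. Everything in it is an illustrative assumption rather than anything named in the episode: the OpenAI Python client, the gpt-4o-mini model name, and the sample expense strings.

```python
# Hypothetical sketch: a bookkeeping tool asking a language model to
# categorize expenses in natural language. Model name and expense
# strings are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

expenses = [
    "UNITED 016234 ORD-SFO  $412.60",
    "OFFICE DEPOT #118      $38.17",
    "ZOOM.US MONTHLY        $15.99",
]

prompt = (
    "Categorize each expense for a small-business tax return "
    "(e.g., Travel, Office Supplies, Software). "
    "Reply as 'item -> category', one per line:\n" + "\n".join(expenses)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point is the interface: the user states the goal in plain English, and the model does the categorizing that a menu-driven program would otherwise have made the user do by hand.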
Speaker 1 (16:07):
Well, let's talk about crossing borders. When you look
at emerging consumer technologies like the cell phone, they just
cross the border because people buy them. They suddenly become
worldwide because in fact they're a better outcome for normal people.
I mean, are we going to see an
extraordinary spread of artificial intelligence capabilities just because they're so
(16:30):
enhancing to what we can do?
Speaker 2 (16:32):
Yeah. Absolutely, I think we're already seeing this. You know,
ChatGPT was the fastest-growing app that has ever
been deployed, by a lot. It had one hundred million
users within two months, and so there's a real hunger
for using these types of tools, and I think they
are spreading wildly, and I think that's part of why
we have these international summits around AI, where I think
(16:53):
in a similar timeline for, say, the Internet or the
automobile or the telephone, I don't think you saw
international summits. People are so excited about what AI
can deliver that everybody wants to figure out how to benefit.
Speaker 1 (17:05):
But also there's now sort of an international information ecosystem,
so that every place on the planet has some connectivity
in a way that would not have been true in
eighteen hundred or nineteen hundred or even nineteen eighty. Let
me switch gears for just a second, because one of
the stories, which is slightly confusing, has been the whole
issue of DeepSeek, the Chinese artificial intelligence company, which
(17:29):
has generated a lot of controversy now. There were claims
that they used OpenAI's technology to develop this brand-new, inexpensive,
fabulous model that leapfrogs everything that's going on. What's
your sense of that whole story.
Speaker 2 (17:44):
So DeepSeek came out of a venture fund, a financial firm in China,
and they used a bunch of compute that they had
to develop these new models, and so there is a
controversy about how much they did something called distillation, and
this is a technique where you ask a bunch of
questions to somebody else's model and you use those answers
(18:04):
to help train your own model. And OpenAI says
that they have some evidence that DeepSeek did this
for one of their models, what's called a reasoning model, the
R1 model that they have. I think that's a problem.
It's a problem that's relatively easy for companies like OpenAI
to solve by screening who their customers are and
(18:26):
setting their terms of service. But I think it's also
a pretty useful tool often, and most importantly, I don't
think it is the primary innovation that DeepSeek developed. Some
of their other innovations around streamlining the process, the algorithms
that they use to be more efficient, those are real innovations,
(18:47):
and the great thing, honestly, is that they released those
and told everybody about them. So US companies are going
to be able to retrofit those innovations back into their
own software as well. And so overall, I think what
DeepSeek reminds us is that, you know, it's not
inevitable that the US is going to lead this technology,
that we need to step up. We need to continue
(19:08):
to innovate. The American system is very strong in this space.
Our funding mechanisms, private funding, our markets, and our tech
talent are really strong. But we can't rest on our
laurels and sit back. This is a fast moving space
and we need to move faster.
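Since distillation is doing a lot of work in this story, a toy sketch may help. It shows the loop Chilson describes, querying a "teacher" model and saving its answers as supervised training pairs for a smaller "student" model, under assumed details (the model name, the questions, the JSONL output format); it is not a claim about how DeepSeek actually trained R1.

```python
# Toy distillation sketch: ask a "teacher" model questions and save its
# answers as training pairs for a smaller "student" model. Model name,
# questions, and file format are illustrative assumptions.
import json

from openai import OpenAI

client = OpenAI()

questions = [
    "Explain photosynthesis in two sentences.",
    "What is the derivative of x**3?",
]

with open("distilled_pairs.jsonl", "w") as f:
    for q in questions:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # the teacher; illustrative choice
            messages=[{"role": "user", "content": q}],
        )
        answer = reply.choices[0].message.content
        # Each line becomes one supervised example for the student.
        f.write(json.dumps({"prompt": q, "completion": answer}) + "\n")
```

This is also exactly the behavior that customer screening and terms of service are meant to police, since the teacher's outputs here are being repurposed as training data for a potential rival model.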
Speaker 1 (19:23):
From your perspective, did you see DeepSeek as a
real leap forward or simply a very clever development of
existing capabilities?
Speaker 2 (19:32):
So its final models are not better than the cutting-edge
models in the US; they are on par with the
cutting-edge models in the US. So the advancements that
DeepSeek reached are more in how you
implement these things in a way that's less expensive. It's
actually a very common model for Chinese innovation, taking something
that the US has innovated, has created, and doing it cheaper, faster,
(19:57):
or at bigger scale. And so I don't see it
as a giant leap forward. I see it as an incremental improvement.
And I think some of the techniques that they used
are going to help other companies move forward as well.
But yeah, it's a spur to US innovators to stay
on top of things.
Speaker 1 (20:12):
You posted on X that DeepSeek's success reminds us
that the AI race is global. What do we have
to do to make sure we win it?
Speaker 2 (20:22):
Well, in many cases, what we need to do is keep
doing what we're doing, but make sure we don't get
in our own way as well. We talked a lot
about how the sort of European model has been disproved and people
are growing increasingly skeptical of it. Unfortunately, in the US
we have people who are advancing those types of models,
especially at the state regulatory level. So we have over
(20:42):
three hundred and fifty AI regulatory bills that have been
introduced around the states already this session, and there's just
an avalanche of this stuff coming in, including in red
states like Texas, for example. I'm flying down to Austin
to talk to some people about these bills, and I
think the lesson that we need to take is that
we need to continue to double down on the American
(21:03):
model of permissionless innovation. Copying the European model of regulation
or the Chinese model of centralized command and control
is just not going to work for the US. And
what has been working in the US has been working
really well.
Speaker 1 (21:19):
Ultimately, does the federal government need to make this a
federal issue for the national economy rather than have it
broken up and micromanaged by fifty states?
Speaker 2 (21:29):
I think AI development, especially at the model level,
is a federal issue. The types of concerns that it
might raise around national security in particular are obviously federal issues,
and so I think, outside of even the legal question of
whether or not this is an interstate or international issue,
I think from a policy perspective, this is the type
(21:51):
of space that the President should step forward and say
this is important enough to the US that we need
to be thinking about this on the national level rather
than creating a patchwork of fifty different regulatory environments that
will slow down innovation and will especially harm the smaller
innovators and people who are trying to deploy this day
to day and get the benefits from these technologies.
Speaker 1 (22:29):
Do you think that the way forward for us will
be collaborative with other countries, or will it just, in
the sense that Google and Facebook, now Meta, Microsoft, Apple, these were
all American companies that then spread worldwide. As you look at
how AI will probably evolve, to what extent do you
(22:50):
think it will be American-driven and then adopted overseas? And
to what extent do you think it will be collaborative?
Speaker 2 (22:58):
So I think it will be American-driven and adopted overseas.
We have a lead at this point, and if we
continue to develop this technology and make it accessible to
the world, I think we will be the leaders across
the world; it will be our standards and our
technology that get adopted. There are some threats to that, however.
We have some concerning rules that, for example, the Biden
(23:20):
administration put in just before leaving office, something called
the Diffusion Rule that focuses on AI chips, and it
tries to deal with some real concerns about, say, China
misusing these technologies, but it puts a lot of burdens
on countries that are strong allies and complete non-threats.
So we're talking about everybody from Israel to Portugal having to
(23:44):
follow what are called Tier Two rules to get
chips that are US-manufactured, and I just don't understand
the strategic model for that. Most of the world, other
than a small select group of countries, is going to
have to go through a lot more paperwork to get
US technology, and I don't think that really helps our
security model. And I think it actually means that it's
(24:06):
a market opportunity for China if they are able to
develop these kinds of chips.
Speaker 1 (24:11):
Do you think we may actually restrict ourselves and our
ability to dominate markets?
Speaker 2 (24:16):
I do. And another example of this is DeepSeek's model,
which is what's called open weights, meaning anybody
can download these weights and use them on their own computer,
with their own software. This type of open source development
is really important and it's really useful to researchers and
startups who don't want to spend the money to train their own models.
And if we in the US restrict the ability of
(24:40):
open-weight or open source development, which some of our
policies have some implication of doing, we could lose
that market in the world, and I think that would
be bad because, somewhat uniquely to technology, these AI models
embody values. They're based on language, and so when they're
trained in the West, they have more Western values.
(25:00):
And when they're trained in China, for example, the
DeepSeek models struggle to tell you anything critical about the Chinese
government, and they won't even talk about Tiananmen Square and
things like that. And so I think it's to the
benefit of US security and influence on the world to
have our open source models be the ones that are
adopted around the world.
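For readers who want to see what "open weights" means in practice, here is a minimal sketch of running such a model locally with the Hugging Face transformers library. The repository name is an assumed example of one of DeepSeek's distilled open-weights checkpoints; any open-weights model loads the same way, with no account or subscription involved.

```python
# Minimal open-weights sketch: the weights are downloadable, so the model
# runs entirely on your own machine. The repo name is an assumed example;
# substitute any open-weights checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed example repo
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```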
Speaker 1 (25:19):
OpenAI, how much does that fit that model?
Speaker 2 (25:23):
So OpenAI, despite the name, they have some open
source technologies, but overwhelmingly they are a closed source business
where you subscribe or sign up for an account and
then use the technology that they are providing, rather than
downloading it to your own machine and using it. And
that's, you know, a totally viable and important business model.
(25:44):
It's not that open source is the only way we
should be doing this stuff. But OpenAI's business model
right now is not an open source model.
Speaker 1 (25:52):
So doesn't it tell us something about this whole
rivalry between Elon Musk and Sam Altman? I mean, it's
almost like a soap opera. They co-founded OpenAI in
twenty fifteen, supposed to be a nonprofit. Musk leaves. In
twenty nineteen, Altman launches a for-profit subsidiary, which has
made it remarkable. And you know, Musk is now clearly
(26:13):
offering a hostile takeover, in effect, supposedly leading a ninety
seven point four billion dollar bid to take it over.
What is all this about?
Speaker 2 (26:24):
That's such a complicated story. It's like a soap opera
that's been going on for many seasons at this point.
There was a lot of acrimony between Musk and the
leadership of the Open AI nonprofit. Musk was very concerned.
His main focus with this was how can we build
it fast, to prevent sort of existential risk. And so
to him, this wasn't so much about building a company.
(26:46):
It was about this safety concern that he had, and
I think he thought that some of the choices that
were being made weren't the right choices to pursue that goal. Obviously,
he has his own rival company, xAI. And as Altman
is trying to transition OpenAI from a nonprofit model
to a for-profit company, the way
(27:08):
they're doing that is complicated, and it has to value
the assets of the nonprofit properly and compensate the nonprofit properly.
I see Musk's bid here as largely a sort of
lawfare over what is already a very complicated process of moving
from a nonprofit to a for-profit. That is, I
(27:29):
think, at the latest valuation, worth potentially three hundred billion dollars,
and I think Musk's bid is just raising the costs
to do that. He's making it more complicated to do that.
I don't know how serious the bid is. I mean,
obviously I think he could get the money together if
Altman accepted it. But I don't think Musk had any
(27:50):
supposition that Altman, or, I should be clear, the board
of the OpenAI foundation, would accept this offer. But
it does make it more legally complicated for Sam Altman's
continued transition of OpenAI to a for-profit company.
Speaker 1 (28:04):
It's a fascinating story, and Musk at least implies that
if he took it over that they would be much
more public and much more open. On the other hand,
if that's true, how does he earn back the ninety
seven billion? Right?
Speaker 2 (28:19):
It's a tough thing, and I think there are somewhat plausible
business models, but it does seem like you would have
to spin out at some point a for-profit of
its own, or find some sort of funding mechanism to
earn back that money. In some ways, the initial donations
that Musk gave to OpenAI, I don't think
he really saw them as business investments. But those are
(28:40):
a whole different scale than ninety seven billion dollars. It's
hard to get investors in to do charitable work at
that scale.
Speaker 1 (28:47):
That's what I was thinking, Neil. I want to thank
you for joining me. I'm sure we're going to come
back to you again in the future because the whole
concept of the Abundance Institute is so much down the
road of what I believe in and what I think
is the future of the country. I want to let
our listeners know they can find out more about the
work you're doing as the head of AI policy
at the Abundance Institute by following you on X at
(29:09):
Neil Underscore Chilson, or by visiting your website at Abundance
dot Institute and we'll list all that on our web page.
And I'm really grateful you took the time to talk
to us.
Speaker 2 (29:20):
Well. Thank you so much for having me on. It's
always a pleasure.
Speaker 1 (29:26):
Thank you to my guest Neil Chilson. You can learn
more about the Abundance Institute on our show page at
newtsworld dot com. Newt's World is produced by Gingrich three
sixty and iHeartMedia. Our executive producer is Guernsey Sloan. Our
researcher is Rachel Peterson. The artwork for the show was
created by Steve Penley. Special thanks to the team at
(29:47):
Gingrich three sixty. If you've been enjoying Newt's World, I
hope you'll go to Apple Podcasts and both rate us
with five stars and give us a review so others
can learn what it's all about. Right now, listeners of
Newt's World can sign up for my three free weekly columns
at Gingrich three sixty dot com slash newsletter. I'm Newt Gingrich.
(30:07):
This is Newt's World.