
December 5, 2025 49 mins
On this episode of The Federalist Radio Hour, Neil Chilson, former FTC chief technologist and head of AI policy at the Abundance Institute, joins Federalist Senior Elections Correspondent Matt Kittle to sort through the fact and fiction about artificial intelligence, explain AI's role in the job market, health care, and politics, and examine the legal challenges that come with governing its use.

You can find Chilson's book, Getting Out of Control: Emergent Leadership in a Complex World, here.

If you care about combating the corrupt media that continue to inflict devastating damage, please give a gift to help The Federalist do the real journalism America needs.  

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:18):
And we are back with another edition of the Federalist
Radio Hour. I'm Matt Kittle, Senior Elections correspondent at the Federalist,
and your experienced sherpa on today's quest for knowledge. As always,
you can email the show at radio at the Federalist
dot com, follow us on X at FDRLST, make sure

(00:38):
to subscribe wherever you download your podcast, and of course
to the premium version of our website as well. Our
guest today is Neil Chilson, former FTC Chief Technologist and
currently head of AI policy with the Abundance Institute. Where are
we taking artificial intelligence? Where is artificial intelligence taking us? Well?

(01:02):
We ask those questions and many more on this edition
of the Federalist Radio Hour. Neil, thank you so much
for joining us. It's great to be here. Speaking of here,
is this really you? Or is this AI? Or as
they used to say when I was much younger dating

(01:23):
myself here, is it live or is it Memorex? We've
come a long way since cassette tapes, certainly in America.
I jest, of course, this is the real Neil Chilson,
but it does raise the question of just how much
we are inundated with AI and the difficulty sometimes of

(01:47):
telling artificial intelligence from reality. I think this thing is
only going to intensify, of course, as we move forward.
Where are we going with all of this?

Speaker 2 (02:01):
Well, as much as my wife and my kids might
appreciate an AI upgraded dad and husband, they're stuck with me,
and so are you guys, the real me.

Speaker 3 (02:13):
But you know, I say that sort of jokingly.

Speaker 2 (02:15):
But where we are right now with AI is continuing
a long trend of adding new capabilities to computers. And
they're surprising in this case, surprising in some ways.
But the history of artificial intelligence, a term that was
coined in the nineteen fifties, has been exactly this, surprising

(02:38):
new things everybody gets excited about. We figure out, oh,
this is not actually the same thing as human intelligence,
not exactly. It's powerful, and then it sort of
gets adapted. It's in our phones, it's in our computers,
and we move on. So, you know, artificial intelligence. The
cutting edge of artificial intelligence technology when I was first

(03:01):
getting into computers in the.

Speaker 3 (03:03):
Nineties was chess playing.

Speaker 2 (03:05):
And now everybody's phone can play chess better than ninety
nine point nine percent of humans on the planet, and
we don't think of that as somehow you know, terminator
style artificial intelligence.

Speaker 3 (03:18):
And so.

Speaker 2 (03:19):
Where we are now is we have these really powerful
tools called large language models that can be used for many,
many different types of things. But one of the ways
they're being used is in a sort of chatbot form
where you can ask questions and get really comprehensive, detailed,
sometimes made up answers, and that's really powerful. It

(03:44):
teaches us that there's a lot to learn about humans
that you can collect and gather into
computers in these formats and then query in
a way that gets you, you know, very persuasive, very interesting,
often very entertaining content, whether it be text, video, audio.

(04:06):
I love making songs. Actually, my little girls love making
up new songs using some of the apps out there
where you can just type in a prompt and get like,
you know, a song about unicorns or a song about
you know, their kids club running around the neighborhood as
kid spies or something like that, and so like we
love that stuff. And so yeah, there's a lot of entertainment.

(04:29):
There's a lot of power in these tools, and there's
some risks and people need to be aware of that,
and policymakers do as well.

Speaker 1 (04:36):
Some of the uses that you mentioned, you know, are
fun to play around with. Certainly, like you said, making
up songs, I've seen that in play, and the technology
is really quite good and it takes you literally seconds
to do what the Beatles spent, you know, the better
part of a year doing to make Sergeant Pepper's Lonely

(04:58):
Hearts Club Band. I'm not sure the artistry
is the same, but, you know, the production value
is certainly there. Yeah, the risk that you talk about
in this arena: copyright infringement. Where are we with that today?
And where are we heading in terms of intellectual property?

Speaker 2 (05:20):
So there's two concerns on the copyright front. One is
on the training side, So when a company is building
these models and they use a bunch of different content
in order to train on, what are the legal restrictions
on that? And then the other side of it is
on the output side. So when a user types in
a prompt and says, you know, give me a picture

(05:41):
of Mickey Mouse, you know, battling the Marvel team or
something like that, and the model puts out something that
includes, you know, maybe some sort of intellectual property
protected content.

Speaker 3 (05:58):
Who's responsible for that? Is it the user, is it
the model? And so on.

Speaker 2 (06:03):
The first thing about like the sort of ingestion of
this content, the use of this content. We've had some
court decisions and there is a category of this that
judges are thinking of as fair use right. And so
you're training these models the way you might train you know,
anybody reading a book. And so the content isn't

(06:23):
copied into the model, it
is used to train the model, and then the model
understands that content in the context of you know, lots
of other content. And so there are ways in which
the courts are saying that that is fair use. There
was an Anthropic settlement in this space where the big
problem Anthropic had was that they hadn't paid for

(06:47):
the original content, right, and so they hadn't bought the
books that they were scanning. They had just downloaded pirated
copies of the books. And the court said, no, no, no,
you can't do that. You need to buy the content.
You need to buy the copy of the content
and use that to train. You can't just use pirated content.
But overall, that doesn't say that much about

(07:08):
you know, I think that still leaves the door open
for companies to train on a lot of this data,
which I think is probably the right decision. I think
training is very analogous to reading a book.
And therefore, you know, we don't have copyright law that says
you can't learn from a book without permission from the
copyright holder. And so, on the other side, we don't know.

(07:30):
We don't know yet. Those cases are still ongoing. We
don't know how courts are going to partition liability for
copyright infringement between the user who writes the prompt and
the company who is running the model or trained the model.
There's some really complicated questions there about what is infringement

(07:52):
even on that side, but then splitting it up between
the user who's asking for something and the model who's
generating it, it gets pretty complicated there. You know, we
don't sue pencil companies because people draw images
of, you know, Mickey Mouse using a pencil,
and that wouldn't.

Speaker 3 (08:09):
Make a ton of sense.

Speaker 2 (08:10):
And so we're just trying to figure out the right
analogies here and what makes sense economically.

Speaker 1 (08:15):
Still, that's very complicated. In the same legal neighborhood, what
about reputational damage? We've already seen some cases on that front.
And listen, there are AI companies, you know, that are,
I think, very responsible AI companies. They're doing what they

(08:35):
can and they're investing a lot of money to make
sure that these systems are working right and they don't
cause damage. But sometimes they do reputational damage. What about
the liability issue on that front? Where are the courts
on that particular matter?

Speaker 2 (08:56):
So the big challenge: the courts have looked
at this a couple of ways. There's a bunch of
challenges when you're talking about defamation or libel. The biggest
challenge is that often this involves public figures, and public
figure liability requires showing that there was a sort of
malicious intent.

Speaker 3 (09:17):
By the party.

Speaker 2 (09:19):
And the question again, the same question comes up here
that comes up in the generation: who is the party
at fault here? Is it the person who typed the
prompt who said, like, tell me everything you know about
you know X public figure? Or is it the model generation?
And then on top of that, unless it's made public,

(09:39):
this type of generation by itself is not, I don't
think, usually subject to defamation law. So the question
is, if I type a prompt into
ChatGPT and I get inaccurate information back, but then
I publish it on my social media website or in

(09:59):
a newspaper article or something, it's the publication of that
that is the real problem. It's not the fact that
ChatGPT came up with it, because nobody sees
that except the person who asks for it. And so
I do think that these defamation cases against the models
are going to be complicated by that factor. I think
that it's difficult to say that the model is at

(10:20):
fault for the distribution of a falsehood when the model
just generates content and then the person has to make
a choice to then.

Speaker 3 (10:29):
Put it out into the world. And so I think
that's a complicating factor. And I think that.

Speaker 2 (10:34):
Means while the models are trying very hard to be
accurate on this sort of stuff, it's still
really on the users to verify that the content that
they're posting, that they're taking from that and using in
the real world is accurate. And I think that's where
the liability probably should sit.

Speaker 1 (10:51):
Yeah, tied into that. We have some very interesting
cases on free speech and AI. We have
some concerning moves by politicians, by lawmakers. I'm
thinking of a guy who would like to be president
of the United States in twenty twenty eight. There's no

(11:11):
doubt about that. Gavin Newsom in California and the battles
that have gone on there legally speaking in terms of
content that is driven for political advertising political parody. Really,
that's the interesting part about this, the brave new world

(11:36):
of political advertising or political parody where, for instance, you
have a famous AI video that came out last year
that had Kamala Harris, then the Vice President of the
United States and the Democrats' candidate

(11:56):
for president. The AI technology made her say some things
that she certainly did not say. It was very amusing
to about half of the country. Wasn't so amusing to
the other half. So Gavin Newsom and crew in California
took that and some other instances and used that as

Speaker 3 (12:20):
Kind of a red flag law, or a red flag.

Speaker 1 (12:24):
Incident, for AI and communication, and put limits on what
you could do or penalties on what you could do
if you did this kind of thing. The courts have
entered into this case. Where does all of that stand today?

Speaker 2 (12:40):
So yeah, So California and some other states have looked
at how to govern the use of AI generated content
in advertising, in political advertising in particular, and they faced
some real constitutional challenges here. Political speech is right at the
core of First Amendment rights. Lying in political speech is

(13:04):
not policed by the courts, because they often say, essentially,
no. You might be able to bring a defamation case, possibly,
but again we're talking about public figures, and even in
the political context that gets even harder. Courts tend to
really say, like, if you're going to do political ads,

(13:25):
it's going to be up to the voters to decide
who's telling the truth. And AI doesn't really change
that that much. It's always been easy to create false
content. Now, what you can do maybe with AI
is the types of deep fakes that you're talking about,
the types of putting words in somebody's mouth in a

(13:46):
very convincing way.

Speaker 3 (13:48):
I think deceptive content.

Speaker 2 (13:51):
Like that, it's right for, you know, competitors
to call it out. I think that companies are trying
to figure out how to balance that, but political parody
is highly protected free speech, and I think any type
of you know, government thumb on the scale about what
people can and can't say is just ripe for abuse.

(14:14):
You'll end up getting policing, you know, from one political
perspective or another. One party will bear the brunt
of this more than the other. And I just think
that it's a super risky road to walk down. The
better options here are more speech, not censoring
the ability to create content in the first place.

Speaker 3 (14:33):
And I think that some of the companies.

Speaker 2 (14:35):
Have had some early experiences in trying to shape the
content that was coming out of their models, and I
think they've dialed that back to say, like, hey, we're
going to largely try to lean on the side of
generating what the user is asking for and protect

(14:56):
speech there as long as they're not generating illegal content.
I think that's the better frame.

Speaker 3 (15:02):
I hope that more and more companies move in that direction.

Speaker 4 (15:08):
Did a single company save the stock market from crashing
into a recession? The Watchdog on Wall Street podcast with
Chris Markowski. Every day, Chris helps unpack the connection between
politics and the economy and how it affects your wallet.
Tech powerhouse Nvidia's earnings report did not disappoint. But
what does that tell you about the value of AI?
This cannot save the market forever. Whether it's happening in

(15:30):
DC or down on Wall Street, it's affecting you financially.

Speaker 3 (15:32):
Be informed.

Speaker 4 (15:33):
Check out the Watchdog on Wall Street podcast with
Chris Markowski on Apple, Spotify or wherever you get your podcasts.

Speaker 1 (15:43):
Well, let's face it, if the courts really did punish
politicians for lying, our correctional system would be just overcrowded.

Speaker 2 (15:55):
Yeah, political speech is full of half-truths and misframings,
and courts just don't want to get into that. I
think that's just fraught territory, and I
think it's right to leave it to the people to
make the decisions about who they trust.

Speaker 1 (16:13):
Yeah, that is America, and that's the bottom line. The information,
of course, it's a different age for all of that.
But we've had the same issues and the same problems
over the two hundred and fifty years of this one.
Let's turn our attention to the emotional aspects of AI.

(16:34):
Harvard Business School has an interesting piece up that came out recently,
"Feeling lonely? An attentive listener is an AI prompt away,"
and it delves into the brave new world of companionship
with AI. There are well, I think what we're finding out, Neil,

(16:55):
over the last year or so is that there are
a lot of lonely people out there, and they have
turned to AI to solve that loneliness problem. Where does all
of that stand today, and where do you think that's
all going?

Speaker 2 (17:13):
Yeah, I mean, I think you're totally right. I mean,
we do have an epidemic of sort of people isolating
themselves from others, and I think this is an outgrowth of,
you know, some really bad policies that we had around
the COVID pandemic, as well as just some general fracturing
of American, you know, socialization institutions that we've

(17:40):
historically relied on to bring us together with people.

Speaker 3 (17:43):
And so I.

Speaker 2 (17:44):
Think that trying to fill this gap with AI chatbots
is at best a sort of temporary measure.

Speaker 3 (17:55):
I think that.

Speaker 2 (17:57):
I don't know how to exactly measure this. You know,
these chatbots are general purpose. Most of them are general purpose.
They're not aimed at this sort of function. Now, there
are some companies who are offering specifically this type of function,
but most of the chatbots that people are using are
general purpose. People are using them for a wide range
of things, from academic research to generating funny videos, and

(18:19):
some people use them occasionally to talk to about like
personal problems or you know, relationship problems or things like that.

Speaker 3 (18:28):
I think some of that.

Speaker 2 (18:29):
Can be very useful and very helpful, but certainly would
want to keep an eye out on people replacing time
with other humans with these these chatbots. It's just it's
just not it's not the same obviously, and it doesn't
create the same types of deep connections with community that

(18:51):
I think are essential to human flourishing.

Speaker 3 (18:54):
And so.

Speaker 2 (18:56):
My hope would be that these systems, as they build out,
are aimed at improving people's ability to engage with other people,
because there certainly are people who are not as practiced
at that, who maybe spent, you know, two years, especially
when we get into the kids sector, maybe
two years doing Zoom school, and need to get

(19:17):
back into the swing of, you know, talking to other people.
And I think these tools can help give tips
on that sort of thing.

Speaker 3 (19:26):
But I hope they don't.

Speaker 2 (19:27):
I hope people don't rely on them as a substitute
for reaching out, being brave, talking to people they haven't
talked to before, and getting to know people in their community.

Speaker 1 (19:36):
We certainly have learned over the last few years just
how dangerous, how much peril lockdown policies have put our
children in especially, but society in general, when it comes
to keeping up with communication and relationships and all of
those sorts of things. You know, Neil, though, on this topic,

(19:58):
they always say the heart wants what the heart wants,
the market wants what the market wants. And there are
some strange areas in the marketplace for AI. Not to
say that, you know, there isn't a market for it,
but the question maybe should there be a market for it.
One of those areas is connecting with the dead, and

(20:22):
this has become an interesting subject of late. But the
AI products that offer people the ability to chat with
or hear the voice again of a loved one who's
passed away, what about all of that?

Speaker 2 (20:41):
I mean, grief is a crazy thing, right,
and so I think that people try to deal with
it in a lot of different ways, some of it
healthy and some of it not healthy.

Speaker 3 (20:50):
What I would be worried about here would be.

Speaker 2 (20:52):
Apps that are, you know, claiming some sort of therapeutic
effect, falsely claiming that, you know, they're going to provide
some sort of resolution. I think people, like I said,
people might use these as tools to engage with maybe
the content that somebody left behind. And I think that

(21:14):
could be interesting if done in a healthy way. Right,
if you are able to, like, air quote,
"talk to," you know, all the letters that
your father or your grandfather left behind, that might be
a really engaging way to learn more about their life
and to think about like what they meant to you.

(21:36):
But I do think there are healthy ways to do that,
and there are unhealthy ways to do that. And I
would be worried about, you know, apps that are claiming
to provide some sort of therapeutic effect when they're not
doing that, rather than maybe ones that are talking about
how they can help you understand you know, your past

(21:57):
and understand your connection to your family better.

Speaker 3 (22:00):
Some of that could be very interesting and positive.

Speaker 1 (22:02):
I can't wait to get an AI letter, the kinds
of letters I get from some members of family and
particularly on the older spectrum of family and friends, the
old merry medical Christmas letters where they tell me about
all of their health ailments in the space of two

(22:23):
very long pages.

Speaker 3 (22:25):
I suspect.

Speaker 2 (22:27):
If you get Christmas cards, you know, this season,
there's a good chance a chunk of them were
written with help from AI. But can I give
you one example. My in laws recently came across a
handwritten letter from you know, my father in law's dad.

(22:47):
It was a terrific piece consoling
somebody else on the loss of their spouse. But
some of it was actually quite difficult to read because
it was handwritten. I used, you know, I scanned it
into chat GPT really quickly and asked it if it
could create a transcript of it. And that was extremely helpful.

(23:08):
Actually, it decrypted some of the words that
we were really struggling with, and we were
able to enjoy this letter in a way that
we wouldn't have otherwise; we would have struggled more to figure
out what the handwriting said had we not
used that.

Speaker 3 (23:22):
So there are really positive uses to this technology.

Speaker 2 (23:26):
But, you know, people shouldn't be substituting moving through
their grief process by, you know, pretending that they could
still talk to their past relatives.

Speaker 1 (23:36):
I think, yeah, I don't doubt that that
kind of positive application and the implications of that. You know,
you talk about, let's face it, cursive writing as a
lost art in America anyway, But there's some beautiful cursive writing,
uh in you know the history of family letters and
those sorts of things. But I think about those, you know,

(24:00):
as a history buff those important historical documents, you know,
the letters from Lincoln that I can't always make out, you know.
I'm looking at the penmanship, and he's got pretty good penmanship,
but there are areas where I can't read it. I
think that's interesting. That's a very interesting application. We're going
to talk more about some of you know, those very

(24:21):
useful applications for AI coming up. Our guest today in
this edition of The Federalist Radio Hour Neil Chilson, former
FTC Chief Technologist and currently head of AI policy with
the Abundance Institute. I want to get to the jobs
question though, because that is a huge concern, a growing

(24:42):
concern for a lot of Americans. What is fact, what
is fiction, about what AI is doing and is about
to do to the job market.

Speaker 2 (24:53):
Well, sorting out the facts can be challenging when it comes
to the effects of new technology on jobs. What we
do know is that companies are spending a bunch of
time on trying to figure.

Speaker 3 (25:07):
Out how to use AI, these new AI.

Speaker 2 (25:10):
Tools in their systems, but the actual implementation is
still in the very early stages for almost all companies,
and so while they're spending time and money on it,
figuring out how to use it in their environments is
less clear. I do think that there are lots of

(25:31):
individuals who are figuring out how to use it, and
that is moving faster because they can iterate within their company.
They can try lots of different things. They can try
to see, hey, how does this help me write emails?
How does it help me draft? How does it help
me brainstorm? How does it help me take notes?

Speaker 3 (25:48):
But we're still not.

Speaker 2 (25:49):
Seeing like a huge productivity boost yet that would suggest
that this is sort of replacing a bunch of time
that people are spent, or that it's enhancing and allowing
them to move into other areas.

Speaker 3 (26:03):
Not yet.

Speaker 2 (26:04):
I think some of that's just because we're still at
the very early stages of this stuff working its way
into the job sector. I think, you know, we've heard
some talk about how, you know, early career people are
struggling to get jobs, and the finger is being pointed
at AI. I think that it's more about uncertainty

(26:30):
in the economy. And some of that uncertainty, for sure,
is being driven by not knowing what jobs are going
to be affected by AI in the future. And so
I don't want to say that it's not about AI
at all, but there are lots of other factors in
the economy that suggest uncertainty, and so I think teasing
that out is very complicated. We'll have to see a

(26:50):
little bit more. One other area of research has been
about the types of tasks that these large language models
are good at and where they're not good.

Speaker 3 (26:58):
And what we know right now is that for.

Speaker 2 (27:00):
Certain types of jobs, AI has this effect of leveling
up quickly people who are earlier in their careers or less experienced, and
so in like a call center job, for example, where
you're offering, you know, troubleshooting for customers over and
over and over, these tools can really make somebody who

(27:21):
is new to this job perform at a very much
higher level, much more quickly. But it doesn't help the
people at the top of that experience curve very much
in those types of jobs. In other types of jobs,
say where you're running a scientific laboratory and you're using

(27:41):
these types of models to help you brainstorm. There, it
seems like it makes the highest performers perform even better,
and so the distribution of effect is different. So there
the highest performers get a huge boost and the lower
performers don't, because the highest performers can tell, like sort
of intuitively, which threads of this brainstorming process make some

(28:04):
sense and which ones don't, and so they get a
productivity boost and the lower skilled people don't. And so
I think it really is job dependent, and that's going to
be the case, I think, for general purpose technologies. Overall,
we don't know all the applications of AI. We don't
know exactly how it's going to help, but there's lots

(28:24):
of different ways that people are figuring it out.

Speaker 3 (28:26):
I actually saw the demo the other.

Speaker 2 (28:28):
Day of how people are using AI and what they
call augmented reality headsets on construction sites to be able
to much more quickly and more safely figure out where
they should put pieces, what's the next step that they
need to take. And you can help people coordinate in

(28:48):
a much more safe way using both AI and this
VR together. But this is all still very early days.
I don't think we really know the overall job impacts
of this technology. What we do know is that intelligence,
artificial intelligence is valuable because intelligence is valuable, and so
to the extent that we can use AI to amplify

(29:12):
our intellectual endeavors the way that we used, you know,
the steam engine and the combustion engine to amplify our
physical capabilities, there is huge potential
here for a lot of productivity, and that means
change in the types of jobs.

Speaker 3 (29:30):
But we don't.

Speaker 2 (29:32):
Really know how fast or how that's going to work
out yet, so there's some uncertainty for sure.

Speaker 1 (29:37):
You note that AI can sharpen skills of individuals. Can
it steel the courage of members of Congress? Is that
yet part of the technology suite?

Speaker 2 (29:48):
Wow, I hadn't thought of that as an application.
Somebody should train an app to do that, you know,
train a model to do that. The question would be
could we then get members of Congress to use it.
I did hear that Josh Hawley recently used ChatGPT
to explore some sixteen hundreds Puritan history and was quite

(30:11):
impressed with the response that he got back. But I
think that was a very early use for him. So
I hope he digs in, and I hope
members of Congress do as well. I think understanding how
this technology works and doesn't work is really valuable, and
it's not the type of technology that sits off
somewhere else and you would have to like plan a
trip to go do it. Anybody can try it out,

(30:32):
see how it works, and I think that that experience
is worth doing just to understand, you know, what exactly
we're dealing with here.

Speaker 1 (30:45):
Well, I mean, you know, as a computer programmer,
you know the model here, I guess the mantra more so,
and that is garbage in, garbage out. It's what you
put into the system, what you can expect to get
out of it. And I think that pretty much describes
the sixteen nineteen Project, speaking of Puritan.

Speaker 3 (31:06):
Travel, all of.

Speaker 1 (31:07):
That sort of thing we've talked about. You know, some
of the concerns obviously in this emerging technology that's been
emerging by the way for the last seventy years, as
you note, But there are some really powerful applications. I
think about the healthcare arena, what about AI and healthcare?

(31:29):
What about you know, what we're really seeing in terms
of AI really driving positive change in different facets of
our lives.

Speaker 2 (31:40):
So I think the healthcare arena is a great example.
The way that these tools work is they take large
amounts of data that, as humans, we can't see
complicated patterns in, and they help expose those patterns. And
healthcare is full of this type of data where we
have a lot of data about you know, we might

(32:00):
have millions of CT scans of breast cancer, for example,
but we still have a very manual process for identifying that. Well,
these tools can do that type of analysis faster
and more accurately, and often they can identify, you know,
risk factors earlier than doctors could, in particular in the

(32:22):
breast cancer context. That's just one area. One of the
other big applications possible here is, you know, we
treat and we research healthcare in this country and around
the world in a very generic way.
We sort of treat people as like an average human.
But the truth is there's so much variation in the

(32:44):
human body and in the human health health system. They're
there that being able to diagnose at a much more
personalized level what is going on in your body is
something that AI is getting better and better at, and
I think that raises just huge potential benefits for

(33:06):
customized medicine that is directed exactly at the problems that
you have and the cluster of problems that you or
your family member might have that won't look like the
vast majority of.

Speaker 3 (33:17):
Issues that other people have.

Speaker 2 (33:18):
And so we've already seen that in what are called
like orphan diseases.

Speaker 3 (33:24):
These are diseases.

Speaker 2 (33:25):
That might affect, you know, a thousand people across the
world at any time, and there are millions of people
who are suffering from these types of diseases that are
so small and so targeted that it's very hard just
from a business perspective or an economic incentive perspective, to
create treatments for them. If you're only going to solve

(33:46):
problems for a thousand people, it's hard to know. But
what we're learning and there's some great research. I believe
it's at the University of Washington that is taking existing drugs,
existing treatments and identifying where those treatments could apply to
some of these orphan diseases. And they're really

(34:06):
bringing real benefit, like life-saving treatments to people
by applying existing approved drugs that otherwise wouldn't have been
used in that context to these orphan diseases. And that's
the sort of thing that you could really only do
with AI, because you can enter in
all this data and you can get these patterns out

(34:28):
that identify, hey, maybe we should try this technique over
on this thing where we've never thought to try it before.
And so I'm super excited about those types of treatments.
I think that's going to make our lives healthier and longer.
And it's one of the areas where we need to
get the policy right, because the way we treat healthcare
data in this country makes it very hard to

(34:50):
do some of these types of things, and so we
need to get policy right. But if we get that right,
the health benefits here are really really exciting.

Speaker 1 (34:58):
Well, speaking of policy again, you know, the big
beautiful bill has some interesting things in it about AI.
It aims in many ways to stem the wave of
state AI laws, you know, that create this patchwork

(35:19):
of you know, different boundary lines for AI, which you
know is problematic to say the least. I mean, there
has to be clearly some regulation in this area. It's
just a question of what that is. Now, that did not make it
through, you know, the process, right? Where does

(35:42):
all of that stand now? Where do you see that
going? Because, you know, it's
understandable that AI companies have some concerns about trying
to navigate so many different laws in this arena.

Speaker 3 (35:58):
Yeah, there's two real concerns here. One is the one
that you said, which is.

Speaker 2 (36:02):
A sort of patchwork of you know, there were over
one thousand state laws that related to AI that were
introduced in twenty twenty five so far, and you know,
most of those were not problematic, but you still have
to pay attention to them a little bit. Some of
them were deeply problematic and didn't pass. Some of them
were deeply problematic and did pass. And so I think

(36:24):
California passed sixteen different AI laws this session.

Speaker 3 (36:27):
And so one of the.

Speaker 2 (36:30):
Concerns is just this patchwork compliance, right, Like, how do
I know what? What if I offer this product and
there's somebody who used it in Missouri and somebody who
used it in California, am I going to have to
comply with two different sets of laws? How do I
do that? That's a real problem. That's especially a
problem for startups. Bigger companies can kind of afford to
pay like a huge legal shop to try to figure

(36:52):
out how to do this, but smaller companies are just
going to struggle to do that. And so that's a
problem for competition in this space, which I think is
an important dimension. The second problem is what I call
extraterritoriality. And this is an old problem in our
federalist system. In fact, it's what drove the founders

(37:13):
to move away from the Articles of Confederation and to
move towards the Constitution to have a more centralized government
with limited powers, and then you know, the states to
have you know, other powers, the police powers primarily. And
so what that means is if California passes a law,
say dictating what you know political advertising can look like,

(37:38):
it embeds you know the values of the California legislature.
But because California is such a large market, and because
companies are going to want to sell into that market,
that means that you know, probably the people in Oklahoma
and Missouri and Iowa and you know, North Carolina, they're

(37:59):
probably going to be operating within the same system that
California has dictated. And I think that that is a
real problem, especially when we get into some more of
the political and speech.

Speaker 3 (38:09):
Based concerns in this area.

Speaker 2 (38:11):
We don't want California to be setting the national standard
for what AI looks like. That's Congress's job. This
is a national technology, and so, you know, with the
big beautiful Bill there was an
opportunity there for Congress to do something. They tried, couldn't

(38:32):
get it across the line on that particular one. But
there's building interest in this having a federal approach to
this technology, which is a national technology that has both
like national economic importance but also just national security importance
as well when we talk about, you know, competing with
China in this area, and so I think there is

(38:52):
building interest in doing something at the federal level. We
just had another opportunity. There was an exploration of whether
or not we could get this into you know, Congress
was going to put this into something like the
NDAA, which is the big appropriations bill for national defense.
Ultimately that didn't happen. It wasn't the right vehicle. It's

(39:17):
a consensus document. It was just challenging to get bipartisan
support there. But I think Congress continues to be interested
in this. I think people recognize that national technology should
be you know, regulated at the national level, and so
I think Congress is going to continue to explore that.
The White House has been pushing pretty hard on this.
President Trump has come out vocally and said that we

(39:39):
need a federal framework in this space. We cannot subject
this national technology to, you know, a fifty-state patchwork
of laws, and we certainly can't let California dictate what
our technology looks like when we're competing with China.

Speaker 1 (39:54):
So well, just ask the hog producers of
Iowa about the impact California laws can have on their marketplace,
you know.

Speaker 2 (40:07):
Or anybody who's bought a pickup truck, right, absolutely, and
had to meet, you know, California CAFE-like fuel efficiency standards.

Speaker 3 (40:14):
Yeah, yeah, exactly.

Speaker 1 (40:16):
All right, Well, we're quickly running out of time. I
just have a couple of questions left. The first is
the resources involved in this. Obviously, AI, this technology requires
a great deal of a particular natural resource. Where do
you see all of that going ahead?

Speaker 2 (40:36):
Yeah, so AI requires a lot of energy. The current
models and the current chips that are used to run
them require energy. Unfortunately, in this
country we have been operating since the nineteen seventies under
a sort of scarcity mindset when it comes to energy,

(40:56):
This idea that we need to conserve and recycle and
all of these things. Efficiency is good, recycling is good.
Using things the best we can is good.

Speaker 3 (41:06):
But we have.

Speaker 2 (41:07):
A wealth of resources in this country. We should be
using them. Prosperity and energy intensity are directly related. It's
hard to be a wealthy country without using a lot
of energy, and we shouldn't think of that as a
bad thing. We should be trying to build more energy
in this country. And I think it's finally AI has

(41:28):
sort of woken people up to the fact that, hey,
this scarcity mindset needs to go away. We need an
abundance mindset when it comes to energy. Building more energy
is a good thing, and providing more energy is good
and that's the sort of thing that over time will
drive down prices and will drive up new solutions, and

(41:49):
so these data centers are.

Speaker 3 (41:50):
Sort of the spark for that. But I think we could.

Speaker 2 (41:52):
All benefit from having more energy abundance in this country.
And so the balance is going to be like, where
we build this, how can we build it, how do
we deal with the concerns of you know, local communities
about energy prices. All of that means that we should
aim towards, you know, not subsidizing these types of projects,

(42:13):
but finding a way to enable them to help give
back both on the energy production side, but also on
the, you know, amazing tools that they're building.
We talk about data centers, I have to think we
should just say supercomputers, like people should be like excited
to have a new supercomputer in their backyard. Well,

(42:34):
maybe that just makes me super nerdy. I am excited
by such things.

Speaker 1 (42:38):
Not everybody, not everybody is as excited as you
are about this.

Speaker 2 (42:42):
But I think you're right, and I
need to step out of my own bubble sometimes. And
there certainly are trade-offs that come from this,
But I think overall, the benefits are enormous. There are
communities that are very interested in the level of investment
that come with these types of data centers. You know,

(43:04):
Texas is building a bunch of them, Louisiana is
building some massive ones. These are creating you know, hundreds
of thousands of construction jobs, and then, you know, the
ongoing maintenance of the data centers
is less job intensive, but, you know,
creates prosperity and creates a hub for technology in the

(43:25):
in the nearby area. And so I think there
are lots of benefits. I think we need to keep
those in mind, and we need to deal with some
of the myths. One of the biggest myths is around
this water use issue, which is just totally made up.

Speaker 3 (43:37):
I mean, there is.

Speaker 2 (43:38):
Data centers use less water than the typical, like,
mid-size brewery. And, like, a
small manufacturer often uses more water than a data center.

Speaker 3 (43:51):
They are energy intensive.

Speaker 2 (43:52):
But water is not really an issue in this space,
and I don't know why that's become such a
story, but I think people just feel, you know,
they don't want their bodily fluids polluted. We learned that
from, you know, what
is that movie? I'm blanking on it right now.
Soylent Green?

Speaker 3 (44:12):
That wasn't Soylent Green, it was... I'm blanking. But, you know, people.

Speaker 1 (44:18):
I'm trying to remember a movie with bodily fluids as
its core principle. Actually, I'm sure... Strangelove, Doctor Strangelove.
Now you got it, yes, yeah.

Speaker 3 (44:30):
Doctor Strangelove.

Speaker 2 (44:31):
And so I don't know why the water talking point
has been so viral, but it's just not true.

Speaker 1 (44:37):
It really has been. I mean, that's
really what I was getting at; that's
been the topic of conversation. Of course, it's been
the topic of a conversation that's in no small part
driven by environmentalists and extreme environmentalists, of course. And they
have certainly never led us astray before, right?

Speaker 2 (44:58):
And honestly, it's such a
self-defeating mindset that the environmentalists have been
the primary driver of, that mindset I said before, of
energy scarcity, that basically humans using the
resources that God put here on our planet is

(45:18):
a bad thing, more or less. And so I think
they certainly have led us astray before. I think on
this water thing, they very much are leading us astray.
You know, energy use is a
challenge, and we need to make sure that we
that balance right. And to me, the right solution there
is to build more. We have the ability, we have

(45:40):
the technical capability, we have the deep resources in this
country to do it.

Speaker 3 (45:43):
We need to do it, or you know.

Speaker 2 (45:46):
These data centers will be built in China and
in you know, Saudi Arabia and in places where they'll
be outside of the sort of US cultural influence, and
then they won't be serving our national interest. And so
I worry about that as a challenge. I think it's
what we can tackle and we should.

Speaker 1 (46:07):
Well. If my AOC clock is right, we have about
two and a half years of existence left. If I'm
not mistaken, it was twelve years at one
point. I haven't checked in. I probably should. Final question for you,
and this is it. You know, I've made no secret
to you as we've talked about these issues in the past.

(46:27):
AI technology scares the hell out of me. Yeah, it's
probably because I'm old enough to have lived in
a time where I didn't envision the technology could possibly
get beyond the Atari twenty six hundred and Frogger. So,

(46:48):
you know, maybe I'm a Luddite on that front. But
here's the question, plain and simple. Will AI eventually become
our overlords and enslave us and our progeny?

Speaker 2 (47:03):
No, no, no, these are advanced computers. They
don't have motives. They don't have initiative.

Speaker 3 (47:13):
What they have right now.

Speaker 2 (47:14):
What these systems have right now is they have the
ability to identify patterns in a wide range of data
and collate it in a response to a query, and
it turns out we
can do really amazing things with that. But
they do not have initiative or drive. I worry

(47:37):
more about how humans will misuse them than I do
about whether or not the AIs will somehow become independent.
Right now, there is not a clear pathway to that
sort of like autonomy, and so.

Speaker 3 (47:51):
I don't worry about that at all. I worry.

Speaker 2 (47:53):
I worry about not being able to take advantage of
all the huge benefits that these technologies bring because
we have people who are too worried about, you

Speaker 3 (48:03):
know, science fiction scenarios.

Speaker 1 (48:05):
So well it is. It is a brave new world
in many facets. It's a very interesting new world, and
it's a new world filled with all kinds of possibilities,
as you say, many of them extremely positive. I'm not,
you know, too much of a Luddite to understand that,
but we need to stop every once in a while

(48:26):
and discuss the impacts thus far and get a sense
of where we're going. And you helped us do exactly that.

Speaker 2 (48:33):
I appreciate it absolutely, and I should point out that,
you know, it's not just the Luddites that like Frogger.
My six year old daughter plays Frogger all the time,
so it has a long life. Some of these things
stick with us. Great technology can bring joy to people,
you know, for decades. So that's what I'm hoping AI does.

Speaker 1 (48:52):
Introduce her to Dig Dug and you may never see her again.

Speaker 3 (48:56):
That might be right.

Speaker 1 (48:58):
Thanks to my guest today, Neil Chilson, head of AI
policy with the Abundance Institute, you've been listening to another
edition of the Federalist Radio Hour. I'm Matt Kittle, Senior
Elections Correspondent at the Federalist. We'll be back soon with more.
Until then, stay lovers of freedom and anxious for the fray.

Speaker 2 (49:22):
I heard the faint voice of reason, and then it
faded away.