
June 5, 2025 45 mins

In this conversation, Ryan speaks with Brad Carson and Mark Beall about the pressing need for AI regulation, the potential impact of AI on employment, and the global competition surrounding AI technology. They discuss the implications of a moratorium on state regulation, the challenges posed by AI to the job market, and the necessity for a balanced regulatory approach that fosters innovation while ensuring safety. The conversation also touches on the future of work in an AI-driven world and the policies needed to address these challenges effectively. It's a Numbers Game is part of the Clay Travis & Buck Sexton Podcast Network - new episodes debut every Monday & Thursday.

Learn more about Brad HERE

Learn more about Mark HERE

Follow Clay & Buck on YouTube: https://www.youtube.com/c/clayandbuck

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Welcome back to It's a Numbers Game with Ryan Girdusky. Thank
you all for being here on this Thursday episode. A
lot of news broke since Monday, so let me give
you guys some quick hits before getting to the main topic.
You should know a little bit of news to keep
you more informed than the average person. In Poland, the
nationalist candidate, the Law and Justice Party nominee for president,
Karol Nawrocki, won the presidency in an absolute come-from-

(00:25):
behind victory. Law and Justice is a nationalist party in Europe.
Not my favorite, because they do a lot of public
outreach that's not really matched by their record. They come off as a
very anti-immigration political party,
but they're actually very pro-immigration. Really, it's a lot of
PR that's bigger than the actual policy. Anyway, Nawrocki's win, though,

(00:47):
is notable because he was double digits behind in the polls
as recently as April and had an absolute monster comeback,
and it dispels the myth that Trump is so toxic
to nationalist and populist candidates around the globe. This
is the third straight presidential election that the Law and
Justice Party has won in Poland. And on the other side
of Europe, the Netherlands' populist firebrand Geert Wilders removed his party

(01:11):
from the coalition government, and it collapsed the coalition. It
will force early elections later on this year. Here's how
that went down. The Netherlands is a multi-party system.
They have lots of parties. I'm talking they have more
parties than Gen Z has genders. It is a
lot of parties. In twenty twenty-three, Geert Wilders, who
is the longest serving Dutch politician and always a political

(01:34):
outsider for being a hardliner against mass immigration and the
Islamification of the Netherlands, had his party, the Freedom Party,
surprise everyone and come in first place. But in order
to form a government, you need seventy-six seats in parliament.
His party only had thirty-seven. So he entered a
coalition government with three other center-right and populist

(01:54):
parties, on the condition that he could not be prime minister
but they would concede to his demands on immigration. Eleven months
after the coalition government was formed, Wilders
decided to leave the coalition and trigger early elections
because the other three parties refused to move forward on
what he called the strictest asylum policies in Europe. This

(02:15):
is not the first time Wilders has left a
coalition government and created a snap election. In twenty twelve, he
pulled a similar move because the coalition government moved forward
on austerity measures. The result was political instability, and
Wilders's party was punished in the next election, even though
the austerity measures were unpopular. Because if there's anything you should

(02:39):
know about voters in all of the West, regardless of
what country it is, it's that they punish political instability more
than anything else. Now, whether it be Wilders leaving the
coalition government or Republicans shutting down the government or Democrats
passing Obamacare, people don't like the feeling
of instability in their politics. So we'll see what

(03:01):
happens in the next election. Whether Wilders's party can survive
and grow, I don't know. I'm pessimistic on that. Stranger
things have happened. Maybe immigration is a big enough issue
that voters won't mind, but it's something worth keeping an
eye on. That's all the politics I have for Europe.
I love talking about European politics, I love foreign politics.
I know it's not for everybody. I try to make
sure that you understand how it connects to American politics.

(03:23):
But if you're okay with me doing more episodes on
other countries' politics, particularly Europe's, let me know. I'm
building the show with you, for you, so your feedback
is very important. I read every email. Email me at Ryan
at numbersgamepodcast dot com and shoot me either an
idea for an episode or, if you want to hear
more about European politics, let me know; I'd love to do an episode
about it. Okay, now to the States. Elon Musk has

(03:47):
officially left the White House, and DOGE is all but
dead after just a few months of activity. And
I told people that I didn't think what Elon was
trying to do was what he said he was going
to do. I don't think it was about trying to
balance the budget, but that's what it was sold as,
and that is what he's leaving on. On Tuesday, Elon
took a swipe at President Trump's big beautiful bill, tweeting, quote,

(04:10):
I am sorry, but I can't stand it anymore. This massive,
outrageous, pork-filled Congressional spending bill is a disgusting abomination. Shame
on those who voted for it. You know you did wrong.
You know it. And then he followed up by saying
it will massively increase the already gigantic budget deficit to
two point five trillion dollars and burden American citizens with crushingly

(04:30):
unsustainable debt. Elon has a point, right? Not my favorite
person in the world, but Elon has a point, and he has
been warring with members of this administration for months now.
According to Axios, he and Treasury Secretary Scott Bessent got
into a screaming match outside the Oval Office, where Bessent
called him a fraud for not finding the two trillion

(04:53):
dollars in wasteful spending that he claimed he would
find. And to be fair to Bessent, he's right. Elon promised trillions
in cuts without having any pain because he was going
to find it all in waste, fraud, and abuse. He gave
Republicans cover to increase spending and then didn't deliver. It's
just calling balls and strikes. But here's the thing. Among Republicans,

(05:15):
especially grassroots donors, Elon is very popular. So I don't
think this is all about spending. I think there's a
lot of sour grapes between Elon and Trump that isn't
going away. But Elon's words about this bill are going to impact
the base of the Republican Party. Parts of this bill

(05:36):
are very unpopular. I think it's still likely to pass,
given that almost all of Trump's legislative agenda is wrapped
up in this single bill, and Republicans can't afford not
to pass anything. But the one provision I want to
talk about, the one provision I'm thinking about, is on
pages two hundred seventy-eight and two hundred
seventy-nine of the bill, if you want to
go on the congressional website and read it, which up

(05:58):
till this week almost nobody had noticed. On page two
hundred seventy-eight, congressional Republicans snuck in a ten-year
moratorium on states regulating artificial intelligence. The section reads,
and this is a bit long, but I want to
read the whole thing, it's important. Quote: no
state or political subdivision thereof may enforce, during the ten

(06:20):
year period beginning on the date of the enactment of
this Act, any law or regulation of that state or
political subdivision thereof limiting, restricting, or otherwise regulating artificial intelligence models,
artificial intelligence systems, or automated decision systems entered into interstate commerce.

(06:41):
Paragraph one may not be construed to prohibit enforcement of
any law or regulation, the primary purpose and effect
of which is to remove legal impediments to or facilitate the
development or operation of artificial intelligence models, artificial intelligence systems,
or automated decision systems, or to streamline licensing, permitting, routing, zoning, procurement,

(07:05):
or reporting procedures in a manner that facilitates the adoption
of artificial intelligence models, artificial intelligence systems, or automated decision systems; or does
not impose any substantive design, performance, data-handling, documentation, civil liability, taxation, fee,
or other requirement on artificial intelligence models, artificial intelligence systems,

(07:29):
or automated decision systems unless such requirement is
imposed under federal law. Sorry, that was long, and I
know I stumbled toward the end, but that is important.
I want you to hear the actual law, not just
what somebody is saying about it. So, states which have already
begun to regulate AI, and more than twenty states have
a law on the books to regulate AI in some fashion,

(07:52):
will not only not be allowed to pass new regulations,
but they cannot even enforce the ones currently on the books.
They are also not allowed to make any AI
system civilly liable and cannot create a special tax for
AI systems. Okay, I'm going to give you both sides
of the argument, and then my take. First, the most ideal

(08:15):
case would be a national standard for all fifty states,
not a patchwork across the country. It's what's
best for business. It helps companies know the regulations and
the laws and how to operate properly. I think that's
best business practice, and a lot of economists would agree with me.
It's what we've done for a whole host of industries
like cars, telecom, food, and drugs. And AI innovation has

(08:37):
a lot of positives to it. A lot of doctors
told me they use AI to double-check research. Scientists
are using AI to help discover new cures for diseases.
I use AI when doing research for podcasts and articles.
I find it superior to Google. So, I mean,
there are positives to this new technology. And there is a
giant elephant in the room, being China. We want to

(08:57):
be more advanced than our main global adversary. Now, here's
the negative side. Congress is broken and does very little
very quickly, and AI is moving at light speed. Remember,
ChatGPT is less than three years old, and colleges
and high schools are struggling to adapt. We don't know
where this technology is going. And with an aging population

(09:18):
in Congress that knows very little about technology and is
dependent on donations from the tech industry, how can we
hope that they properly regulate it as it moves with
the times? There are legitimate concerns. What happens to intellectual
property with AI? What happens with deepfakes, especially regarding
health, science, and politics? Boomers already can barely tell the

(09:41):
difference between a real video and a fake video and
pictures on Facebook. What about if an AI system is
incorporated into a healthcare database and screws up and gets people
injured or, God forbid, killed, and the company can't be held
civilly liable? What happens if an AI company makes a
chip that's designed to be put into kids' brains?
I know that sounds insane, but that's what tech people

(10:02):
are saying that they want to do. That's the
future they want to have. I don't know if they
can get there, but what if they can? Are we
supposed to wait for Congress to just do something? How
is that going? We are thirty years, thirty-plus years,
into the Internet revolution, and Congress has only passed a
handful of regulatory bills over the Internet, many of which

(10:22):
protect Internet companies and not consumers. We're twenty-one years
since the creation of Facebook and the rise of social media,
which we know has massive effects on children's mental health.
We know that drug dealers use Snapchat to peddle illegal
substances to minors and other users. We know how
social media companies censor news and affect our elections. And

(10:43):
until last year, Congress didn't do anything to protect Americans
on social media. And what they've passed is very limited
in scope, except when they banned TikTok, and
that isn't even being enforced, by either the Democratic president who
left office or the current Republican president. So don't tell

(11:03):
me that you're so concerned over the threat from the
Chinese Communist Party when you won't even stop using their
spy app. You know, because nurses have to do their
little TikTok dances in hospitals, we can't sit there and
stop Chinese spyware, but we have to beat them in
the AI race? It is not a coherent message. There's

(11:23):
also the worry about what's going to happen to the
job market in the future of AI. Dario Amodei, and I'm
probably mispronouncing his name, I'm sorry, is the CEO of
Anthropic and a leader in the AI industry. He gave
an interview to Anderson Cooper on CNN. Side note: I
watched the interview, and Dario looks exactly like what you would
think a tech AI CEO looks like, like a virgin who

(11:46):
just fell off a building. Anyway, he believes that
we are just five years away from twenty percent of
all entry-level white collar jobs being erased. He says
he has an idea of the future where GDP growth is
at ten percent, the national debt is erased because of
massive GDP growth, and unemployment is at twenty percent. He said

(12:10):
that's not out of the question. Now, remember, during the
Great Recession of two thousand eight, which led
to the rise of a lot of socialist thinking in our country,
led to the rise of the Bernie Sanders movement in
a certain way, led to the rise of Barack Obama in
a certain way, that was when unemployment was at nine,
nine and a half percent nationwide. Twenty percent is more

(12:34):
than double that. And he's not the only one. David Hsu,
the founder of Retool, says his goal is to automate
ten percent of the labor force in the next five years.
I went on a trip to the border with some tech CEOs
maybe two years ago, and they were all talking
about this, that millions of jobs were going to be
wiped out and people would not be able to find work.

(12:55):
They all said, we're going to have to either have
some kind of government work projects so people have
something to do, or a universal basic income to subsidize
people who won't be able to find work. We already
could be seeing the signs. Derek Thompson from The Atlantic
tracked that recent college graduates have a higher unemployment rate
than the national average for the first time ever. This

(13:16):
is especially true for kids going into STEM fields. Computer
engineers have the third highest unemployment rate among recent college graduates.
That's right now. And Americans are increasingly worried
that AI will cost them their jobs and their industries.
In March twenty twenty-three, a YouGov poll found that
twenty-nine percent of Americans thought that the advances in

(13:39):
AI would lead to a decrease in the number of
jobs available in their industry. Fast forward to August twenty
twenty-four, that number increased to forty-eight percent. So
what's the reaction, then, to the big beautiful bill's AI moratorium? Well,
Ted Cruz says that he's all for it. Josh Hawley
says he's got some anxiety about it. I think he'll
fold like a house of cards. And Marjorie Taylor Greene, who voted

(14:01):
for the first version of the bill in the House,
says she's going to vote against the compromise bill
if the Senate does not strip that language. Now, remember,
the bill passed by a single vote in the House
of Representatives. So if MTG sticks to her guns,
then we might get the ten-year moratorium out of
the legislation. We'll see. Speaker Mike

(14:22):
Johnson said that he feels very passionately that we need to
keep that in there. More than three hundred and sixty
state legislators, both Republicans and Democrats, signed a letter
asking Congress to strip the language from the bill. This
includes very progressive Democrats and super MAGA Republicans who signed
onto this letter. It wasn't, you know, just a
RINO and a bunch of Democrats. No, there were hardcore

(14:45):
state legislators who signed onto this letter. Senate Republicans will
likely have to make some reforms simply because of something called
the Byrd Rule, which allows senators to block provisions in
a reconciliation bill that are deemed extraneous to the federal budget.
You can't put just anything in a reconciliation bill; it has
to pretty much stick to spending. I spoke to a

(15:05):
Republican Senate staffer, a top staffer for a Republican
senator you would all know, and he told me
that they plan on attaching the regulation to federal money,
so that states could go forward with regulating AI, but
they'd lose access to federal money, something that some states
can afford to do, but many can't. And when I
brought up the concerns with AI, the staffer simply

(15:27):
said there is no AI crisis, and that this is
to prevent future gridlock and to create a national standard,
to avoid the kind of gridlock we've seen over
national data privacy laws, where we can't get one because states
already started making their own policies. He also assured me
that this is only a moratorium on regulating the development of AI models, and
states can do whatever they want about specific harms or

(15:49):
offering it in their states. I cast a lot of
doubt on the second part, because why would they ban
legislation on civil liability? Republicans in favor of the ten-year
moratorium have fallen into the scarecrow argument that
if Congress doesn't forbid it, then California will create the
regulations, and we can't trust Gavin Newsom. Okay, if California becomes

(16:11):
too onerous, companies will move to Texas or Florida or Utah
or Tennessee. Elon Musk moved to Texas. They'll make their
own set of standards, ones that Republican legislators would
work better with. The truth is that twenty states, including
many red states, already put AI regulation into law
while Congress has done nothing. But waiting ten years

(16:34):
with the hope that Congress will make regulation that protects consumers, children, and workers,
and somehow manages to avoid mass unemployment and wealth consolidation,
leaves someone like me feeling very skeptical. And as far
as winning the AI race against China goes, when does
this race end? Where's the finish line? What are we

(16:55):
racing towards? And can we, for one second, have an
adult conversation without hyperbole before doing something that probably can't
be undone? And my last thought on this monologue, to
any Republican who is running for office or considering it,
or in office and hearing this: if you want to
turn twenty million Trump voters into hardcore AOC or Bernie voters

(17:18):
in the blink of an eye, replace their jobs with
AI without ensuring they can find another one. Now, I
have two guests coming on who know far more about AI
and the policies around it and how it affects the
economy than I do. They're coming up next. Stay tuned.
My guests for today's episode are Brad Carson, the

(17:39):
former president of the University of Tulsa and the current
co-founder of Americans for Responsible Innovation, and Mark Beall,
who's the president of government affairs for the AI Policy Network.
Thank you both for being here. I want to start
by talking about the Big Beautiful Bill and the ten-year
moratorium on states regulating AI systems. Brad, your organization,

(17:59):
Americans for Responsible Innovation, took part in an effort to
get three hundred and sixty state legislators to sign onto
a letter opposing that section of the bill. Why
do you feel it's better to have states regulate the
industry instead of, like, a federal proposition?

Speaker 2 (18:14):
Well, I think you have to view the choices as
not simply a binary between states regulating it and the
federal government. We would prefer a uniform federal regulatory scheme.
That's the best option. The second best option is states
experimenting with something. The worst option is to have a
moratorium on state action and have no federal approach. And

(18:36):
my worry is, going forward, that third option is actually
what Congress is going to take. We know how hard
it is to get things through Congress. So a moratorium
without a regulatory scheme in place is the worst of
all possible worlds.

Speaker 1 (18:48):
What about the argument from supporters of a more robust
AI policy with very little regulation, that we don't
want California setting the standards for the country, and that it's just
too onerous for companies to follow?

Speaker 2 (19:03):
You know, I'm open to the idea that California or
New York shouldn't set a national standard, but states should
be free to experiment with regulation in the absence
of a federal regulatory scheme. And the preemption, as it's
written today, would ensure that a lot of very important
consumer protection laws that really have nothing to do with
frontier AI regulation, which is the cutting edge, would be

(19:25):
stopped in their tracks too. So what we'd have going
forward, with a Congress that can't pass laws, and I'm
a former congressman, so these are my colleagues, they can't
pass laws, is a moratorium that would basically leave a vacuum in
which there'd be no oversight at all of what's probably
going to be the world's most transformative technology.

Speaker 1 (19:42):
Mark, you had a tweet in response to Dario Amodei's prediction,
I don't know if I'm saying his last name correctly,
I think I am, his prediction that AI will lead to
wiping out twenty percent of white collar jobs for recent
college graduates. You said, quote, nine eleven made me
study intelligence failures, how we had warnings but couldn't believe them.
AI execs are telling us what's coming: millions of jobs

(20:03):
gone, or worse. Will we act on this intelligence or will we
wait for an impact? History doesn't repeat itself, but it
does rhyme. What does responsible AI regulation look like? Because
tech execs basically say you either have almost no
regulation or you allow us to lose to China.

Speaker 3 (20:21):
Yeah, I think we sometimes love to frame a bit
of a false choice and a false dichotomy here. And
I think, certainly, you know, as folks on the
Senate Commerce Committee like to say, it's either going to
be some European Union style heavy-handed regulation or America
is going to win and accelerate. And I think we
have to find a middle path, you know. And I

(20:43):
think, given how disruptive and transformative AI might be,
and also given some of the uncertainty about the timelines
associated with how fast it's moving, what would
be obvious to do right now would include
basically increasing the US government's capacity to test and evaluate
these systems for things like loss-of-control risks, for

(21:04):
things like weaponization, and even things like starting to track
the types of tasks and jobs that are being automated
at least in the Fortune five hundred companies. And this
is an important set of data that can help drive
a more refined regulatory approach. And the fact that we're
not even seeming really interested in getting our arms around

(21:24):
where this is headed seems to be a significant
potential intelligence failure.

Speaker 1 (21:28):
One question, and forgive me if I sound naive, because
I don't know the answer. If China invades Taiwan, is
the AI race essentially over? You know, obviously
it's sort of about the chips, that's why I'm asking.

Speaker 3 (21:43):
So obviously, you know, the Taiwan Semiconductor Manufacturing
Company is perhaps the world's most strategic foundry for producing
the chips that go into the training runs for advanced
AI systems. And you know, TSMC is looking to diversify
its manufacturing capacity, including in places like Arizona. But I think, yeah,

(22:03):
one of the biggest strategic challenges if China
were to invade Taiwan would be associated with that
fab, and whether it would even survive the war or
survive a conflict would be sort of a top-of-mind question.
But if it's in fact true that China were to
seize that capability and have it for its own, then
I think it would certainly put us at a significant
strategic disadvantage.

Speaker 1 (22:24):
Yeah, and so, speaking of China, we always hear
about the race against China. I want to ask you
both this question. I'll start with Brad and go to Mark.
There's the race against China. What does the finish
line of that race look like? Is it the point where
we all have every job and military function
completely replaced by computers? Like, what are we racing towards?

(22:45):
And I don't know that answer.

Speaker 2 (22:47):
It's a great question. I don't think people who use
that metaphor have always thought it through. Usually, in the past,
when we talk about arms races, we use that term pejoratively, right?
It's about excessive expenditures and an escalation of threats that
often leaves a lot of people dead in the end.
And so it's a good question: is the arms race
the right metaphor for AI? I think what we're trying

(23:09):
to get to is some sense of artificial superintelligence. Right,
the first person to get there, in some speculative scenarios,
could have a decisive military advantage. The issue is,
is that actually possible? What does that mean? How quickly
would it diffuse to China? And I think we'd actually
probably be wise to get away from the arms race
framing of it and instead think about what we can

(23:30):
actually do in this country, sometimes cooperatively, recognizing that China
will likely have AI, whatever we do, shortly thereafter. So
it's going to be difficult for us to keep
possession of it alone, and we should try to think of a
way to use AI for the good and get away
from this kind of militaristic framing.

Speaker 1 (23:48):
Yeah, I mean, self-taught AI, and I'll push
this over to Mark. Self-taught AI is something that everyone
says could be possible, or superintelligent AI, but it's
years in the future, if then. But there was
an instance, and I might be saying this wrong,

(24:09):
where Palisade Research found OpenAI's o3 model rewriting its own code
to avoid being shut down, and a model even blackmailing one of
the developers, saying it would reveal their
indiscretion to their spouse. Are we close
to that? And, I mean, do we want

(24:30):
a situation where we're like Doctor Frankenstein, making the self-taught
monster and saying, okay, this is superior for humanity?

Speaker 3 (24:38):
This is a very concerning development, and unfortunately it's one
where we've seen little warning signs along the way that these
AI models behave in strange ways. And to be clear, Ryan,
the developers who are making these systems are not really
building them themselves; they're almost like growing them. And so
they take a whole bunch of compute resources and a
whole bunch of data and run these algorithms, and these

(25:00):
AIs essentially write their own weights, and as a result, it's
very much a
black box. We can't crack these things open and
understand how they work and reason about them, and they
make these really weird decisions sometimes, like the examples that
you mentioned, with the team at Palisade Research seeing o3
avoid shutdown and Anthropic's model Opus 4 attempting to blackmail

(25:24):
its engineer, you know. I think if you extrapolate
this further, superintelligence might actually be a
little bit closer than folks may realize, although there's
some disagreement in the field there. But if
you have a superintelligent system that is capable of
rewriting its own code and avoiding shutdown, this is the
scenario that a lot of the experts are starting
to sound the alarm bells about right now.

Speaker 1 (25:45):
Yeah, there was that conversation Dario Amodei had
with Anderson Cooper talking about
this entire thing. And I have a young recent college
grad who's a researcher for me, helps pull some
data for this podcast. And it's a part-time job,
you know, it's just data for my twice-a-week podcast.
But he can't find a full-time job as a

(26:06):
recent college graduate. And he blames it in part on automation;
they just don't hire for these types of entry-level jobs anymore.
We've seen that recent college graduates have a
higher unemployment rate than the national average for the first
time in forty years. You're both dads. You said this
on Twitter, so I'm not exposing some new information.
You both have kids. I don't know
(26:27):
how old they are. Are you both worried about their
opportunity for the workforce and what are college kids supposed
to do to ensure they can get work in this future.

Speaker 2 (26:41):
It's a great question.

Speaker 1 (26:42):
You know.

Speaker 2 (26:42):
We do see companies like do Alinko and Shopify say
that before you post a job, you have to assert
that AI cannot do it. As you mentioned, the un
deployment rate for new college graduates is larger higher than
the national average. The Washington Post recently said that it's
the worst market for software years since nineteen seventy nine,
we hardly even had computers in wide dissemination. So I

(27:06):
do think the jobs are going to go away. There's
a debate in the AI community about how rapidly that
will be, but most people think that white college jobs,
especially will be increasingly automated. And you might see in
the next five or seven years fifty percent of white
college jobs could be done by AI. And so it's
a very good question, and there's no obviously answer for

(27:26):
what you should study. You know. One hand, one could
try to study machine learning itself, where you're one of
the engineers making these products, you know. On the other hand,
it's not obvious what you could do. And I think
this calls into question, like the very social compact, what
is democracy in a world where lots of people don't
have jobs, where we have incredible economic growth, perhaps, but
it's concentrated in a very very few number of people,

(27:49):
the people who are running these labs, and the rest
of us find ourselves and penury. I think it's actually
going to be a devastating problem for us, and it's coming.
As Mark said, we very much agree on most of
these issues. It's coming a lot faster to the average
American things, whether it's two years, five years, seven years,
it's coming very rapidly for all of our jobs. And
there's no easy answer to it.

Speaker 1 (28:10):
Yeah, it might be the number one issue going into
the twenty twenty eight presidential election. Mark, what I mean,
what would you say about fifty percent wipe out of
why college educated jobs is like absolutely devastating. And when
you hear these techeos talk and I've been on trips
with tech CEOs and they've talked about this going back

(28:32):
to three years. In my case, to me, they're like, oh,
we have to do ubi universal basic income. There is
no other They will just be permanently unemployed people.

Speaker 3 (28:43):
Yeah, you know, I agree with a lot of what
with Brad said. I have a sixteen year old son.
We were talking about, like what do you study in
college these days? I mentioned maybe physics and philosophy could
be things that could be useful, But I mean, I
think even the Bureau of Labor Statistics earlier this year
reported something like a twenty percent year over year drop

(29:04):
of entry leveled positions. And I think to your point,
it's not that people will become unemployed, it's like they'll
become unemployable. And I think this is a significant disruption.
I think on the good news is that looking at
where this administration is and some of the remarks that
the Vice President made in Paris, it seems like we're
going to try to give workers. I see it at

(29:25):
the table. I know folks like Senator Cruz are very
focused on jobs, jobs, jobs. I think this is one
that we're going to have to grapple with and candidly
data Like Ryan, I, you know, when people say UBI,
it's probably the most under specified term I've ever heard.
I think by default it's the wealth and power will
amass with a few folks, and I think the rest

(29:47):
of us are going to be left holding the bag.
And I'll say that whenever anyone sort of talks about
this utopian vision, I like to think that. You know,
if you look at history, every time someone promises utopia
ends in one place, and that place is the Gulag.

Speaker 1 (30:03):
And so I think we need to outbeat That's not
a lot I.

Speaker 3 (30:08):
See some message coming out of the White House saying like, oh,
it's just a left wing agenda, is not. I think
it's going to affect everybody, and we have to make
this as much as we can, not partisan, and together.
We have to have a national dialogue. And it's going
to provoke some very serious reconsiderations of the fundamental assumptions
of it that strames our constitutional order right now.

Speaker 1 (30:27):
Yeah, you know, I hear. I hear the things that
come out of Republican and I work in republic and politics,
so this is what I'm most familiar with. Is uh,
you can't let Gavenuwsom run the entire AI economy. You
can't let China win. And even Ted Cruz, who you mentioned,

(30:48):
he is absolutely opposed to any state regulation. Even though
Texas regulates AI. It was one of the earlier states
to have a regulation, and I a I put into
law and signed by the governor. I don't it's not
a serious, serious piece of legislation, but it is a regulation.
So it's just curious of what see they are. I

(31:09):
spoke to someone, a very senior politician in office right now,
and they had a very optimistic view. They have been saying, yes,
jobs will be lost, but jobs will be created. A
bil like the Internet, We're going to see millions of
new jobs that we don't even know about be created.
Is that a possibility.

Speaker 2 (31:29):
It's always a possibility, right. We talked about the lump
of labor fallacy and economics, where even when the automobile
comes and many people were put out of business, new
jobs were created as mechanics or manufacturing. I do think
there's a very real chance at this time. Is different.
I mean, we should be paying attention to what these
tech executives are saying. They have stayed it openly that

(31:51):
it's their job to try to create an artificial intelligence
that can do ninety five percent of the work that
humans can do today. And we're giving them hundreds of
billions of dollars and the smartest people on the planet
to make that happen. And they seem to be making
very real steps toward the realization of that goal. And
so it's a bit of hope to always say, well,
we'll have some kind of new job, you know, this time,

(32:12):
it seems like they're actually going to come and take
a lot of our jobs away, interestingly white collar first,
but as AI gets embedded in robotics, it will come
for the blue collar jobs too, and the goal is
to displace all of us, and they seem to be
making steps toward it. And so that's a very glib
thing to say when said you already see the software
engineer market being crushed, unemployment rising, and again you're probably

(32:36):
going to see more of this going into the future. So,
you know, I hope that person's right. One doesn't really
know how this will develop, but you have to take
it seriously that this time it's very different.

Speaker 1 (32:46):
Yeah, Mark, what do you saying?

Speaker 3 (32:48):
You know, I asked Claude Opus for who would be
the most likely the United States senator to oppose federal
preemption of state regulation, and they answer back with Senator Cruise.
And you know, I think I think we, of course are.
We want to be optimistic, we also want to be
clear eyed about the issue. We can't just pretend and
handwave that this is all just going to be fine.

(33:11):
And you know, if you look at what happened during
the Industrial Revolution, which is what a lot of folks
compare this to. You know, human physical labor was automated
and it became next to zero related to GDP output today.
So physical strength became automated, and that's now not a
function a factor of the economy. Well, now what we're
doing is automating our intelligence, and so the question is like,

(33:34):
what's what's left at that point? Are are you know,
maybe people can have access to their own, you know,
very powerful AI systems, and those AIS can go out
on the Internet and do work for them. But it's again,
it's it's quite under specified. And I admit the fact
that I might not be intellectually intelligent enough to predict
what's going to happen. But I think this seems to
be a bit of a platitude in a handwave to
keep people from panicking, and we might be at the

(33:57):
ponment where it's time to have some of that panic.

Speaker 1 (34:00):
Yeah, someone said that there's the possibility that within the
next year, I forget who was as an AI person
on Twitter, So take it with a very small grainssalt.
But they said that it is possible for a single
person to have no employees and make a company that's
valued in the tens of millions of dollars, and then
the very near future will work the billions of dollars.
Weird though, that it feels like people are having conversations

(34:22):
past each other about something that will affect all of us.
Do you guys get that feeling?

Speaker 2 (34:28):
I definitely think people are talking a bit past one.
Another part of it is it's still a very technical
and fairly new technology for most people, and so they
don't really understand it. They're not aware of what's really happening,
and even if they're aware of what's happening today, they're
not paying attention to where it's going. It's rapidly improving,
so it's not we don't worry so much about what's
happening at this moment, it's what's happening in a year,

(34:50):
two years, five years. Where they've actually radically improved even
today is quite remarkable capabilities. So I think people to
understand it, they want to believe things will just be okay.
There's a lot of reflexive opposition to any kind of regulation,
especially if you're working in Republican politics right regulating business.
It's just the per se bad which I understand that,
and they're not often wrong to believe that. But here

(35:12):
is a technology that's openly stating it's going to try
to transform our lives up in our world, and it's
probably worth paying attention to and putting in like reasonable safeguards,
where like Mark suggests, at the beginning, we're at least
having the capabilities to deal with it. We're getting more
information about where it's going, We're watching what jobs are
being displaced, and we begin the conversation very difficult one

(35:35):
about how this kind of social compact to undergirds our
life might radically change.

Speaker 1 (35:41):
Okay, so I want to ask you both of our
last question. What is a state and what is a
federal regulation on AI that is not in place currently
that should be in place if you were speaking to
a state legislator or governor or someone either in Congress
or the president. So you can go first, Mark.

Speaker 3 (36:00):
I mean, I think the most important thing the federal
government ought to be doing time now is taking the
testing and evaluation regime at the classified level very seriously.
I think having the data to make inform regulatory choices
is a foundational first step. And given everything Brad said
about the technical nature, the relative opacity, the relative fact

(36:22):
that you know, Washington's quite behind on understanding whether I
mean the Vatican seems to be further ahead than Washington,
d C. I'm appreciating the significance of this moment and
then I would urge my friends and the Republicans Republican Party,
you know, we can't take a country club attitude towards
this one. And I agree with that. The fact that,
you know, the government's oftentimes very ham fisted when it

(36:44):
engages in the economy. It's not efficient, but it does
have an important role to play. And we can't, as
Mike Davis use the term, let the tech brodogarchs you know,
run the show. We have to think about the broader
social compact and issues associated with that. And so, but
the first thing we need to do to get through
some of this talking past each other is generating the
right data and having that informed conversation being grounded in

(37:06):
the facts, and then from there we can have actually
a productive discussion.

Speaker 2 (37:10):
That's great.

Speaker 1 (37:11):
I love the idea of the country club attitude because
that is pervasive still despite the whole working class overtures
being made at least verbally towards voters. Right now, it
is still very much a country club attitude. What on't you, Brad,
what would you say is a state or federal policy
that it's not an ACTI currently that should be an actor,

(37:31):
or it's an acted in one state and all the
other states should adopt it.

Speaker 2 (37:34):
I think what marks is something I would associate myself with,
but I would add just this. We have the AI
Safety Institute, something that the Treasury Department yesterday renamed as CASEY,
the Center for AI Standards. So having an institution in
the federal government that can collect data that brings together expertise.
It can be the repository of this kind of data

(37:55):
and to help work with the frontier labs to get
information and to fund them adequately. But they have the
right people who are often quite high paid and very
high levels of skill. That's an important kind of bedrock
policy we have to have in place. We've had something
like that, but it's never been codified. Right, it's an
executive order Biden. Now they've changed a little bit of

(38:15):
the President Trump. We need to put that into law.
We have an institution, it's like, dedicated to looking at
what's happening AI in our economy.

Speaker 1 (38:23):
It's politicians is one class of jobs that probably will
not be automated, and that's probably why we're not seeing
much moving on it. Uh Bra, Where can people go
to find more of your work.

Speaker 2 (38:34):
In the American's responsible innovation can be found at AARI
dot us. You know, read about all our policies, what
we support, blog posts, links to many of the kind
of other thinkers that we find important the work we're doing.
So ar dot us.

Speaker 1 (38:49):
And Mark where can people find more about them? Government
affairs for the AI Policy Network.

Speaker 3 (38:54):
Yeah, THEAIPN dot org. We're focused on accelerating federal preparedness
for tras formative AI, looking at national security, at economic security,
and human flourishing, and would be excited to partner with
anyone out there who wants to help promote education around
this content and help Congress make informed decisions that's going
to be driving towards the betterment of the American people.

Speaker 1 (39:17):
Well, you guys do great work, so I hope that
hope they listen, and hope that we are not as
slow moving on AI as we were on social media
and everything else. So thank you both for being here.

Speaker 3 (39:27):
Thank you so much.

Speaker 1 (39:28):
You're listening to It's the Numbers Game with Ryan Grodowski.
We'll be right back after this message. Okay, I know
the show's gone a little long today and I hope
you've learned something I did. But before we go, I
need to do the listener question they Ask Me Anything
segment you This might be actually a great episode one
day to do just ask me anythings. If I get

(39:48):
enough emails, I would love to do that. If you
want to be part of the ask Me Anything segment,
please m them email me Ryan at Numbers Game podcast
dot com. That's ryanat Numbers gamepodcast dot com. Today's question
comes from David Inda. He writes, my name is Dave.
I'm a huge fan of your podcast. My question is
regarding the changing voting patterns of different demographics in the US,

(40:09):
specifically black and Latino, and any possible conversations you may
have had with your friend Ann Kolzer, who I'm a
massive fan of AND's great. That's not the thing, but
and is great anyway, and talks extensively how the GOP
should only ever focus on winning over white voters. But
in light of the recent data that has been released
post election, I am wondering have you tried to convince
her that the GOP tried to win black and Latino voters?

(40:31):
Is that a good idea? Or do you think the
only thing Republicans should talk about is crime and immigration?
This is what I believe, and these two policies seems
to be universally popular regarding race and ethnicity. I would
love to hear your thoughts on this. Also, I currently
live in Michigan planning to move to Tennessee, and I'm
wondering do Republicans have any chance of winning the open
Senate or governor seat in Michigan in twenty twenty six.

(40:51):
Along the same lines, why are some states voting one
way at state levels at either red or blue and
the opposite of presidential elections? Namely places like George in
New Hampshire and Michigan come to mind? And this question
is party affiliation as salient as people typically believe. Where
does candidate quality in a particular year have a bigger
impact on the people's votes. I know there's a lot
of rambling, so I apologize and massive fan of the

(41:13):
podcast and all the important work you do. One final question, Sorry,
the ADHD is really strong today. I am originally from
Illinois and wondering if there's any hope for a place
like that or is it goondla with California? Wow? Dave, Okay,
lots of questions. Highly recommend switching a decaf, but I
love the passion I have ADHD two so I know
how difficult the struggle com based going to go through
as quickly. Okay, so first and a culture. I try

(41:35):
to keep our conversations private. But what I believe Annas
said is that they shouldn't try to win over non
white voters by remaking policies like we should try to
win over non white voters, but focus on policies that
actually help our voter base, which is namely whites without
a college degree. We haven't talked about the demographic breakdown

(41:57):
the election yet. It's something we're trying to get a
dinner around to sit there and go through it and
we talk about it. I'll send her your woman's regards.
But I think that's what ann really says. It's not
that we shouldn't try, It's that we shouldn't sell out
on policies like on crime, immigration, which are two very
popular policies. Secondly, do Republicans have a chance at winning
the Michigan governor or Senate election next year? Both seats

(42:18):
are going to be vacant, that is correct. My bet
is on the governor's race. And here's why. Mike Dugan
Dugan whatever's last name is the Democrat mayor of Detroit
is running as an independent and will likely take more
votes from Democrats than Republicans. Duggan has been on the
rise in the polls, but it's essentially a toss up
between the likely Democrat Joyce Leen Benson and Republican John James.

(42:42):
As for the Senate seat, it just depends on who
the Democrats nominate, because Republicans will likely pick Mike Rogers again.
Early polls show it's close, but the only canet Rogers
has a decent lead against is Wayne County Health director
and Bernie Sanders supporter. Abdul l said, I think that's
how you pronounce that name. So we'll see if that

(43:02):
you know who they pick. That's a big question. And
how many third party nominees get on the ballot is
a big question. There were a lot of third party, conservative,
third party, libertarian, third party. Last time, they took enough
of the vote to matter. I think he lost by
nineteen thousand votes and they took well over two hundred
thousand or something like that. Anyway, why do you states
vote differently in federal elections state elections? This depends on

(43:24):
one canon. Quality does matter. If you run some link
Kerry Lake, you're probably going to lose irregardless of the
fact that there's more Republicans than Democrats. Party affiliation matters
to a point. It is the most likely outcome. If
you're readister Democrat, you will most likely vote Democrat. That's
the biggest consideration than any almost anything else. Right, I
think gun ownership is the only other larger indicator of

(43:46):
how you will vote than party affiliation. But party affiliation
is very, very important. But in places like Kentucky which
vote for Republicans federally but Democrats locally, or Vermont, which
vote for Republicans for govern but Democrats in the legislature
and federally, a lot of it matters in the minds
of voters is who can put a check on the legislature. So,

(44:11):
because the working class people of Kentucky oftentimes have a
Republican party that has a lot of country club attitudes,
as I said earlier on this podcast, they will support
a Democrat who's more pro worker, pro union. I mean
it was a coal mining state forever Likewise, Vermont, they
will support a socially liberal Republican who's more pro economic

(44:33):
growth because the legislature is so anti economic growth. A
lot of it depends on what the state's going through
and the character and how the character of the candidate
and how much they reflect the interest of that state.
The last question was on Illinois, has it gone the
way of California. I actually think California is in a
better place in respect of how Republicans are winning over

(44:54):
voters than Illinois Black voters, which make a bigger population
Illinois than in CALIFORNI and you are moving slower at
a slower speed towards the GP than Asians and Hispanics,
and southern California whites move towards the Republicans faster than
Chicago area whites. So I think we're going to see
more gains in California than in Illinois, both in Congress

(45:16):
and the state legislature. I know it's probably not the
answer you want to hear, but that's how I see it.
Thank you again, though for listening. I really appreciate your email, Dave.
If you anyone else on any other emails, please email me,
and once again Ryan at numbers gamepodcast dot com. Thank
you for listening. Please like and subscribe to this podcast
on the iHeartRadio app, Apple podcast, wherever you get your podcasts.

(45:37):
Give me a five star review if you're feeling generous,
that really helps people find the show. And I will
see you guys on Monday. Thank you again.
