
April 15, 2025 • 37 mins
In this Tech Roundup Episode of RTP's Fourth Branch podcast, Kevin Frazier and Aram Gavoor sit down to discuss the recent, fast-moving developments in AI policy in the second Trump administration, as well as the importance of innovation and procurement.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Welcome to the Regulatory Transparency Project's Fourth Branch podcast series.
All expressions of opinion are those of the speaker.

Speaker 2 (00:18):
Hello, and welcome to today's Regulatory Transparency Project's Fourth
Branch Podcast.

Speaker 3 (00:24):
With the Federalist Society.

Speaker 2 (00:26):
My name is Elizabeth Dickinson and I'm an assistant director
with the Regulatory Transparency Project. As a reminder, all opinions expressed

Speaker 3 (00:33):
are those of the speakers and not of the Federalist Society.
Today we will be

Speaker 2 (00:37):
discussing the latest updates in AI policy. We're excited to
have with us two great legal experts: Kevin Frazier, an
AI innovation and law fellow with the UT Austin Law School,
and Aram Gavoor, the Associate Dean for Academic Affairs at
the George Washington University Law School.

Speaker 4 (00:53):
Thank you both so much for joining us today. Thanks
for having us.

Speaker 3 (00:57):
Thank you all right.

Speaker 5 (01:00):
It's always so much fun to have my go to
AI analyst on the RTP Fourth Branch Podcast. Today, we
have so much to dive into: two new crucial AI
directives from the White House, both announced on April seventh,
and these policies set new requirements for promoting AI adoption

(01:20):
and innovation among federal agencies, managing related risks and ensuring
AI use aligns with core American values and norms. We
have so much to break down. But before we get
into the nitty gritty: Aram, why now? Why did these
policies come out at this point in time? Regular listeners
will know we have the AI Action Plan that's due

(01:42):
at the end of July. So why in the middle
of April, kind of amid a lot of other events
going on. Do you think the White House chose this
moment to announce these two policies.

Speaker 6 (01:53):
Well, AI is a crucial element of this administration's,
the Trump administration's, technological, national competitiveness, and national security goals, and
the President's been quite clear and direct about that. These

(02:13):
policies, M-25-21 and M-25-22,
which are known as M memos from the Office of
Management and Budget, are promulgated in furtherance of the President's
Executive Order 14179, which was a Day
Three executive order titled Removing Barriers to American Leadership in

(02:39):
Artificial Intelligence, and actually a number of other related policies
and statements.

Speaker 5 (02:46):
And looking at these policies, can you help us set
the stage a little bit: how is this a
marked transition in AI adoption and AI use from the
Biden administration? What is the pivot occurring here?
We heard in the fact sheet detailing these two policies,

(03:07):
they described this transition as one from a risk-averse
approach under the Biden administration to a new forward-leaning,
pro-innovation, pro-competition mindset.

Speaker 3 (03:19):
So can you.

Speaker 5 (03:20):
Detail a little bit more what did AI adoption look
like under the Biden administration?

Speaker 6 (03:25):
So AI adoption, I would say AI adoption policy was
distinctly different in the Biden administration. The Biden administration
built upon Trump 45 and President Trump's Executive Order
13960 with the AI Bill of Rights from

(03:48):
2022, Executive Order 14110 in 2023,
and the AI National Security Memorandum in 2024.
The big distinguishing difference in the Biden administration was
that that administration viewed AI as a mechanism to advance

(04:13):
further civil rights, in the view of the Biden administration and
Democratic Party, and to engage in social change. So it
was viewed as a tool of advancement, but also a
tool of the so-described equity. And the big difference

(04:34):
for the Trump administration is to remove that element from
AI adoption, have fewer breaks associated with it, and with
the intended goal to be the development of AI for
its operational purpose and for its technological purpose.

Speaker 5 (04:57):
And staying at a thirty-thousand-foot level for just
a second longer, do you think it's an accurate characterization
to frame.

Speaker 4 (05:05):
This a little bit?

Speaker 5 (05:06):
As the Biden administration saw AI as government with AI, right,
it was just using AI as another tool, as if
it were Google or a spreadsheet or what have you,
just kind of augmenting existing government services and systems with AI.
Whereas now we're seeing, speaking on April ninth, just in

(05:28):
the wake of an announcement that the federal government will
now be using AI for the first time to assist
with federal employee record keeping. Can we now say the
Trump administration is introducing a period of government by AI
where we're actually seeing full on AI use cases to
improve the effectiveness and efficiency of the government.

Speaker 4 (05:48):
Is that a kind of.

Speaker 5 (05:49):
Useful dichotomy government with AI versus government by AI?

Speaker 3 (05:56):
I think to a degree, yes.

Speaker 6 (05:58):
So, to distinguish between the administrations and to draw a
through line: one of the factors to
consider is the advance in the march of the technology itself. Right,
So in one respect, we don't want to be unfair
or throw spitballs one way or another when the underlying

(06:18):
technology at the beginning of the Biden administration did not
include, you know, ChatGPT and Claude and Llama
at the level at which it operates now.

Speaker 3 (06:30):
So that's one point.

Speaker 6 (06:32):
The second point is, and this is really where I'm
agreeing with your position, that the Trump administration has
really leaned in to engage in technological transformation of how
government runs, to align with other policies: reducing the
footprint of the federal worker, as opposed to upskilling as

(06:53):
a philosophy in the Biden administration, which was much more
incrementalist: fewer steps, with greater thought in between the steps,
greater planning. And then also in this administration, the Trump administration,
there's a number of different policies that centralize decision making

(07:15):
policy making within the White House, and AI is a
fantastic mechanism to help to effectuate that. So this one
executive order that I referenced is one of many presidential
actions and vice presidential actions that lead us to this point.
And I'm happy to go through a number of the

(07:36):
different mosaic data points so that our listeners can see
how this fits in within the big picture of Trump
two point zero.

Speaker 5 (07:43):
Yeah, let's set that scene just a little bit, because
we've heard, for example, Vice President Vance go to Paris
and shake up the EU by specifying that this wasn't
a time for AI safety, this was a time for
AI opportunity. And then again at the American Dynamism Summit,
Vice President Vance said this is going to be an

(08:05):
era of AI dominance in the US. So when folks see, okay,
two policies from omb this seems like maybe small potatoes
to some people. Why is this a matter that people
should be paying attention to when we think about filling
out the broader Trump AI agenda.

Speaker 6 (08:24):
Because this is the implementation at the rank-and-file
government level and the interagency level. In a way,
these M memos are typically implementation memos of other policy;
an M memo is itself not a spontaneous announcement of
policy unless it's within an authority of the Office of
Management and Budget, like the Paperwork Reduction Act or the

(08:47):
Privacy Act of 1974. So you're right. The
Vice President on February eleventh gave a speech at the French
AI summit. That summit was billed as an AI safety summit,
and right after the election in early December, it was sort
of stealthily rebranded as sort of like a pro-

(09:11):
business summit, and the Vice President focused on saying that
the message was the AI future will not be won
by hand-wringing about safety. And then in his Munich
speech the following week on the eighteenth, he really emphasized
the great game nature of this. And then the president,

(09:32):
now we're focusing on presidential policy, wrote a public letter
to Michael Kratsios, the director of the Office of Science
and Technology Policy, laying out three big key questions that
he wanted answered. Question one, how can the United States

(09:54):
secure its position as an unrivaled world leader in critical and
emerging technologies? No surprise, the first example of
several technologies is AI, followed by quantum information science, which in
part drives AI, and then third, nuclear technology. So we
can sort of shear off the nuclear tech. You can
go on the news and see the B61

(10:16):
Dash-13 bomb by the National Nuclear Security Administration that
was fast-tracked; that was just sort of publicized yesterday.
Right after, it was also publicized that there's a large collection
of B-2 stealth bombers at Diego Garcia, with saber
rattling in the Middle East. But again, the AI and

(10:39):
then quantum as an explicit, separate directive to Mr. Kratsios.
In addition to that, there is a crypto and AI
czar that the President announced at the beginning of the term,
David Sacks. It seems like he's focusing more on the
crypto piece of it, just because if we're

(11:02):
looking at the M memo fact sheet, the
references to Mr. Kratsios are from Mr. Vought, Russ Vought, who's
the director of the Office of Management and Budget.

Speaker 5 (11:15):
And now, getting into the weeds, we have M-25-21,
promoting rapid and responsible AI adoption. So
what are some of the key provisions in this memo
when it comes to accelerating adoption of AI by federal
agencies and what aspects of it do you want to

(11:35):
call attention to. Maybe we'll start with this role of
Chief AI officers. What were chief AI officers doing under
the Biden administration.

Speaker 4 (11:46):
What's the new charge they face under this memo?

Speaker 6 (11:51):
Yeah, so let me react to that first.
When we did our last pod on this, on a different platform,
my prediction, based on the campaign rhetoric, was that
the Trump administration would tear out, root and stem, everything that

(12:11):
the Biden administration did, because that's what the rhetoric was.
To the contrary, and I think actually to the credit
of the administration, they kept the things that were useful,
which is of course always good policy: keep
what's useful. And they kept that chief AI officer function

(12:32):
which was introduced by the Biden administration. They kept the
AI councils, at least at the agency level, they kept
that intact. There's different goals and there's different priorities for
those people, but for purposes of making sure that there

(12:53):
wasn't too much of a shock in the AI infrastructure,
because it's the same people, right? It's like the same
employees: if you're on AI policy in one administration, you're likely to remain
on AI policy as a career government official in the
next administration, and you're already somewhat organized. So the huge
difference is the removal of utilizing AI as a function of

(13:18):
social change in the context of the civil-rightsification in
excess of what the law requires. So the Trump administration
quite predictably cares about civil rights as required
by laws that are on the books, anti-discrimination laws
that are on the books, complying with the Constitution as

(13:40):
it is understood today, and not using AI to engage
in some new level of policy making or to advance
an equity interest. So that's a huge difference. That's like
the biggest tectonic difference between them all. And then there's
also sort of a supercharge, as
you were implying, Kevin, with regard to the interoperable

(14:05):
implementation of AI use cases, information sharing, getting rid of
the siloing. We saw a lot of that, you know,
laid out in a broader sense in the 2024
National Security Memorandum of October 30th, announced at the National War College. And

(14:27):
but the other elephant in the room is DOGE. Right,
so there's a bunch of other executive orders that
are referencing

Speaker 3 (14:39):
DOGE.

Speaker 6 (14:40):
gaining access to systems, connecting systems. You see rhetoric and
statements from Elon Musk saying, well, for fraud detection in,
you know, the Social Security Administration, you have a number of
different standalone legacy systems in the tech stack that

Speaker 3 (14:57):
Aren't talking to each other. Now they are.

Speaker 6 (15:00):
So with all of this networking of all of these
government databases, legacy and all, across all of these different agencies, at
some cost, right? Like you're seeing a lot of resignations,
like the IRS Acting Commissioner resigning based on the data-sharing
agreement between Immigration and Customs Enforcement and DHS and the IRS
for purposes of

Speaker 3 (15:19):
Effectuating immigration enforcement.

Speaker 6 (15:22):
You had a Treasury change in leadership, and the Department of
the Treasury's payment systems leadership. But given that all of this
is happening, having an affirmative policy of AI
implementation and adoption and interaction really is going to strengthen
White House policy making. It's going to make it more efficient,

(15:45):
and it's also going to cut down on a lot
of the human delay associated with presidential policy making, with,
like, the sub-PCC, PCC, Deputies Committee, Principals Committee processes,
because there will be fewer homework assignments. Now the policy-making
units within the White House

Speaker 3 (16:04):
Can just get the information instantaneously.

Speaker 4 (16:07):
Who is ever opposed to less homework? I got to say,
my students love when I assign less work. So good
news all around there.

Speaker 5 (16:16):
And I think what's really stood out to me was
some of the larger framing of this document and of
these policies. So now this idea of the chief AI
officers within each agency serving as quote, change agents and
AI advocates, and that change in mentality of not just

(16:37):
being kind of good folks to go to good resources
for answering AI related questions for that agency, but actually
championing how is this agency going to leverage AI to
improve services, to improve systems is a whole different mentality.

Speaker 4 (16:53):
And with that in mind, I wonder if.

Speaker 5 (16:55):
You can speak to the significance also of as a
result of these policies, we're now seeing that rather than
have a bespoke AI-specific approval process, so saying, you know,
this agency needs to go through these stakeholders to say yes,
we can use AI in that use case. Now, under

(17:16):
M-25-21, accountability processes, approval processes for
AI are treated like any other government IT process. What
does that mean on the ground? How is that going
to change how agencies actually build AI into their various functions?

Speaker 6 (17:36):
It'll function sort of like a red-tape clearing.
And to be very clear, the Trump administration is indeed
concerned with bad use cases of AI. It's styled now
as high-impact. And again, it's based on existing law

(17:57):
as it is understood currently, and not an idealized version
of a future state of law, or like
a common law that the Biden administration was endeavoring to develop.

Speaker 5 (18:10):
I want to pause there for a second. Can you
give us one or two examples: what's a
high-impact use of AI?

Speaker 4 (18:19):
What does that actually look like?

Speaker 3 (18:22):
All right?

Speaker 6 (18:22):
So this is laid out in Section 4 of
M-25-21, which really begins in earnest on page fourteen,
and there's a layout of determining high impact AI. So
this relates to AI that has an output that serves

(18:47):
as a principal basis for decisions or actions that have
a legal, material, binding, or significant effect.

Speaker 3 (18:55):
on rights or safety. So let's unpack that.
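As a rough illustration, the test described here reads as a conjunctive predicate: the AI output must be a principal basis for the decision, and the decision must have a legal, material, binding, or significant effect on rights or safety. The sketch below is a hypothetical rendering of that logic; the field names are my own, not the memo's official schema.

```python
from dataclasses import dataclass

# Hedged sketch of the "high-impact AI" determination discussed above.
# Field names are illustrative assumptions, not official memo language.

@dataclass
class UseCase:
    name: str
    principal_basis: bool           # does the AI output drive the decision or action?
    affects_rights_or_safety: bool  # legal/material/binding/significant effect?

def is_high_impact(uc: UseCase) -> bool:
    """Both prongs must hold: the output drives the action AND the
    action touches rights or safety."""
    return uc.principal_basis and uc.affects_rights_or_safety
```

On this reading, an algorithmic detention determination would satisfy both prongs, while a drafting assistant whose output a human independently reviews would fail the principal-basis prong and fall outside the high-impact category.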

Speaker 6 (19:01):
That is remarkable in the sense that you have some
carryover from the prior administration. It's not root and stem,
to the credit, you know, of the Trump administration, and
is unsurprising in many respects, because those three features
we can draw out: so, legal or material

(19:23):
binding sort of touches on the ley lines of regulatory
law, administrative law.

Speaker 3 (19:30):
If an AI.

Speaker 6 (19:31):
is making adjudications, that's something that principal officers need
to know about, because that can lead to all kinds
of rabbit-hole problems at the back end with regard
to Article III judicial review, you know, if a bot
decided something and it's unexplainable. Or sometimes some decisions must

(19:53):
be made by a person, some things are non-delegable,
and then there's other questions of to what degree
can a secretary delegate generalized statutory authority to a bot,
something like that. And then of course there's the safety component
or the rights component. So certainly I think no one

(20:13):
is interested in sort of dystopian decision making, like for
purposes of detention determinations for persons held, you know, in
criminal custody based on an algorithm, or, like, physical
safety, right? Like sending in some probe on
a drone to, like, inspect a mine can

(20:36):
be useful provided that, you know, the human operator is
ultimately making the decision, and it's not just some algorithm
that is, you know, networked to the drone that just
flies into a mine. So just giving you those examples
that make sense. And then, drawing very quickly: you

(20:57):
made a point, like who would be against less
homework. Keep in mind there's always a bit of
a trade-off if you're having instantaneous information from systems
that of course have some risk of confabulation. Whereas certainly
data that requires a narrative to understand the nuance

(21:20):
typically involves a human actor, and there's only so many
human actors that the EEOB can sort of have.
You have the White House overseeing a multi-million-person
federal executive branch, and it's impossible to
fully consolidate, to like five hundred or so people, all
of that narrative knowledge, historical knowledge, that might be missing

(21:44):
from the interoperability and the accessibility of government databases.

Speaker 3 (21:50):
At a fingertip.

Speaker 5 (21:51):
Yeah, and I think too, it's fascinating to see some
additional carryovers, such as the continuation of inventorying AI use
practices AI use cases by the various agencies making sure
we're actively tracking who's using AI for what. That's a
carryover from the prior administration. Also having minimum risk management

(22:15):
practices be adhered to by each agency. There's also an
insistence on various testing and assessment requirements, as well as
some transparency measures as well.

Speaker 4 (22:26):
So, on the whole, Aram,

Speaker 5 (22:29):
Looking at this first memo, do you think we're going
to see an explosion of AI adoption by the agencies?

Speaker 4 (22:37):
Is it going to be a trickle? What can we expect?

Speaker 5 (22:39):
Are we going to wake up next week and all
of a sudden every agency is deploying AI in different
contexts or what's that pace going to look like?

Speaker 3 (22:48):
Do you think so?

Speaker 6 (22:51):
Certainly in the first month of the administration, where
we've seen DOGE as acutely active, it looks like the
explosion side, right? There's every single time you have some,
like, to the point where there was a couple of
weeks, and you know, you can blame the news cycle
for that too.

Speaker 3 (23:11):
It's that every time there was, like,

Speaker 6 (23:13):
A networking of some standalone significant database, that was the
news story itself, in part because of the associated career
official resistance to some of that, and part of course
associated with very rapid RIFs or wood-shedding. You know,

(23:33):
that's sort of wood-chippering. That's the term that Elon
used on X with regard to USAID. So that's happening.
It's happening a lot. Undoubtedly, DOGE has a large workflow
and that's moving quickly. So I would say, so long
as the DOGE capabilities remain and continue to be adopted

(23:55):
and applied, you're going to see much more of.

Speaker 7 (23:57):
this. Your example that you gave of simple digitization: like
you said, AI, but it's actually just
a simple digitization of paper documents that are kept in
a very large cave, like literally a cave in Pennsylvania,
for government retirees. I could say personally, you know, like

(24:19):
I've spent fourteen to fifteen years in the Justice Department.
I wouldn't want my personal.

Speaker 6 (24:25):
Records to be at risk of like physical deletion if
like the cave caved in, that would.

Speaker 3 (24:31):
Just be so weird.

Speaker 6 (24:33):
Of course, I want my information to be accessible electronically.
But for example, the use of AI to help effect
the digitization process is essentially taking, like, 2005
tech, which is frankly like a scanner with, like,
a decent OCR rate, like above ninety percent, and

(24:56):
then to fill in some of the blanks, and then
to categorize it so it doesn't have to be so manual.
It can just be really, really quick, because most
of the government forms have OMB control
numbers on them, so they can very easily be categorized.
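To make that categorization step concrete, here is a minimal sketch, assuming OCR text has already been extracted from a scanned page: it routes the page by the OMB control number (####-#### format) printed on most federal forms. The regex and the route-table entries are illustrative assumptions, not an official mapping.

```python
import re

# Hedged sketch of routing scanned-form OCR output by OMB control number.
# The pattern and the example route entries below are assumptions for
# illustration, not an official government schema.

OMB_NUMBER = re.compile(
    r"OMB\s+(?:Control\s+)?(?:No\.?|Number)[:\s]*(\d{4}-\d{4})",
    re.IGNORECASE,
)

ROUTES = {
    "3206-0170": "retirement-records",  # assumed example entry
}

def categorize(page_text: str) -> str:
    """Route one page of OCR text; fall back to a human reviewer
    when no control number is legible."""
    m = OMB_NUMBER.search(page_text)
    if m is None:
        return "manual-review"
    return ROUTES.get(m.group(1), "uncategorized")
```

A page whose OCR text contains "OMB No. 3206-0170" would be routed automatically, while a smudged scan with no legible control number falls back to manual review, which matches the "fill in the blanks, then categorize" workflow described above.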

Speaker 3 (25:10):
Where you're going to.

Speaker 6 (25:11):
Start seeing like a lot of serious, permanent transformative change
to sort of get us out of like pre two
thousands practices that are just sort of based on inertia.
There's so many databases. I remember when I was in
government representing various agencies. Some of them like we're run
on like das and you'd have to like quote literally

(25:34):
the terms, like, fire up the database, if it's a
legacy database, you know. And if we had to produce
it in the context of litigation, you'd have to produce
it in the format of a hard drive so that
there was software to run the database, because the software
didn't exist besides the database itself. So what you're
describing is actually the private-sectorization of best practices of AI policy,

(26:00):
just, like, don't do dumb things, you know; test; don't
use it for use cases that are problematic. That creates
a greater alignment between government functionality and use of AI
and the private sector, which will then help to accelerate
adoption without, like, long back-and-forth memos and deep discussions.

Speaker 5 (26:22):
And one of the reasons I think folks who are
generally supportive of the diffusion of AI and access to
AI should be paying attention to this is this is
a huge opportunity if it goes right, if we see
this adoption go well to improve not only public trust
in AI and public adoption of AI, but also public

(26:45):
appreciation for the government functioning. As you pointed out, Aram,
if you have a government that's stuck in 2005,
it's still watching the first

Speaker 4 (26:53):
track, you're going to be upset, right, when you see

Speaker 5 (26:56):
That everyone else all of your other interaction are efficient,
are timely, are quick. So we need government to be
acting in the same way if we want folks to
have that sort of trust that their government can act
effectively and response and respond to pressing issues. So it'll
be very fascinating to see how the federal government kind

(27:19):
of setting a standard of adoption can lead to a
trickle-down to state governments, to local governments, then being
able to adopt AI. But before we dive even further
into some hypothetical scenarios, let's turn to the second policy here,
and this policy focuses on procurement, and procurement is probably

(27:40):
the least sexy term that anyone has ever used on
a podcast. Please listener, don't immediately turn to that next
pod in your queue.

Speaker 4 (27:49):
Procurement. Why does it matter?

Speaker 3 (27:51):
Aram?

Speaker 5 (27:52):
And what are the significant changes we're seeing as a result
of this memo?

Speaker 3 (27:57):
Sure?

Speaker 6 (27:57):
So keep in mind at GW Law we have like
the number one public procurement program in the world.

Speaker 3 (28:04):
So I'm like a junior varsity player in that.

Speaker 4 (28:08):
Well, we'll count that as varsity. We'll
give you a varsity.

Speaker 6 (28:12):
I'm really solid on the AI stuff; on the procurement,
like, I know enough to talk about it.

Speaker 3 (28:18):
Let me break this down for you.

Speaker 6 (28:20):
So first, the effective date of this particular memorandum
is a little bit further down the road; it's
in October. M-25-22 replaces M-24-18,
which the Biden administration, in its
twilight days, issued.

Speaker 3 (28:39):
I think it was like November, December. What are the
things that it does well?

Speaker 6 (28:47):
Its goals are to ensure the government and the public
benefit from a competitive AI marketplace. So whereas in the Biden
administration there was a greater willingness for, or prioritization of,
the largest, biggest models, which in procurement-speak gives a
higher risk of lock-in. Right, so, you know, the

(29:10):
Biden administration adopted, I would say, a rather consistent philosophy.

Speaker 3 (29:14):
You know, like, for example, like the F-47

Speaker 6 (29:17):
warplane, you know, that President Trump authorized, which is
essentially like the F-22 replacement. You can have
like four different providers, you know, of the most advanced,
you know, like, stealth plane out there; you have to
pick one, and you know, those are the things that

(29:40):
are happening in that space. So the shift here is
there's a greater emphasis on free market competition and then
also safeguarding taxpayer dollars by tracking AI performance and managing risk.
So, emphasizing the tracking-AI-performance component of this, right,
and the managing-risk piece: you know, the Biden administration was all

(30:04):
over that. They had, like, a starkly, very,
very heavy, sort of risk-averse approach to AI that
was a little bit more elastic in the National Security Memorandum,
sort of using, like, my, you know, fighter-craft example,
for national security purposes. And then the third big principle

(30:27):
for M-25-22, which is this administration's big M memo
to replace M-24-18, is promoting effective AI
acquisition with cross-functional engagement. So that's something that the
Biden administration was onto. I think the Trump administration takes

(30:48):
it a step further, to the point where one of the Trump
Administration's big policies is, like, procurement reform outright,
and that sort of ties into DOGE things. Right? So much
of this long list of, uh, fraud, waste, and abuse,
so described, spends are on procurement. So I think

(31:09):
that that ties in as well.

Speaker 5 (31:11):
And I think this change is so important, mandating not
only to facilitate competition in this space, but to buy
American, right? American AI is going to receive a preference. Also,
there's a preference for interoperable AI systems, which is
incredibly important as well, because, Aram, as you

(31:32):
pointed out, concerns about vendor lock-in have always existed.

Speaker 6 (31:38):
Right.

Speaker 5 (31:38):
If you're stuck going to that same vendor time and
again you have way too long of a contract, you
may be missing out on the next best tech, and
as we know in the AI space, the next best
tech changes seemingly every week, and so if we're not
giving agencies an opportunity to integrate the latest technology and

(32:00):
move on from maybe that older system, then we could
see the federal government falling behind. So I think that's
another strategic and smart play by the administration.

Speaker 3 (32:09):
Yeah, that's right.

Speaker 6 (32:09):
And then, tying this to M-25-21,
which was the AI M memo, there's a specific carve-out,
just like there was in the Biden administration with M-24-10
for national security. So we should
expect sort of like a National Security Memorandum 2.0, Trump
edition, to replace the Biden Executive Order 14110

(32:33):
National Security Memorandum, you know, that was
issued October 30, 2024. And also getting
to your point on the Procurement M memo again, I
want to compliment the Trump administration for not just you know,

(32:56):
building a new policy from scratch. They built
a policy taking into account the aspects that were functional
and useful for Trump administration interests. So there is Venn-diagram crossover,
you know, from the Biden administration. So it's not like
a complete waste of the time, you know, that the Biden

(33:18):
administration engaged in for these work products, because there's good
general policy in there. And then of course there are starkly
divergent, you know, policy choices, for which you're going to
have an entirely expected we're-going-in-a-different-direction approach.

Speaker 5 (33:37):
Well, before we let you get back to your myriad responsibilities,
what does this suggest for the Trump Administration's AI policy
going forward? Does this give you any insights into what
we may expect from the AI Action Plan, due
in late July? Or what do you think we'll see

(33:58):
next from the administration?

Speaker 4 (33:59):
Or what should we take away from these policies? Sure.

Speaker 6 (34:02):
So, I guess another big point for M-25-22,
before fully zooming out, or maybe as part
of the zoom-out, is that you already saw policies
generally on procurement to empower the GSA, the General Services Administration,
to have greater centralization of policy making and also to
be a choke point, or at least an access point,

(34:24):
for general, sort of like commercial off-the-shelf, like
generally available products, as opposed to the prior model from
time immemorial, which was an optionality to use GSA. Or,
if you want to buy your pencils, you know, the
Department of Defense can buy its own pencils under its
own procurement strategy. But part of that GSA sort of

(34:47):
initial step change, in the Executive Order from last week
or two weeks ago, was the general policy viewpoint that
procurement itself needs to change. So I would anticipate
seeing GSA having more power, more centralized authority, to essentially

(35:09):
be like a gatekeeper from which a lot of these
things can flow. So I would note that, and then
I would also note that all of these policies work
in harmony with each other. For the general principle, I
would say, unlike the Biden administration, where there was
a don't worry, you won't lose your job, we're going

(35:30):
to upskill you approach, there is an intentional reducing-footprint
goal of the Trump administration, for which AI plays a
central, enabling part. So those are very
very big differences. So the more agencies are focusing on RIFs,
the more they will correspondingly need to rely on tech,

(35:54):
hopefully with not too much of a lag time, to
be able to make up for the diminution in human resources.
And I think so long as those remain active. I
know there's a lot of news media coverage about whether
the Special Government Employee (SGE) status of Elon will
extend beyond 130 days or so; that's

(36:15):
coming up in, like, late May. That's the other sort
of element in this mix.

Speaker 5 (36:21):
Well, lots to keep track of, and, Aram, there's no
doubt that we'll be talking AI again sooner rather
than later. But for now we'll leave it there, and
thank you again, Liz, for having us.

Speaker 3 (36:32):
Thank you for the opportunity. Awesome.

Speaker 2 (36:36):
Thank you both so much, Kevin and Aram, for joining
us today for that great conversation.

Speaker 3 (36:40):
Thank you also to our audience for tuning in today.

Speaker 2 (36:42):
And if you're interested in learning more about all of
our programming here at RTP, discussing the regulatory state and
the American way of life, please visit our website at
regproject dot org.

Speaker 3 (36:52):
That is regproject dot org.

Speaker 8 (36:55):
Thank you, on behalf of the Federalist Society's Regulatory Transparency Project.

Speaker 1 (37:05):
Thanks for tuning in to the Fourth Branch podcast. To
catch every new episode when it's released, you can subscribe
on Apple Podcasts, Google Play, and Spreaker. For the latest from
RTP, please visit our website at regproject dot org.
That's regproject dot org.

Speaker 4 (37:30):
This has been a FedSoc audio production.