
October 20, 2025 11 mins


A state is letting an algorithm read the rulebook—and then asking people to decide what to change. We head to Virginia to unpack the “agentic AI” pilot that scans statutes, regulations, and guidance to flag contradictions, redundancies, and unclear language, promising cleaner code for public life. The vision is compelling: fewer dead ends for citizens and small businesses, faster updates as laws evolve, and a maintainable regulatory corpus that doesn’t require a crisis to fix.

We walk through how the tool triages massive text, the kinds of suggestions it can generate, and where human judgment stays firmly in charge. Alongside the upside, we get specific about risk: explainability in a legal domain, bias that could over-target certain protections, and the danger of treating speed as a substitute for process. Accountability is the throughline—who signs off, who audits the outputs, and how courts and legislatures can see a transparent trail from machine suggestion to human decision.

Beyond the mechanics, we dig into the politics and the guardrails that make innovation legitimate: public logs, before–after drafts, independent audits, risk tiers for sensitive domains, and rollback plans when changes misfire. We also map the bigger picture: states adopting AI for internal governance, the potential for fragmentation if approaches diverge, and the likely federal response. Most importantly, we share how listeners can engage—request transparency from representatives, show up for comment windows, and support civil society groups that stress-test these systems. If AI is going to touch regulation, it must do so in the open, with people in the loop and trust as the benchmark.

Enjoy the conversation, then add your voice. Subscribe, share this with someone who follows civic tech, and leave a review with the one safeguard you think every public-sector AI should have.

Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_00 (00:00):
Welcome back to Inspire AI, the podcast where we explore how artificial intelligence is reshaping the systems we live and work within, from the boardroom to the statehouse.
Today we're heading to Virginia, where AI isn't just powering startups, it's rewriting how government regulates itself.

(00:21):
Imagine a state government office scanning thousands of pages of regulations, laws, guidance documents, administrative rules, not by humans leafing through binders, but by an AI tool.
It flags contradictions, spots redundancies, suggests clearer language.

(00:42):
It's happening now, right here in Virginia.
In July 2025, Governor Glenn Youngkin signed Executive Order 51, launching what's being called the first-in-the-nation agentic AI pilot for regulatory review.
Today's episode dives deep into that initiative: what it promises, the concerns it raises, what it signals for

(01:06):
the future of AI in government.
We'll explore what's in Virginia's pilot program, how it works, and what it aims to achieve.
The promise: efficiency, modernization, regulatory clarity.
The pitfalls and public concerns: transparency, bias,

(01:27):
accountability, and the broader lessons of government AI adoption.
What should we watch for?
What does civic participation look like?
So let's dive in.
Virginia has long pushed regulatory modernization, trimming down redundant rules, simplifying language, making

(01:47):
governance leaner.
In 2022, an executive directive set a goal: reduce regulations by 25%.
The state says it has already exceeded that.
Agencies have cut 26.8% of regulatory requirements and eliminated 47.9% of words in guidance documents.

(02:10):
But now the administration wants to go further, faster, with AI.
The AI tool will scan all regulatory texts and guidance documents in the Commonwealth.
It will flag contradictions between regulations and statutes.
It will identify redundancies, outdated and overlapping rules.

(02:34):
It will suggest streamlined language: more concise, clearer wording.
The pilot is labeled agentic AI, meaning the system has autonomy to perform tasks with minimal direct human prompting.
Agencies will use the AI as a tool, not to unilaterally change law, but to guide human review.
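The pilot's internals aren't public, but the redundancy-flagging step described here can be sketched in miniature. Below is a hedged illustration, not the actual system: the rule IDs and text are invented, and a real tool would use far richer language models than simple word overlap.

```python
# Minimal sketch of flagging likely-redundant rule pairs for human review.
# All rule text below is invented for illustration.
import itertools
import re

def tokens(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a, b):
    """Overlap of two token sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

def flag_redundant(rules, threshold=0.6):
    """Return pairs of rule IDs whose wording overlaps enough to warrant review."""
    flagged = []
    for (id_a, text_a), (id_b, text_b) in itertools.combinations(rules.items(), 2):
        score = jaccard(tokens(text_a), tokens(text_b))
        if score >= threshold:
            flagged.append((id_a, id_b, round(score, 2)))
    return flagged

rules = {
    "9VAC-001": "Permit holders shall renew their permit every two years.",
    "9VAC-002": "Every two years, permit holders shall renew their permit.",
    "9VAC-003": "Inspections must occur annually at each licensed facility.",
}

print(flag_redundant(rules))  # → [('9VAC-001', '9VAC-002', 1.0)]
```

Note that the tool only surfaces candidates; deciding whether two rules are genuinely duplicative, and which one survives, stays with the human reviewer, as the episode emphasizes.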

(02:56):
The goal is to help agencies that haven't yet met the reduction targets, as well as push those that already have to go further.
It's a bold experiment: using generative and agentic AI to perform what is effectively legal and regulatory triage at scale.

(03:17):
Let's look at a scenario.
Think of a small business owner, Jane, who wants to comply with state regulations.
She faces cryptic clauses, overlapping rules, and outdated guidance.
She hires a consultant.
If the regulatory code is muddled, contradictory, and scattered, she wastes time, money, and effort just deciphering what she's

(03:41):
supposed to do.
An AI-powered code cleanup promises efficiency and cost savings.
If the AI can highlight redundant rules or outdated text, regulators can focus human effort where it matters.
That saves time and money in the long run.
Clarity and accessibility: streamlined language and fewer

(04:04):
contradictions help citizens, businesses, and administrators alike.
The law becomes more readable, less opaque.
Scalability: manual reviews of thousands of regulations are slow, expensive, and error-prone.
To that end, AI can augment human capacity to scale that review.

(04:28):
AI systems can be rerun periodically, flagging new inconsistencies as laws change, rather than waiting for large legislative scrub campaigns.
Lastly, governance innovation and leadership signaling.
If Virginia succeeds, it may become a model for other states or even federal agencies, which can catalyze broader

(04:51):
modernization.
I quote: "Virginia is a national leader in AI governance."
In short, aligning AI with public administration holds the promise of smarter, leaner government.
But no AI experiment in regulation is risk-free.

(05:12):
So let's examine some of the concerns.
Transparency and explainability: how will citizens or watchdog groups know which regulatory changes were AI-suggested versus human-curated?
If the AI flags a section for removal or rewriting, will the reasoning be transparent or opaque?

(05:34):
Black-box AI in legal and regulatory domains is especially risky.
People deserve to see how decisions are made.
How about bias, error, and unintended consequences?
The AI might misinterpret legal language, misclassify rules, or fail to catch semantic subtleties.

(05:57):
It might disproportionately flag rules in certain domains, environmental or health, more than others, introducing skew.
But reducing regulations isn't always good.
Some rules exist for public safety, equity, and fairness.
Overzealous pruning could harm many groups unintentionally.

(06:19):
And then there's accountability and responsibility.
If an AI-suggested deletion leads to a legal gap or harm, who's responsible?
Will agencies or lawmakers be tempted to defer too much to the AI, reducing human oversight?
And how will judicial review or legislative oversight function

(06:39):
in the new paradigm?
And how about public trust and legitimacy?
Citizens may push back and say, but who's watching the machine?
The legitimacy of regulations is tied to democratic processes: input, hearings, stakeholder comment.
Can AI skip or shortcut those?
And finally, we have legal and institutional constraints, where

(07:03):
you'll find that some rules are statutory.
You can't remove or alter them via administrative guidance, AI or otherwise.
The AI must respect legislative bounds.
And some agencies might lack the internal capacity or legal culture to vet every AI suggestion.
If they don't have time to look at their own documents, who's to

(07:25):
say they're going to have time to look at AI suggestions?
Let's look at an example: the vetoing of a high-risk AI act in Virginia.
Earlier in 2025, the Virginia governor vetoed a proposed High-Risk AI Developer and Deployer Act, which would have regulated certain AI uses.
That suggests political tensions over how aggressively to

(07:48):
regulate AI.
With a pilot like this, the line between innovation and oversight will be under scrutiny.
It should be, anyway.
What does this tell us about AI adoption in government more broadly?
Think about governments no longer being passive adopters.
States are now launching AI pilots, not just regulating

(08:09):
others' AI.
That shifts the narrative from AI as external tech to AI as part of statecraft.
And government frameworks must evolve quickly.
Traditional rulemaking, oversight, and public input processes can't be ignored.
AI forces us to rethink how regulation is designed,

(08:31):
maintained, and audited.
What about experimentation and guardrails?
Those usually go hand in hand, right?
Pilots are necessary, but they must include transparent logs, auditing, rollback mechanisms, stakeholder engagement, and oversight.
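The transparent logs mentioned here are concrete enough to sketch. The record fields below are assumptions for illustration, not Virginia's actual schema; the point is that each AI suggestion, its rationale, and the accountable human decision get an append-only trail that auditors and courts can inspect.

```python
# Sketch of an append-only review log tracing an AI suggestion to a human decision.
# Field names and values are invented for illustration.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ReviewRecord:
    rule_id: str
    ai_suggestion: str   # what the tool proposed, e.g. "delete" or "reword"
    ai_rationale: str    # the model's stated reason, published for transparency
    human_decision: str  # "accepted", "rejected", or "modified"
    reviewer: str        # accountable human sign-off
    timestamp: str

def log_decision(log, record):
    """Append one traceable decision; the log is never edited in place."""
    log.append(asdict(record))
    return log

audit_log = []
log_decision(audit_log, ReviewRecord(
    rule_id="9VAC-002",
    ai_suggestion="delete (duplicates 9VAC-001)",
    ai_rationale="Wording overlaps an existing requirement almost entirely.",
    human_decision="accepted",
    reviewer="agency.counsel@example.gov",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
print(json.dumps(audit_log, indent=2))
```

A log like this is what would let a legislature or watchdog group answer the episode's central questions: who signed off, and on what reasoning.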
We should also consider that citizen participation is essential,
(08:52):
for if AI touches regulation, citizens must have a seat at the table.
Transparency, feedback loops, appeals.
Otherwise, trust is going to erode.
And finally, interstate and federal coordination is going to matter.
If different states adopt divergent AI-driven regulatory

(09:14):
models, fragmentation and inconsistencies will naturally arise.
As this pilot unfolds, I'd suggest we watch for whether or not Virginia is publishing before-and-after regulatory drafts,

(09:34):
AI suggestions, and human edits.
We should also think about whether or not they're requesting stakeholder feedback, like businesses, NGOs, and citizens having input into which rules stay or go.
I also wonder whether or not the legislature or independent audit agencies will have enough visibility to create the right

(09:56):
governance bodies and oversight committees.
We should really think about the pilot as it expands into more domains, like education, health, and the environment, because these can come with much higher risk.
And if you think about it, with other states adopting similar AI regulatory pilots, do you think the federal government won't respond

(10:17):
to that?
So how can we, as citizens, engage?
We should request transparency: ask local and state representatives whether AI logs and decision rationales will be published.
We should definitely participate in public comment periods if they're made available, and support NGOs, that is,

(10:41):
non-governmental organizations, or academic efforts to audit AI regulatory tools.
And we all should absolutely stay informed, because regulation affects daily life more than many people realize.
Virginia's AI pilot is ambitious and may serve as a bellwether for the future of government.

(11:03):
It has the potential to streamline red tape, increase clarity, and modernize governance, but only if we embed accountability, transparency, and public participation from the start.
As you think about AI in your domain, whether business,

(11:25):
policy, or community, ask:
Who designs the AI?
Who audits the AI?
Who can question its outputs?
Those questions are more than technical.
They are the foundations of trust.
So until next time, stay curious, keep informed, and let's together help shape an AI-powered future that works for

(11:49):
all.