
July 24, 2025 · 11 mins

Unmasking the Dangers of 'Preventing Woke AI': A Critical Analysis

In this midweek special of PsyberSpace, Leslie Poston responds to a significant news event: the Trump administration's signing of a federal AI action plan and an executive order called 'Preventing Woke AI.' Focusing on generative AI, LLM AI, and NLP AI, Poston examines how AI optimized for ease can subtly reprogram societal norms and reinforce biases. The episode covers the threat of authoritarian control through AI, the illusion of neutral AI, and the psychological effects of passive AI use. Poston also offers guidance on ethical AI use and emphasizes staying aware and critical in the face of AI-driven convenience. The episode concludes with a call to action: support human-rights-centered AI initiatives and push for protective legislation.

00:00 Introduction and Context
01:44 The Dangers of AI Comfort
02:31 Psychological Impact of AI
04:09 Bias and Ideological Control in AI
07:59 The Cost of AI Comfort
08:49 How to Resist and Use AI Ethically
10:58 Conclusion and Final Thoughts


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Leslie Poston (00:11):
Welcome back to PsyberSpace. I'm Leslie Poston. Today's episode is a special midweek episode in response to a specific news item in the United States. Before I continue, I just want to remind you that when I use the phrase AI in this podcast and other episodes of this podcast, I'm speaking

(00:34):
specifically of generative AI, LLM AI, and NLP AI, not machine learning. On to the news item. The Trump admin just signed a federal AI action plan and an executive order called Preventing Woke AI. And if that

(00:56):
phrase sounds like it came from a troll on Reddit or a hate group on Telegram, it kind of did, but now it's federal policy. As my area of research currently is applied psychology with a concentration in media and technology, it felt imperative that I offer some guidance on this through the lens of that research and through the research of others

(01:18):
right away. This isn't just about tech. This is about how authoritarianism reprograms reality by quietly reprogramming the tools we trust, AI tools. Like, the tools that are being shoved into every corner of our lives right now in this current AI gold rush, from our email,

(01:39):
phones, and search engines to schools, doctors' offices, and our jobs. One of the most dangerous things about AI isn't what it, quote, knows. It's how easy it makes everything feel. We love it because it helps us write faster, think faster, code better, shop easier, even feel smarter. But that ease is a

(02:02):
trapdoor. It makes us stop questioning, stop noticing, and stop resisting. And that's exactly the point. If our brains are already wired to seek comfort and AI is being intentionally designed to reinforce that comfort while quietly inserting bias, then how do we break free? How do we resist the easy button

(02:26):
when that's the whole interface? Let's talk about it. Human brains are energy misers. Neurologically, we're built to conserve cognitive effort whenever possible. We default to mental shortcuts, what Daniel Kahneman called system one thinking, which we've talked about before. Fast, intuitive,

(02:47):
automatic. It's how we avoid decision fatigue and overload. It's also why it's so hard to unlearn something, even when we know it's wrong. When we encounter complexity, nuance, contradiction, or discomfort, our brains react like they're under threat. Our stress hormones rise, our fight-or-flight systems activate, and we look for a way out. AI tools tap

(03:12):
directly into this wiring. They offer us clean answers, fast results, seamless experiences, and they don't challenge our thinking unless we ask them to. Even then, they don't do such a good job. Plus, most people won't ask them to because easy feels too good. But when technology is optimized for ease over truth, it stops being a

(03:34):
tool for growth and becomes a tool for control. I mean, change is hard, and that's by design. If you've listened to our earlier episode on why people struggle to change, you'll remember that change triggers a cascade of psychological defenses: fear of the unknown, identity threat, and social belonging concerns.

(03:55):
AI tools that reinforce our worldview, especially biased or exclusionary ones, don't just feel convenient, they feel validating. They reward our existing beliefs, and they eliminate the friction of dissent. This is why propaganda works better when it's delivered through design, and it's why AI systems programmed with ideological slants like Grok,

(04:18):
Elon Musk's chatbot on Twitter, which was recently modified to reflect white nationalist, antisemitic, and authoritarian views, are so dangerous. They don't have to yell to convince us. They just have to agree with us. Comfort is the delivery system. And what about the illusion of

(04:38):
neutral AI? AI often feels neutral, and the tone is calm. The information is formatted cleanly. The interface feels objective, but that's an illusion. All AI systems are trained on human data, written by humans, selected by humans, labeled by humans, and built by teams with

(04:59):
human ideologies, human incentives, and human blind spots. There is no such thing as value-neutral design. When AI developers claim they're removing bias, what they often mean is they're replacing one bias with another. And right now, in some sectors, the replacement is deliberate. To

(05:20):
erase inclusive, equitable, fact-based perspectives and reframe them as woke ideology. The language of anti-woke isn't just a cultural meme. It's not funny. It's a campaign to reengineer the default settings of reality. And the most chilling part? It works best when no one notices. What about passive use, active harm?

(05:45):
Let's talk about what happens psychologically when people use AI passively. First, there's moral disengagement, the tendency to separate our actions from their ethical consequences when someone else, like a machine, is doing the heavy lifting. And then there's the diffusion of responsibility. When something goes wrong, we assume someone else is accountable. Oh, it's

(06:08):
not me. It was just a tool. And then there's confirmation bias, where people cherry-pick AI outputs that align with what they already believe and ignore or reject the rest. Put that all together, and you get a perfect storm: users who feel smart and empowered, but who are actually being slowly rewired into

(06:28):
ideological compliance. We've talked before on the show about gaslighting, how manipulators make you doubt your own perceptions. Biased AI doesn't need to gaslight you overtly. It just needs to flood you with reinforcement. It doesn't erase your mind. It nudges it until it's not really yours anymore.

(06:53):
And let's make this plain. This isn't an accident, and it's not just about market forces or unintentional gaps in data. This is about power. The people shaping AI policy right now, particularly those pushing the anti-woke narrative, are trying to reshape public perception through technology. That includes Trump, Musk, Thiel, and

(07:15):
other far-right players who are investing heavily in AI while calling for the erasure of inclusive language, equity, diversity, and even historical facts from the training dataset. What's being labeled woke here isn't radical. It's basic decency. It's truth and reality. They want AI to

(07:35):
reflect their worldview exclusively because AI is becoming the front end for human knowledge. If they control that interface, they control public understanding of everything: race, history, gender, war, freedom. This is not about preventing bias. It's about institutionalizing bias. And there's a big cost to

(07:58):
that kind of comfort. Here's where things get painful. Most people won't resist this, not because they're evil, but because they're tired. They're overworked. They're distracted. They're bombarded with notifications, stress, media, misinformation. They're looking for something that makes their life a little easier, smoother, and a little more manageable, and AI delivers, at least on the

(08:21):
surface. But the cost of that comfort is cumulative. More discrimination in hiring and housing, more hate speech dressed up as free speech, more historical revisionism, more harm to marginalized groups, more authoritarian control passed off as innovation. The easy button is lying to us, and it's lying in our own voice.

(08:49):
So what does it take to resist? Well, resistance requires friction, effort, awareness, intention, and a willingness to feel uncomfortable. Ethical AI use means asking where your tools get their information, checking what voices are missing in the information you get back from your prompts, adjusting

(09:10):
defaults and filters, supporting diverse and transparent AI projects, calling out bias when you see it, even when it's subtle, and frankly, maybe using AI on hard mode, which means installing it on your own laptop and not using an online, publicly accessible AI tool. It means knowing that the easiest answer

(09:31):
is almost never the most inclusive or the most accurate, and being okay with sitting in that discomfort. Moral reasoning is a muscle. If we never exercise it, we atrophy. If we let machines do our thinking for us, we lose not just agency but empathy. And as we talked about in an episode on

(09:52):
AI and education recently, you can lose cognitive ability as well. And you don't need to be an engineer to take action on this. You just need to be awake, not woke in the culture-war sense, but just awake to the choices you're making when you use these tools. Support AI companies that center human rights and

(10:13):
transparency. Push for legislation that protects marginalized groups from algorithmic harm. Read more than one source. Interrupt your own assumptions. And if you're building tools, build them with care, because code is never just code. It's power. There's a reason the

(10:34):
people in power want AI to make things easy for you, because if it's easy, you won't notice it changing you. But noticing is your job now, especially if you care about the truth, justice, and reality itself. You don't have to become a technologist or a psychologist. You just have to pay attention. Your brain craves ease, but your

(10:56):
ethics deserve more. Thanks for tuning in to this special episode of PsyberSpace. This is Leslie Poston signing off. Stay critical, stay curious, and don't mistake comfort for safety. We'll be back Monday with our next regularly scheduled weekly episode. Thanks for listening.
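
For listeners curious about the "hard mode" suggestion around the 09:10 mark, running a model on your own machine instead of relying on a hosted chatbot, here is a minimal sketch. It assumes a Python environment with the Hugging Face transformers and torch packages installed; the model name below is illustrative only, and any small open-weights model you trust would work the same way.

```python
# Minimal sketch: generating text with a locally run open-weights model
# instead of a hosted, publicly accessible chatbot.
# Assumes `pip install transformers torch`; the model name is an illustrative
# example of a small instruction-tuned model, not an endorsement.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # downloads once, then runs offline from the local cache
)

prompt = "What questions should I ask about the sources behind an AI-generated answer?"
result = generator(prompt, max_new_tokens=150, do_sample=False)

# The pipeline returns a list of dicts; "generated_text" holds the prompt plus the model's continuation.
print(result[0]["generated_text"])
```

Running locally does not remove the biases baked into a model's training data, but it does put the defaults, filters, and update schedule in your own hands rather than a platform's, which is the kind of added friction the episode describes.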