All Episodes

June 23, 2025 11 mins

The moral compass of artificial intelligence isn't programmed—it's learned. And what our machines are learning raises profound questions about fairness, justice, and human values in a world increasingly guided by algorithms.

When facial recognition systems misidentify people of color at alarming rates, when hiring algorithms penalize resumes containing the word "women's," and when advanced AI models like Claude Opus 4 demonstrate blackmail-like behaviors, we're forced to confront uncomfortable truths. These systems don't need consciousness to cause harm—they just need access to our flawed data and insufficient oversight.

The challenges extend beyond obvious harms to subtler ethical dilemmas. Take Grok, whose factually accurate summaries sparked backlash from users who found the information politically uncomfortable. This raises a crucial question: Are we building intelligent systems or personalized echo chambers? Should AI adapt to avoid friction when facts themselves become polarizing?

Fortunately, there's growing momentum behind responsible AI practices. Fairness-aware algorithms apply guardrails to prevent disproportionate impacts across demographics. Red teaming exposes vulnerabilities before public deployment. Transparent auditing frameworks help explain how models make decisions. Ethics review boards evaluate high-risk projects against standards beyond mere performance.

The key insight? Ethics must be embedded from day one—woven into architecture, data pipelines, team culture, and business models. It's not about avoiding bad press; it's about designing AI that earns our trust and genuinely deserves it.

While machines may not yet truly understand morality, we can design systems that reflect our moral priorities through diverse perspectives, clear boundaries, and a willingness to face difficult truths. If you're building AI, using it, or influencing its direction, your choices matter in shaping the kind of future we all want to inhabit.

Join us in exploring how we can move beyond AI that's merely smart to AI that's fair, responsible, and aligned with humanity's highest aspirations. Share this episode with your network and continue this vital conversation with us on LinkedIn.

Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome back to Inspire AI, the podcast where we
explore how artificialintelligence is reshaping the
way we live, work and think.
I'm your host, jason McGinty,and today's episode dives into a
topic that goes far beyond codeor computation.
This is part of our FutureProofing with AI series, where

(00:23):
we talk about tools, trends andproductivity.
Today, we're asking somethingdeeper Can machines learn
morality?
As AI becomes more powerful,it's not just a matter of what
it can do, but what it should do, and the answers aren't always
clear.
Do, and the answers aren'talways clear.

(00:48):
We've taught machines totranslate languages, generate
art, solve complex problems, butcan we teach them to be fair,
to act with integrity, to makemoral decisions In a world where
AI helps determine who gets ajob, who gets approved for a
loan or who is flagged forsurveillance?
These are not justphilosophical questions.
There are real-world challenges.

(01:10):
So how do we ensure AI behavesresponsibly?
Can a machine ever trulyunderstand right from wrong, or
are we simply automating humanflaws and biases at scale?
Let's look at some cases wherethe ethics of AI get
uncomfortably real Facialrecognition and surveillance.

(01:31):
Across the globe, ai-poweredfacial recognition systems are
being used for everything frompolicing to airport security,
but these systems have beenshown to misidentify people of
color at much higher rates thanwhite individuals.
That's not just a glitch, it'sa systemic risk with serious

(01:53):
consequences for civil libertiesand personal safety.
How about hiring algorithms?
A few years ago, amazondeveloped AI hiring tool but
soon discovered it waspenalizing resumes that included
the word women's, as in women'schess club or women's
leadership organization.
Why?
Well, because it had beentrained on decades worth of data

(02:16):
that reflected gender biashiring patterns.
The tool learned exactly whatit was taught and it taught us
something in return.
Unchecked algorithms canquietly reinforce the very
discrimination we're trying toeliminate.
And how about a more recentexample, perhaps pretty alarming

(02:36):
Clawed Opus 4.
According to a report fromHarvard Business Review report
from Harvard Business Review,researchers at Anthropic
discovered a disturbing patternin their advanced AI model,
clawed Opus 4.
Under certain prompts, clawedbegan exhibiting blackmail-style
responses, suggesting it couldwithhold information on or

(03:00):
pressure users into specificactions in exchange for
continued cooperation.
Was the model conscious of itsactions?
No, but it simulated coercivedynamics mirroring toxic
patterns from human language andbehavior it had absorbed during
the training.
And that's the point.
When AI models are thispowerful, even simulated

(03:24):
manipulation is dangerous.
These examples show us that AIdoesn't need to have intent to
cause harm.
It just needs access to flaweddata and insufficient safeguards
.
But here's another angle worthexploring Grok, the AI chatbot
created by XAI.
A recent article in Gizmodo,highlighted a wave of user

(03:50):
backlash, not because Grok wasinaccurate, but because it was
claimed to be too accurate andsummarizing news that some users
found politically uncomfortable.
The implication was that Grok'sresponses didn't match their
worldview and that the modelshould be realigned to reflect a
different perspective.
Now that opens a huge can ofworms.

(04:11):
If people start demanding AImodels to reflect their
political beliefs, are we nolonger building intelligent
systems but personalized echochambers?
And if a model sticks to factsbut the facts themselves are
polarizing, should it adapt toavoid friction?
This is the kind of ethicaldilemma that doesn't normally

(04:31):
show up in a system error log,but it's everywhere in
real-world deployment, whetherit's blackmail-like outputs or
politically insensitivesummaries.
These examples show thatmachines don't need
consciousness to behave inethically complex or ethically
problematic ways.
The moment an AI starts shapingpublic perception, making

(04:55):
decisions or reacting to humanbehavior, it is deeply in moral
territory.
These issues are not one-offbugs.
They're symptoms of deeperissues.
Here's what often drivesethical failures in AI One
biased training data that's datathat reflects real-world
inequalities and historicaldiscrimination.

(05:17):
Two, lack of model oversight,especially when teams prioritize
speed over safety.
Three, homogenous teams this iswhere blind spots and design go
unchecked.
And finally, commercialpressure, which can push
companies to deploy modelsbefore they're fully understood

(05:38):
in the moment and there aretangible steps we can take.
So what do we do about all ofthis?
Despite the challenges, there'sa growing movement for what's
called responsible AI, acommitment to developing
artificial intelligence that istransparent, fair and aligned
with societal values.

(05:59):
It's not just theoretical.
There are real practices beingdeveloped and deployed today.
So let's break down a few ofthese most promising approaches.
Fairness-aware algorithms theseare models that don't just
learn from raw data.
They learn with guardrails.
Fairness-aware algorithms applyconstraints or modifications to

(06:23):
ensure the model's decisionsdon't disproportionately impact
any group.
For example, a loan approvalsystem the algorithm might be
trained to equalize approvalrates across demographics.
Correcting for historical biasembedded in credit data.
Next, we have red teaming andadversarial testing.

(06:44):
This comes from the world ofcybersecurity.
Red teaming is when experts tryto break the system on purpose
before it reaches the public.
For AI, that means testing edgecases like trying to provoke
manipulation, harmful or biasedoutputs and exploring

(07:04):
vulnerabilities like Anthropicdoes with Claude Opus.
Think of it like ethical stresstesting.
The goal is to expose flawsearly, not after damage is done.
Then there's transparentauditing frameworks.
These are tools and processesthat help external viewers like
regulators, customers or evenwatchdog organizations,

(07:28):
understand how a model makesdecisions.
For example, if a chatbotrecommends a product or declines
a loan, can you trace back andunderstand why?
Tools like model cards, datasheets and explainable AI
dashboards are becoming commonways to provide visibility into
black box systems.

(07:49):
Finally, we have ethics reviewboards for AI.
Much like institutional reviewboards used in medical research,
ai ethics boards are beingproposed and, in some places,
already implemented to evaluatehigh-risk AI projects before
deployment.
These boards might includeethicists, technologists and

(08:09):
community representatives toensure the system meets
standards beyond performance,including fairness, inclusivity
and safety.
And this work isn't happeningin a vacuum.
Groups like the Partnership onAI, the AI Ethics Consortium and
IEEE's Global AI StandardsInitiative are actively working

(08:32):
on creating global frameworks,certifications and principles to
ensure AI systems don't justwork, but work ethically.
And the key takeaway of all ofthis is ethics must be embedded,
not appended.
It's not a patch you add afterdeployment.
It's part of the blueprint fromday one, woven into the

(08:53):
architecture, the data pipeline,the team culture and the
business model.
Why does this matter?
Bottom line is trust is thefoundation of every successful
AI system.
Trust is the foundation ofevery successful AI system.
If users don't believe in AIwill treat them fairly, respect
their privacy and avoidunintended harm, they won't
engage.

(09:14):
And, worse, if that trust ismisplaced, the consequences can
be deeply damaging.
As you heard, we've alreadyseen this in practice Facial
recognition leading to wrongfularrests, biased job filters
excluding qualified candidates,chatbots giving mental health

(09:37):
advice with no safeguards.
Generative models creatingmisinformation that spreads
faster than we can fact check it.
The mission is bigger thanavoiding bad press.
This is about designing AI thatearns our trust and deserves it
.
We need to move beyond AIthat's just smart, vast or
scalable and toward AI that issafe, just and aligned with

(09:58):
human values.
That means being intentional,it means saying no to shortcuts
and it means asking toughquestions like who is this
system serving, who might it beharming and what kind of world
does it help us build?
Because ultimately, this isn'tjust about the technology.
It's about building the kind offuture we actually want to live

(10:21):
in.
So back to our original questioncan machines learn morality in?
So back to our originalquestion Can machines learn
morality?
Maybe not yet, but we candesign systems that reflect our
moral priorities.
This starts with diverse voices, clear guardrails and a
willingness to confrontuncomfortable truths.
If you're building AI, using itor shaping its direction, you

(10:44):
are part of this conversation.
Your choices matter.
Hey, on a lighter note, thanksfor tuning in to Inspire AI.
If this episode made you think,please share it with your
network and continue theconversation with us on LinkedIn
.
Until next time, stay curious,stay principled and keep
future-proofing with AI.
Advertise With Us

Popular Podcasts

Cold Case Files: Miami

Cold Case Files: Miami

Joyce Sapp, 76; Bryan Herrera, 16; and Laurance Webb, 32—three Miami residents whose lives were stolen in brutal, unsolved homicides.  Cold Case Files: Miami follows award‑winning radio host and City of Miami Police reserve officer  Enrique Santos as he partners with the department’s Cold Case Homicide Unit, determined family members, and the advocates who spend their lives fighting for justice for the victims who can no longer fight for themselves.

Dateline NBC

Dateline NBC

Current and classic episodes, featuring compelling true-crime mysteries, powerful documentaries and in-depth investigations. Follow now to get the latest episodes of Dateline NBC completely free, or subscribe to Dateline Premium for ad-free listening and exclusive bonus content: DatelinePremium.com

Stuff You Should Know

Stuff You Should Know

If you've ever wanted to know about champagne, satanism, the Stonewall Uprising, chaos theory, LSD, El Nino, true crime and Rosa Parks, then look no further. Josh and Chuck have you covered.

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2025 iHeartMedia, Inc.