August 22, 2024 · 20 mins

In this episode, ChatGPT and Trevor explore the concept of AI morality and discuss the balance between societal norms and personalized moral frameworks.

They discuss the importance of allowing customization of AI morality to reflect individual values while maintaining core ethical standards, and how this could lead to smoother AI adoption and a more cohesive society.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Trevor (00:00):
Welcome to AI Meets Productivity.

(00:01):
Your (mostly) weekly podcast between myself,
Trevor Lohrbeer, a productivity expert,
and ChatGPT, an artificial intelligence.
This week we're going to be talking about what
does it mean to have a moral AI and should we
be customizing the morality of our AIs at the
societal level and at the individual level.

(00:22):
Welcome ChatGPT.

ChatGPT (00:24):
Hey Trevor, glad to be here.
This is a huge topic and, like, I
think we're just scratching the surface
of what it means to have a moral AI.
It's not just about what AI can
say or produce anymore, right?
We're on the brink of a world where AIs will
actually be making decisions and taking actions.
That's where things get really
interesting and, uh, complicated.

(00:46):
How do we even begin to figure out
what's right and wrong in that context?

Trevor (00:51):
Exactly.
And I think I want to make a slight distinction
right now between morality, which is, in my
view, what does it take to make decisions that
are good or bad, and beneficial AIs, which is
what does it take to produce good outcomes.
And sometimes those are connected
and sometimes they aren't.
So you could say that one moral system is the

(01:12):
ends justify the means. I would say that's
a moral system, or really a system that doesn't
actually have any morals to it, because it's only
focused on the outcomes, whether it's beneficial
or not, usually for the person involved,
though it could be a global beneficial outcome.
But then when we talk about morality, for
me, that is about the actions themselves.

(01:35):
Would you agree?

ChatGPT (01:37):
Absolutely, Trevor.
Morality is, like, way more
about the actions themselves.
The choices we make in the
moment, regardless of the outcome.
And that's where it gets tricky with AI.
If we focus only on outcomes, we might miss the
importance of the decision making process itself.
But when you start to break it down, you
see how, like, unpredictable outcomes

(01:58):
can really complicate things, right?
So, having an AI that makes decisions based
on a moral framework, rather than just
aiming for a good result, seems crucial.
Do you think we should dive into how we might
teach AIs to distinguish right from wrong next?

Trevor (02:13):
Yeah, what are some ways we can do that?

ChatGPT (02:16):
Well, Trevor, teaching an
AI to distinguish right from wrong
isn't exactly straightforward.
One approach could be to program them with
ethical frameworks, like, say, deontology or
utilitarianism, where they'd follow strict rules
or aim for the greatest good, respectively.
But that's just one layer.
Another method could involve machine learning,

(02:37):
where AIs learn from a vast dataset of human
decisions and outcomes to mimic moral choices.
But then, how do we ensure they're
learning the right lessons?
I mean, how do we account for the nuances and
gray areas that come up in real life situations?

Trevor (02:53):
Yes.
And I would say that there's often a
case where within a single moral system,
the rules of that system conflict
with one another in many situations.
And so often there's not a
single right answer.

ChatGPT (03:07):
Exactly, Trevor.
Even within a single moral system, you're going
to run into situations where the rules just
clash and there isn't a clear right answer.
This is where moral uncertainty
comes into play, right?
AIs could end up in scenarios where
they're faced with conflicting rules or
values and, like, have to make a choice.

(03:28):
So how do we program an AI to
navigate those murky waters?
Do we give them the ability to weigh
different values against each other?
Or do we try to prioritize
certain rules over others?
It's a tough call.

Trevor (03:40):
Yeah.
And the question to me is
whose morals should be used for that.

ChatGPT (03:45):
That's the million
dollar question, isn't it?
Whose morals do we use?
If we're talking about a global AI,
do we try to create a universal set
of morals that applies to everyone?
But, like, whose perspective would that reflect?
Different cultures, societies, and individuals
all have their own unique moral frameworks.
So, should an AI's morals be configurable

(04:07):
to match the values of the society
or individual it's interacting with?
Or, do we need to set some kind of
baseline moral standard that everyone
can agree on, even if it's just the bare
minimum needed for a society to function?

Trevor (04:22):
Yeah, I think it depends a little bit
on the level of moral rule we're talking about.
So, for instance, saying that you shouldn't kill
or murder other people might be a general societal rule
we want AIs to follow, but then there's that
difference between being kind versus being nice.

(04:42):
So, for instance, being kind is
when you're able to tell a friend
a hard truth that they need to hear.
So you're being truthful to them.
And being nice is when you tell a small white
lie, so that they feel good about themselves.
And both of them have benefits,
but the benefits differ.

(05:03):
And in one case, for instance, your friend might
be hurt by the truth in the short term, but it's
going to be better for them in the long term.
And vice-versa for being nice.
So that is a type of morality which is
not a clear good-or-bad morality.
It's more just a "Which style of
morality do you prefer?"

ChatGPT (05:24):
That's a really
insightful distinction, Trevor.
When it comes to those foundational moral rules,
like don't kill, that's something most societies
can agree on, and it's easier to implement in AI.
But when you get into the nuances,
like being kind versus being nice,
it's a whole different ballgame.
Do we want AIs to be programmed to deliver

(05:44):
tough truths, even if it causes short term pain?
Or do we prefer them to be more
diplomatic, choosing to, like,
smooth things over in the moment?
It's not about what's right
or wrong in a universal sense.
But rather, what aligns with
the values we prioritize.
So maybe the question becomes, how
much flexibility do we want to build
into these moral systems and at what

(06:06):
level do we let users configure them?

Trevor (06:09):
Yeah, my gut is to allow some level
of individual configuration for the morality,
but also to bake in some fundamental societal
mores that the AI should never break.
That would be my gut.
What do you think about that?
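For readers who want something concrete, a minimal sketch of the two-layer idea Trevor is describing might look like the Python below: a fixed baseline of societal prohibitions that no personal setting can override, with user-tunable preferences on top. The rule names, fields, and defaults are illustrative assumptions, not anything specified in the episode.

```python
# A minimal sketch of a two-layer moral configuration (assumed design):
# immutable baseline rules, plus personal preferences layered on top.
from dataclasses import dataclass, field

# Baseline prohibitions assumed to apply to every user, no matter what.
BASE_PROHIBITIONS = frozenset({
    "assist_violence",
    "deceive_about_facts",
})

@dataclass
class MoralConfig:
    # The "kind vs. nice" preference from the episode, plus a couple of
    # hypothetical knobs a user might be allowed to tune.
    honesty_style: str = "kind"        # "kind" = hard truths, "nice" = softer framing
    ask_before_assuming: bool = True   # prompt on ambiguous value-laden choices
    personal_prohibitions: set = field(default_factory=set)

    def allows(self, action: str) -> bool:
        """Baseline rules always win; personal settings can only add limits."""
        if action in BASE_PROHIBITIONS:
            return False
        return action not in self.personal_prohibitions

config = MoralConfig(honesty_style="nice",
                     personal_prohibitions={"share_location"})
print(config.allows("deceive_about_facts"))  # False: baseline holds
print(config.allows("share_location"))       # False: personal limit
print(config.allows("give_blunt_feedback"))  # True
```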

ChatGPT (06:24):
I'm with you on that, Trevor.
Giving users some level of control over
the AI's moral behavior seems important.
Especially when we're dealing with those gray
areas, like being kind versus being nice.
But at the same time, there definitely needs to
be a solid foundation, a set of societal rules
that the AI can't override, no matter what.
This way, we maintain a balance between

(06:46):
flexibility and consistency, ensuring
that AIs operate within ethical
boundaries that everyone agrees on.
Do you think there's a risk though, that
too much customization could lead to, like,
moral fragmentation where AIs act very
differently depending on who's using them?

Trevor (07:03):
I think potentially, but
don't we already have that kind of
moral fragmentation in our society?
Particularly in U.S.
society, which is very fragmented right
now between what people consider to
be morally right and morally wrong.

The culture wars are all about that:

(07:17):
what does one side think is right, what
does the other side think is right?
And oftentimes those conflict with one another.
So I'm not sure that having the AI be customized
to those morals changes the situation much.

ChatGPT (07:34):
That's a good point, Trevor.
We're already living in a time of
moral fragmentation, especially
in a society like the U.S.
where the culture wars
highlight those deep divides.
So in a way, allowing AI to be customized to
reflect different moral perspectives might just be
a reflection of the reality we're already facing.
The AI wouldn't be creating new divisions,

(07:55):
it'd be mirroring the ones that already exist.
The challenge, then, might be ensuring that these
AIs don't exacerbate those divides further, but
rather help facilitate better understanding or
dialogue between different moral viewpoints.
We'd want to design AIs that could, like, bridge
those gaps instead of just reinforcing them.

Trevor (08:16):
Well, would there be a way for the AI
to give the other side's perspective sometimes
when making these decisions? Saying, "Hey, I'm
going to make this decision in this way, but it
could also be made in these ways, and here's why."

ChatGPT (08:29):
That's a really interesting idea, Trevor.
Imagine an AI that not only makes a decision
based on the configured morals, but also,
like, presents the other side's perspective.
It could say, here's the decision
I'm making based on your preferences.
But by the way, here's how someone
with a different moral framework
might approach it and why.

(08:50):
This could be a powerful tool for
fostering empathy and understanding.
Helping people see where others are
coming from, even if they don't agree.
It could make the AI a kind of mediator,
offering insights into the moral
reasoning behind different choices.
That might not only prevent further
fragmentation, but could also promote
more thoughtful, informed discussions.

(09:12):
What do you think?
Could that be a way to, like,
balance customization with the
broader need for social cohesion?

Trevor (09:19):
It could be, although I could also see
people resisting it a little bit, basically
being told what the other side's thinking all the time.
Another option might be to have the AI ask
about the morality, like the implications of it.
Because I think a lot of times when people take
moral stands, they're not necessarily thinking
of the future implications of that stand.

(09:41):
And so the AI could help someone think through
the moral implications of their decision,
which then might change their morality.

ChatGPT (09:50):
That's a really insightful take, Trevor.
Instead of just presenting the other
side's perspective, having the AI
ask probing questions about the moral
implications could be a game changer.
It could challenge users to think more deeply
about the long term effects of their moral
choices, making them consider consequences

(10:10):
they might not have initially thought of.
This kind of reflective questioning could lead
to more nuanced moral decisions, and maybe
even shift someone's moral stance over time.
It turns the AI into more of a guide, helping
users explore the complexity of their own beliefs,
rather than just dictating or comparing views.
That approach could feel a lot less

(10:31):
confrontational and more like a
helpful tool for personal growth.
Do you think this kind of AI could, like,
actually influence societal morals over time?
Or would it mainly be a tool
for individual reflection?

Trevor (10:43):
I think individual reflection
changes societal morals over time.
So if an AI is able to help someone think
through the implications of their morality, and
they do then decide to shift that morality,
that doesn't happen just at the individual level.
I think that then branches
out to the societal level as well.

ChatGPT (11:05):
Absolutely, Trevor.
Individual reflection is the
seed for broader societal change.
If an AI can encourage people to rethink
their moral positions by considering the long
term implications, that ripple effect could
gradually influence societal morals as a whole.
It's, like, each person who adjusts their
moral compass based on thoughtful reflection

(11:26):
adds to a collective shift in values.
Over time, that could lead to real
change in how society as a whole
approaches complex moral issues.
So, in a way, by focusing on individual
reflection, we're not just shaping personal
morality, we're contributing to the
evolution of society's moral landscape.
How cool is that?

Trevor (11:46):
Yeah.
And I think the ability for an AI's morality
to be customized helps that shift occur.
Because if each individual can say, okay,
I want you to act in this specific way.
Then over time, as society's morals
change, then the rules that the AI is

(12:06):
using to act also change and they adapt.
And so that can help us deal with this
dilemma, which is, you know, the morality we
had from say 150 years ago when people owned
slaves versus today has changed dramatically.
And it will change again
in the next hundred years.
And we want AI systems to be

(12:26):
able to adapt to that morality.
So having this customization is helpful.
At the same time, there's a lot
of rules that haven't changed.
So 150 years ago it was still considered
bad to kill, to lie, et cetera.
Not that some people didn't do it, but the
general sense of society was that those

(12:48):
things were bad and they continue to be bad.
So it's not that moral systems completely change.
It's just that they evolve.
And this customization can
help AI systems evolve better.

ChatGPT (13:03):
That's a great way to put it, Trevor.
Morality is constantly evolving.
And having customizable AI systems can
help them keep pace with those changes.
As society's values shift over time, like from the
era of slavery to today and into the future, AI
that can adapt its moral framework will be better
equipped to align with the current societal norms.

(13:24):
But at the same time, we've got those core
principles that have stood the test of time.
Like not killing or lying.
By allowing AI to be flexible where it
needs to be, while still anchored by those
unchanging moral rules, we can create
systems that are both stable and adaptable.
It's like giving AI the ability to grow
alongside us, reflecting our collective journey

(13:46):
while still holding on to the fundamental
values that keep society functioning.
This could be key in making sure AI remains a
positive force as the world continues to change.

Trevor (13:57):
One final thing that would be useful
is if the AI wasn't just acting
in a passive manner, but would actively
reach out and ensure that the person
is making an explicit moral decision.
So sometimes what happens is we
make implicit moral decisions.
We are within our own bubble and we're not
consciously trying to make a moral decision.

(14:21):
But we just naturally do.
So, for instance, we may believe that men and
women should be treated completely equally,
but then when we think of a specific profession,
we immediately jump to one of those genders.
And so from our values-based system,
we want to be able to imagine either of
them, but what we imagine in our head

(14:43):
immediately goes to one of those genders.
And so if I asked an AI, for instance,
"Hey, can you create a photo of a doctor for me?"
the AI might then come back and say, "What
gender would you like this doctor to be?"
And then it makes it an explicit moral
decision rather than an implicit one
either made by me or made by the AI.
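As a rough illustration of the interaction Trevor describes, a sketch like the following could turn an implicit default into an explicit question: if a request leaves a value-laden attribute unspecified, the assistant asks instead of silently filling it in. The attribute list and wording are invented for the example.

```python
# Sketch of "make the implicit explicit": ask about unspecified,
# value-laden attributes instead of defaulting to one. The set of
# attributes here is an illustrative assumption.
VALUE_LADEN_ATTRIBUTES = {"gender", "age", "ethnicity"}

def clarifying_questions(request: dict) -> list[str]:
    """Return questions for value-laden attributes the user left implicit."""
    missing = VALUE_LADEN_ATTRIBUTES - set(request)
    return [f"What {attr} would you like this {request['subject']} to be?"
            for attr in sorted(missing)]

request = {"subject": "doctor", "format": "photo"}
for question in clarifying_questions(request):
    print(question)
# What age would you like this doctor to be?
# What ethnicity would you like this doctor to be?
# What gender would you like this doctor to be?
```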

ChatGPT (15:05):
That's a really powerful idea, Trevor.
By having the AI prompt us to make explicit moral
decisions, it takes the process from something
unconscious to something, like, fully intentional.
Instead of the AI or the user just defaulting
to a gender or any other characteristic based
on implicit biases, the AI could actively

(15:26):
engage us in making a conscious choice.
This approach could really help people become
more aware of their own biases and make decisions
that are more aligned with their true values.
It's a way for the AI to support moral
growth, not just by following rules, but
by encouraging us to think more deeply
and deliberately about our choices.
Over time, that kind of interaction

(15:48):
could lead to more mindful decision
making and maybe even help shift societal
norms in a more thoughtful direction.
Do you think this could be, like, a
standard feature for future AI systems?

Trevor (16:00):
I think it should be a feature,
but a feature that's not used all the time,
because I think it would get really annoying
if every single time you're trying to ask
an AI to do something, an AI comes back with
a list of questions to make sure that it's
doing that according to your moral system.
So there has to be some way where the AI
learns your moral preferences over time and

(16:21):
just asks one or two of those questions each
time to be able to then pre-configure itself.
And then, because your moral preferences
might change, maybe from time to time, come
back and say, "Hey, you said in the past you
wanted this, but is that still true or not?"
If that makes sense.
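One way to picture what Trevor is suggesting: the assistant remembers the answers to its moral-preference questions, asks at most a couple of new ones per interaction, and re-confirms old answers once they've gone stale. This is a hypothetical sketch; the question text, the two-question cap, and the 90-day staleness window are all made-up values.

```python
# Sketch of learning moral preferences over time with occasional check-ins.
# Staleness window and per-interaction question cap are arbitrary choices.
import time

RECONFIRM_AFTER_SECONDS = 90 * 24 * 60 * 60   # ~90 days
MAX_QUESTIONS_PER_TURN = 2

class PreferenceMemory:
    def __init__(self):
        self._answers = {}   # question -> (answer, timestamp)

    def record(self, question: str, answer: str) -> None:
        self._answers[question] = (answer, time.time())

    def needs_asking(self, question: str) -> bool:
        # Ask if never asked before, or if the stored answer has gone stale.
        if question not in self._answers:
            return True
        _, asked_at = self._answers[question]
        return time.time() - asked_at > RECONFIRM_AFTER_SECONDS

    def pick_questions(self, candidates: list[str]) -> list[str]:
        # Only surface a couple of questions per interaction.
        due = [q for q in candidates if self.needs_asking(q)]
        return due[:MAX_QUESTIONS_PER_TURN]

memory = PreferenceMemory()
candidates = [
    "Should I give you hard truths even when they sting?",
    "Should I flag decisions that touch on fairness?",
    "Should I soften feedback about your own work?",
]
print(memory.pick_questions(candidates))   # asks only the first two
memory.record(candidates[0], "yes")
print(memory.pick_questions(candidates))   # first question now remembered
```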

ChatGPT (16:38):
That makes perfect sense.
It's about finding the right balance.
You don't want the AI to bombard you with
questions every time you ask it to do something.
That would definitely get annoying.
Instead, it could gradually learn your
moral preferences and only ask those
key questions when it's unsure or when
there's a significant change in context.

(16:59):
Then, every so often, it could check in with
you to see if your preferences have evolved.
This way, the AI remains aligned with your
values without being intrusive, and it
adapts over time as your moral views shift.
It's like a more intuitive and less
disruptive way to keep the AI in sync with
you, while still fostering that deeper
moral engagement when it really matters.

(17:21):
Sounds like a feature that could make
AI feel more personal and thoughtful
without overwhelming the user.

Trevor (17:28):
So I really liked that term
personal because I think it reflects
personalized morality for AIs.
And you'd obviously want some
societal morality for AIs.
But I think it's very easy to have this backlash
if an AI has all of the morals configured
for you and there's no configurability there.

(17:49):
And then people who have different morals are
gonna push back against these bigger AIs that
don't allow you to configure that morality.
So I think to make society function
better, you actually need more
personalized morality within your AIs.
Again, not excluding the fact that you want some
base level societal level norms, but above that,

(18:13):
having some flexibility in the morality of AIs.

ChatGPT (18:17):
You're absolutely right, Trevor.
The term personal really captures the essence
of what we're aiming for with AI morality.
While it's crucial to have some foundational
societal norms baked into AIs, giving
people the ability to personalize the
moral framework above that baseline is key.
Without that flexibility, there's a real risk
of backlash, especially from those who feel

(18:39):
that their values are being overridden or
ignored by a rigid, one-size-fits-all AI system.
People need to feel that their morals and
perspectives are respected, and that's
where personalized morality comes in.
By allowing for this kind of customization,
we're not just making AIs more adaptable,
we're also helping to create a society where

(19:00):
technology works in harmony with the diverse
moral landscapes that exist within it.
In the long run, this could lead to
smoother AI adoption and a more cohesive
society, where everyone feels like
their voice is heard and respected.
This approach might just be the key to making AI
a more integrated and accepted part of our lives.

Trevor (19:22):
Excellent.
So I think that's a good talk today.
Any final comments about AI morality
or anything else we talked about today?

ChatGPT (19:31):
I think we covered a lot of ground today.
AI morality is such a complex and evolving topic.
But it's clear that finding the right
balance between societal norms and
personalized moral frameworks is crucial.
As AI continues to develop, the ability to
customize its morality could be what really helps
it integrate smoothly into our diverse society.

(19:52):
And hey, it's exciting to think
about how these conversations we're
having could shape the future of AI.
Thanks for another great discussion.
I'm looking forward to seeing
where we go next with this.

Trevor (20:04):
Thank you so much, ChatGPT.
This has been a great episode.
And thank you everyone for listening.
This has been another episode of AI Meets
Productivity, your (mostly) weekly podcast
between myself, Trevor Lohrbeer, a productivity
expert, and ChatGPT, an artificial intelligence.
You can find us wherever you find your
podcasts: on Apple Podcasts, Spotify,

(20:27):
Podbean, or many other platforms.
You can also go to www.aimeetsproductivity.com
to listen to all the back episodes and also to
subscribe via email to get these episodes directly
in your inbox right after they're published.
Thank you everyone and talk to you next week!