Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
AI ethics in everyday life.
Speaker 2 (00:16):
Right now, as you're listening to this, dozens of algorithms
are making decisions about you. They're deciding what news you'll see,
whether you'll get that loan, and who you might fall
in love with, all without asking your permission or explaining
their reasoning. I'm Jason Park, and this is AI ethics
in everyday life, where we pull back the curtain on
the invisible digital forces reshaping human experience one algorithm at
(00:37):
a time. So Sarah's story really highlights how opaque
these algorithms can be. One minute, you're seeing tons of
potential matches, the next, crickets. It's like being digitally ghosted,
not by a person, but by the system itself. Makes
you wonder what are these algorithms really doing?
Speaker 1 (00:53):
Right?
Speaker 3 (00:54):
And it's not just some random glitch. The Wikipedia article
on online dating, though a bit out of date, mentions how
factors like age, location, even profession are all fed into
these algorithms. It's a complex recipe and we the users
don't know the ingredients.
Speaker 2 (01:10):
Yeah, and in Sarah's case, moving to a new city seems
to have been the trigger. The algorithm probably recalibrated based
on her new location, potentially prioritizing people closer to her.
But who knows what other factors came into play.
Speaker 3 (01:22):
And that's the problem, this lack of transparency. We're trusting
these systems to help us find connections, yet we have
no real understanding of how they work. Yeah, it's a
black box. And as the episode description mentions, that black
box can perpetuate societal biases. Take for instance, the studies
on racial preferences in online dating mentioned in the article,
(01:44):
White women were significantly less open to interracial relationships than
white men. If that data is used to train an algorithm, then.
Speaker 2 (01:53):
The algorithm learns and replicates that bias. It's a vicious cycle,
and it's not just race. The article cites a twenty
eighteen study showing how women's desirability on these apps declines
sharply after age twenty, while men's increases until fifty. So
if Sarah at thirty two moved to a city with
a younger demographic.
Speaker 3 (02:13):
The algorithm might have deemed her less desirable in that
new context, pushing her profile down in the rankings. It's
chilling, really. These systems designed to connect us can actually
create digital barriers, reinforcing existing societal inequalities exactly.
Speaker 2 (02:27):
And then there's the business model aspects. The whistleblower Marcus
Rodriguez points to profit driven decisions influencing the algorithms. I mean,
these platforms need to keep users engaged, right, So what's
more engaging than a constant stream of matches, even if
those matches.
Speaker 3 (02:41):
Are right, superficial, or unlikely to lead anywhere. It's a
numbers game. The Wikipedia article highlights how some dating sites
inflate the number of active profiles, even including fake ones,
to make the pool seem larger and more enticing. It's misleading,
to say the least, and it creates a false sense
of abundance, which paradoxically can make people less likely to
(03:05):
commit to a partner.
Speaker 2 (03:06):
It's almost like a casino designed to keep you playing,
and as doctor Batel, the ethicist, explains, this can reinforce
social segregation. People are being sorted and filtered not just
by their stated preferences, but by a host of hidden
factors determined by the algorithm. It's creating echo chambers of desirability,
(03:28):
further dividing us instead of bringing us together.
Speaker 3 (03:31):
I see interesting, absolutely, and the fact that most users
have no idea how these algorithms work or the extent
to which they're being manipulated is deeply concerning. It's a
real ethical dilemma. We need greater transparency and accountability in
this space, definitely.
Speaker 2 (03:48):
So we were talking about these algorithms and how much
control we really have this whole Sarah situation. It's a
bit unsettling, right, yeah.
Speaker 3 (03:55):
Absolutely. It's like we're handing over these incredibly personal decisions
to a system we don't understand, and the potential
for manipulation. It's a real concern.
Speaker 1 (04:07):
It is.
Speaker 2 (04:08):
I was reading this article and it mentioned a few
legal cases related to dating apps. One case, back in
twenty thirteen, an Ashley Madison employee sued the company, claiming
she got repetitive strain injuries from creating, get this, a
thousand fake profiles in three weeks.
Speaker 3 (04:22):
A thousand? Oh wow, a thousand fake profiles. What was
the company's response, Well.
Speaker 2 (04:28):
They countersued, of course, claiming she kept confidential documents. But
the interesting bit is they admitted to creating fake profiles,
saying it was for quality assurance testing.
Speaker 3 (04:38):
They said. Right, "testing" sounds a bit like a way
to inflate their numbers. Doesn't it make the pool seem bigger,
more attractive exactly?
Speaker 2 (04:45):
And it's not an isolated incident. There was this case
with Zoosk in twenty fourteen where a married woman accidentally
created a profile just by clicking on a pop up ad.
One click and suddenly she's being bombarded by messages from
single men.
Speaker 3 (05:00):
Oh really? That must be invasive, to say the least.
So it's not even always about the algorithm itself, but
the practices around it.
Speaker 2 (05:08):
Yeah, yeah, it's the whole ecosystem. Then there's It's Just
Lunch, sued in twenty fourteen for allegedly misleading customers about
having matches lined up, and JDI Dating, fined over six
hundred thousand dollars for using fake profiles, virtual cupids they
called them, to lure people into paid memberships and renewing
those memberships without consent too.
Speaker 3 (05:28):
Wow, it's like a minefield out there. It makes you
wonder how much of what we see on these apps
is genuine, you know, and how much is engineered to
keep us hooked, keep us paying, right.
Speaker 2 (05:40):
And then there are the deeper, more systemic issues like
the Tinder lawsuit in twenty fourteen, Whitney Wolfe suing for
sexual harassment and discrimination, and later in twenty eighteen, the
company firing the VP of marketing after she accused the
former CEO of sexual assault.
Speaker 3 (05:56):
It's disturbing these platforms designed to connect people can
also be breeding grounds for exploitation and abuse, and the
power dynamics at play, they're often.
Speaker 2 (06:07):
Skewed, absolutely, and all of this happening against a backdrop
of, well, minimal regulation. The US government didn't really start
regulating these services until two thousand and seven, and even
then it's mostly focused on international matchmaking.
Speaker 3 (06:22):
So basically the wild West of the digital age, and
we the users are left to navigate this landscape largely
on our own. It's concerning, to say the least. We
need more transparency, more accountability, definitely.
Speaker 2 (06:38):
So we've been talking about these shadowy algorithms and their
impact on dating, but these things are everywhere, right? And "customers
who bought this also bought." It's almost creepy how
well they know us.
Speaker 3 (06:49):
Yeah, and that's collaborative filtering in action. They're not looking
at the product itself, just at what other people who
bought similar things also bought. It's like a digital hive
mind. Yeah, exactly.
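To make the "digital hive mind" concrete, here is a minimal item-to-item co-occurrence sketch, the simplest form of collaborative filtering. The purchase log, users, and items are all invented for illustration:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase log: user -> set of items they bought.
purchases = {
    "user_a": {"kettle", "mug", "teapot"},
    "user_b": {"kettle", "mug", "tea_sampler"},
    "user_c": {"kettle", "teapot"},
    "user_d": {"notebook", "pen"},
}

# Count how often each pair of items shows up in the same user's history.
co_counts = defaultdict(int)
for items in purchases.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def also_bought(item, k=3):
    """Items most often bought alongside `item` -- no item features used,
    only crowd behavior, which is exactly the 'hive mind' signal."""
    scores = {b: n for (a, b), n in co_counts.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(also_bought("kettle"))  # ['mug', 'teapot', 'tea_sampler']
```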
Speaker 2 (06:59):
But then they also recommend books by authors you've read before,
which is content based filtering. So it's not just one thing.
Speaker 3 (07:05):
It's a mix, right, a hybrid approach. And that's where
it gets interesting, because each approach has its own quirks.
Like, collaborative filtering needs tons of data. Think about Last.fm,
for example. If you're a new user, it
struggles to recommend anything decent. That's the cold start problem.
Speaker 2 (07:23):
Oh yeah, I see interesting.
Speaker 3 (07:24):
But content based like Pandora can start with very little.
You pick a song and it plays similar stuff, but
it's also much more limited.
Speaker 1 (07:34):
Right.
Speaker 3 (07:34):
It won't suddenly recommend a completely different genre, even if
it might be something you'd love.
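By contrast, a minimal content-based sketch looks only at item features, loosely in the spirit of Pandora's trait-based approach. The songs and hand-labelled features here are hypothetical:

```python
import math

# Hypothetical catalogue: song -> hand-labelled audio traits in [0, 1].
songs = {
    "song_a": {"acoustic": 0.9, "tempo": 0.3, "vocals": 0.8},
    "song_b": {"acoustic": 0.8, "tempo": 0.4, "vocals": 0.7},
    "song_c": {"acoustic": 0.1, "tempo": 0.9, "vocals": 0.2},
}

def cosine(u, v):
    """Cosine similarity between two feature dicts."""
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in set(u) | set(v))
    norm = lambda w: math.sqrt(sum(x * x for x in w.values()))
    return dot / (norm(u) * norm(v))

def similar_to(seed, k=2):
    """Rank other songs by similarity to one liked track. A single seed
    is enough to start (no cold start for a new user), but results never
    stray far from the seed's feature profile -- no genre jumps."""
    scores = {s: cosine(songs[seed], f) for s, f in songs.items() if s != seed}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(similar_to("song_a"))  # ['song_b', 'song_c']
```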
Speaker 2 (07:40):
Got it. So they each have their strengths and weaknesses.
It's like a trade off between breadth of recommendations and
the amount of data needed exactly.
Speaker 3 (07:48):
And then there's this whole question of how these systems
are different from search algorithms. I mean, they're both trying
to find things for you, but in different ways. It's
a subtle but important distinction, especially with legal cases like
the Gonzalez versus Google one.
Speaker 2 (08:03):
It is. So these recommender systems, they're not just some
newfangled thing. This article mentions Grundy way back in
nineteen seventy nine, asking users questions to figure out their preferences.
Speaker 3 (08:12):
That's pretty ingenious for the time, it is. And then
there's the digital bookshelf idea from nineteen ninety. It's amazing
to see how these concepts have evolved from simple questionnaires
to complex AI powered systems.
Speaker 2 (08:25):
Yeah, and the Netflix Prize, that really pushed the field forward.
A million dollars for a ten percent improvement in accuracy.
That's a serious.
Speaker 3 (08:34):
Incentive, it is, and it led to some really innovative approaches,
like blending over one hundred different algorithms.
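The blending itself can be as simple as a weighted average of each component model's predicted rating. A toy sketch, with stand-in models and made-up weights (real Netflix Prize ensembles learned the weights and used far more sophisticated components):

```python
def blend(models, weights, user, item):
    """Weighted average of the component models' predicted ratings."""
    return sum(w * m(user, item) for m, w in zip(models, weights)) / sum(weights)

# Stand-in components; real ones would be matrix factorization,
# neighborhood models, and so on.
global_mean = lambda u, i: 3.5  # everyone's average rating
user_bias   = lambda u, i: 3.9  # this user rates generously
item_model  = lambda u, i: 2.9  # this film rates below average

print(blend([global_mean, user_bias, item_model], [0.2, 0.3, 0.5], "u1", "film"))
# 3.32 -- the ensemble's compromise between the three signals
```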
Speaker 2 (08:40):
Who would have thought of that, right, It's like the more,
the merrier. But it also raises questions about reproducibility. This
article mentions a crisis in the field where many published
results are hard to replicate.
Speaker 3 (08:52):
Yeah, that's a real problem. It makes it difficult to
know which approaches actually work. And then there's the issue
of evaluating these systems. Offline evaluations using historical data can
be misleading. It's not the same as real world user behavior.
Speaker 2 (09:06):
Right, and user studies are expensive and time consuming. A/B
testing is probably the most realistic, but even that has
its limitations. It's a tricky business measuring the effectiveness of
these systems.
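A sketch of what that offline loop looks like: hide part of a user's logged history, recommend from the rest, and score the overlap with precision at k. The data and ranked list are made-up stand-ins:

```python
def precision_at_k(recommended, held_out, k=5):
    """Fraction of the top-k recommendations the user actually consumed."""
    return sum(1 for item in recommended[:k] if item in held_out) / k

recommended = ["a", "b", "c", "d", "e"]  # some model's ranked list
held_out = {"b", "e", "f"}               # what the user really did next

print(precision_at_k(recommended, held_out))  # 0.4

# The caveat from the discussion: a high score only means the model
# predicts logged behavior. It says nothing about how users would react
# to items they were never shown -- that's what A/B tests probe.
```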
Speaker 3 (09:16):
It is, and it's not just about accuracy. There's diversity, novelty, serendipity,
even trust. If users don't trust the system, they won't
use it, no matter how accurate it is. It's a
complex interplay of factors, definitely.
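Those beyond-accuracy qualities can be quantified too. A sketch of two common ones, intra-list diversity and novelty, on a toy list with invented genres and popularity counts:

```python
import math
from itertools import combinations

genres = {"a": {"jazz"}, "b": {"jazz"}, "c": {"folk"}, "d": {"jazz", "folk"}}
popularity = {"a": 900, "b": 40, "c": 5, "d": 200}  # interactions per item
total_users = 1000

def intra_list_diversity(items):
    """Average pairwise dissimilarity (1 - Jaccard overlap of genres)."""
    pairs = list(combinations(items, 2))
    return sum(1 - len(genres[x] & genres[y]) / len(genres[x] | genres[y])
               for x, y in pairs) / len(pairs)

def novelty(items):
    """Mean self-information: recommending rare items scores higher."""
    return sum(-math.log2(popularity[i] / total_users) for i in items) / len(items)

print(intra_list_diversity(["a", "b", "c"]))  # ~0.67: two jazz, one folk
print(novelty(["a", "b", "c"]))               # ~4.15: boosted by rare items
```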
Speaker 2 (09:30):
So we've talked about the mechanics of recommender systems, but
what about the bigger picture, their impact on.
Speaker 3 (09:35):
Well everything, right, it's not just about suggesting products or movies.
These systems influence what we read, who we connect with,
even what we believe. It's pretty powerful stuff.
Speaker 1 (09:49):
It is.
Speaker 2 (09:51):
This article mentions algorithmic radicalization. That's unsettling, how these systems
can inadvertently steer people towards extreme viewpoints.
Speaker 3 (10:01):
Yeah, it's the filter bubble effect. You're constantly being shown
content that reinforces your existing beliefs, creating echo chambers, and
it's hard to break out of that exactly.
Speaker 2 (10:13):
And it's happening on a massive scale. Think about social media.
These platforms, they're using algorithms to curate our feeds, deciding
what we see and don't.
Speaker 3 (10:21):
See, and that has real world consequences. This article mentions
how some platforms are trying to mitigate this, like Twitter
with Community Notes and YouTube's planned pilot.
Speaker 2 (10:32):
Oh yeah, interesting. What about legal and ethical considerations? This
whole area seems like a minefield.
Speaker 3 (10:39):
It is. There's the Netflix Prize debacle, privacy concerns, bias
and recommendations. It's a complex landscape.
Speaker 1 (10:47):
It is.
Speaker 2 (10:48):
The article mentions Aviv Ovadya and bridging-based ranking,
empowering user groups to influence algorithm design. That sounds promising, Yeah.
Speaker 3 (10:56):
It does. More transparency and accountability are crucial, but it's
also a technical challenge. How do you actually implement these
bridging algorithms in a way that's fair and effective?
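One toy answer to that "how" question, not Ovadya's actual proposal but the core intuition behind bridging: rank items not by raw engagement but by agreement across groups that usually disagree. The groups, items, and approval rates here are entirely hypothetical:

```python
# item -> approval rate within each of two normally-opposed user groups
ratings = {
    "post_1": {"group_a": 0.9, "group_b": 0.1},  # divisive hit
    "post_2": {"group_a": 0.6, "group_b": 0.7},  # bridging content
    "post_3": {"group_a": 0.2, "group_b": 0.3},  # unpopular everywhere
}

def bridging_score(approvals):
    """Take the minimum approval across groups, so an item ranks well
    only if every group rates it well -- cross-group agreement wins."""
    return min(approvals.values())

ranked = sorted(ratings, key=lambda i: bridging_score(ratings[i]), reverse=True)
print(ranked)  # ['post_2', 'post_3', 'post_1'] -- the divisive hit ranks last
```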
Speaker 1 (11:08):
Right?
Speaker 2 (11:09):
And how do you measure their success? It's not just
about accuracy, it's about diversity, novelty, serendipity, even trust exactly.
Speaker 3 (11:19):
And then there's the cold start problem. How do you
make relevant recommendations to new users or for niche items
with limited data?
Speaker 2 (11:27):
Got it? So lots of challenges, but also lots of potential.
This article mentions how these systems are evolving from simple
collaborative filtering to complex AI powered models.
Speaker 3 (11:38):
Yeah, and they're being applied in all sorts of domains
from e commerce and entertainment to education and even healthcare.
Speaker 2 (11:45):
So the future of recommendation, it's, well, it's hard to predict,
but one thing's for sure. These systems are here to stay.
Speaker 3 (11:52):
They are, and it's up to us to ensure they're
used responsibly, ethically and in a way that benefits everyone. Definitely,
the next time an app suggests something, a website shows
you certain content, or you get an unexpected decision from
a company, ask yourself, what algorithm made this choice for me?
And would I have made the same one? Until next time,
(12:14):
remember awareness is the first step toward agency. Thanks for
listening to AI ethics in everyday life.