
February 3, 2024 24 mins

In the first episode of Season Four of the Inside Learning Podcast by the Learnovate Centre, we open the season by hosting renowned inclusive technology and responsible AI expert Phaedra Boinodiris. Known for her impactful work with IBM and the Future World Alliance, Phaedra delves into the pressing issue of inherent bias in AI technologies and how we can work towards their ethical use.

Phaedra offers insightful perspectives on the bias prevalent in AI, and she also envisages AI as a mirror that reflects our biases back to us, encouraging us to rectify the image we see. The conversation takes an exciting turn as we delve into her book, which makes the case for why ethical AI is an absolute necessity.

Throughout this discussion, we dissect real-life instances of AI misapplication leading to societal harm, underlining the importance of multidisciplinary teams to anticipate and mitigate such impacts. Phaedra convincingly argues not just for creating diverse and inclusive AI technologies, but also for using them ethically.

Join us in this conversation as we uncover the societal and technical challenges that AI poses, and how we can steer through them to maximize human potential. We address the critical necessity of revolutionising the teaching of AI in schools, extending it beyond computer science to disciplines like civics and social studies.

We conclude this episode by probing the current norms surrounding AI education, challenging the boundary between the humanities and engineering. Phaedra, the author of "AI for the Rest of Us", emphasizes the importance of humility and the capacity to 'unlearn' for the collective good. This episode ends with a fervent call for greater inclusivity and collaboration within the AI community.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
The Inside Learning Podcast is brought to you by the Learnovate Centre.
Learnovate's research explores the power of learning to unlock human potential.
Find out more about Learnovate's research on the science of learning and the
future of work at learnovatecentre.org.
Welcome to season four of the Inside Learning Podcast brought to you by the

(00:21):
Learnovate Centre here in Trinity College.
I have got a real treat for you to start off this new season.
Our guest is a fellow with the London based Royal Society of Arts.
She has focused on inclusion and technology since 1999.
She's currently the business transformation leader for IBM's responsible AI
consulting group and serves on the leadership team of IBM's Academy of Technology.

(00:45):
She is a co-founder of the Future World Alliance, a non-profit dedicated to
curating K-12 education in AI ethics.
In 2019, she won the United Nations
Woman of Influence in STEM and Inclusivity Award and was recognized by Women
in Games International as one of the top 100 women in the games industry as

(01:07):
she began one of the first scholarship programs in the United States for women
to pursue degrees in game design and development.
She is also the author of AI for the Rest of Us. She is Phaedra Boinodiris.
Welcome to the show. Oh, thank you so much for having me. I'm so pleased to be here.
Phaedra and I had this great opportunity to meet. Both of us were speaking at

(01:31):
an event in Austria last year in 2023.
And I found out a beautiful thing about her name. Her name means life.
And she'll be sick of me saying this. But I love this idea of Phaedra shedding
light on a very important topic, which is this idea of AI and bias.

(01:51):
And one of the things you said was, there's a lot of bias in AI,
because it's coded by people who have those biases inherent.
And I think that's something where we have to
have a little bit of empathy towards this entire industry as
it's going through this melting pot of a stage, this kind of real step
change in AI. So maybe we'll open up with this, one of the main topics of your

(02:14):
book. Thank you, first of all. And how funny that you remember our conversation
about light. I'm just hearing you say that again, and I can hear the sound of angels singing
as we shed light on the subject of AI ethics.
Or you walk into a room and there's butterflies and beautiful light.

(02:34):
That's hilarious. No, it's such an important topic.
And it all goes back to what we're trying to do.
What we're ultimately trying to do is have our own human values be reflected
in our technologies. Human values meaning we don't expect our technologies to lie to us.

(02:57):
We don't expect it to discriminate.
We expect it to be safe for us and for our children to use.
And it's incredibly important when you're thinking about AI to also be thinking
about what needs to happen in order to be able to trust it to appropriately

(03:19):
reflect those human values.
Because earning trust in AI is not strictly a technical problem with
a technical solution, but something that is socio-technical.
And anytime you talk about something socio-technical,
you know that you're really going to need a holistic approach.

(03:42):
And holistic meaning people, process, tools. People: what is the right organizational
culture that is required to curate AI responsibly?
What are the right AI governance processes, and what are the tools and AI engineering frameworks?
And of those three, as you've heard me say before, people is always the hardest,

(04:05):
always the hardest. And the reason why that is, is because of the nature of data itself, right?
If you think about it, data is an artifact of the human experience.
We humans, we make the data and the machines that make the data.
But as you said, we have over 188 biases and counting. So all of our data is

(04:26):
biased. All of it is biased.
And the trick is to recognize that AI is this wonderful mirror that reflects
our biases back towards us, but we have to be brave enough and introspective
enough to look into that mirror and decide,
do we like what we see?
Do we like what is being reflected back? Does the bias that's being reflected

(04:48):
back actually match my values or not?
If it doesn't, then you know you need to change.
And it's interesting because I ask people all the time, clients that I'm presenting to,
if they have had an experience where they have seen a biased output from an

(05:09):
AI model before, just ask them, like, have you ever experienced a biased output
from AI? And about half the people
say no or I'm not sure. And it's extremely important to give examples.
And the London Interdisciplinary School has just done a phenomenal job.
They generated a video a few months ago where they had a team of statisticians

(05:36):
who were prompting a generative transformer that was generating images.
And with generative AI, remember, there's no test-retest reliability.
So you can give it the same prompt a thousand times and get a thousand different pictures, right?
So this team of statisticians, they kept prompting this image generator with

(05:57):
questions like, show me an image of a typical CEO.
Show me an image of a typical doctor. Show me an image of a typical terrorist.
Show me an image of a typical prisoner.
And then these statisticians measured which gender, which race,
what skin color is being depicted in these pictures.
And they found tremendous representational harm.

(06:21):
For example, here in this country, within the United States,
39% of doctors are women.
And yet in this image generator, when you asked it a thousand times,
show me an image of a typical doctor, it only showed women less than 7% of the time.
So this is an example of how our bias, the data that is being used to train

(06:45):
these models, the biased data that is being used to train these models, can absolutely
lead to representational harm.
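For readers who want to see the shape of such an audit, here is a minimal sketch, not taken from the episode or the London Interdisciplinary School video: it assumes hypothetical `generate` and `classify` helpers supplied by the caller, and simply tallies how often women appear across repeated generations of the same prompt, compared against a real-world baseline such as the 39% figure mentioned above.

```python
# A minimal sketch (not from the episode) of the kind of audit described above:
# prompt an image model repeatedly with the same prompt, classify each output,
# and compare the share of women depicted against a real-world baseline.
# `generate` and `classify` are hypothetical callables, not real APIs.
from collections import Counter
from typing import Any, Callable, Dict

def audit_representation(prompt: str,
                         generate: Callable[[str], Any],
                         classify: Callable[[Any], str],
                         n_samples: int = 1000,
                         baseline_share_women: float = 0.39) -> Dict[str, float]:
    """Tally apparent gender across repeated generations for a single prompt."""
    counts = Counter(classify(generate(prompt)) for _ in range(n_samples))
    observed = counts["woman"] / n_samples
    return {
        "observed_share_women": observed,
        "baseline_share_women": baseline_share_women,
        "representation_gap": baseline_share_women - observed,
    }
```

With "a typical doctor" as the prompt, the gap Phaedra describes would show up as an observed share near 0.07 against the 0.39 baseline.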
One of the things you said, well, we spoke about it and you talk about it in your
book, is the importance of ethical AI and how even organizations that truly have
the very best of intentions
with respect to their use of AI can end up causing

(07:06):
both individual and societal harm. And
this extends to things like AI misapplications. For
example, you talked about one with welfare and
how AI was used to predict welfare fraud in the state of Michigan, and how this
absolutely got it wrong and so many people went through so much suffering as

(07:28):
a result. In that particular case study, that model to predict welfare fraud
ran for a period of three years with an accuracy score of less than 10%.
So, you know, there were numerous economically disadvantaged families that were
erroneously fined. And it just doesn't, it doesn't end there.
You know, there was a region in the country of Spain, they had procured an AI

(07:52):
model to predict the level of recidivism of domestic abusers.
And women were murdered due to, you know,
the algorithm saying, we're giving this concern a low score so the police
don't have to put a protective order in place.
Only after 19 women were murdered was an audit rolled out and done on the model.

(08:16):
And there were conversations about, was this really an appropriate use of AI?
Who was in the room when making that decision? Were there any victims of domestic abuse, for example?
And then they found all kinds of bias in the algorithm, including the assumption
that if a complaint or a concern came from a wealthy household,

(08:38):
that they could devalue the risk.
And again, it's organizations that truly have really good intentions,
even to the extent that they want to boost diversity or inclusivity,
or they want to lower domestic abuse in their region, or whatever the case might be.
But again, due to a lack of understanding on the nature of data and bias and

(09:03):
the appropriate guardrails and controls, they can end up causing unintended harm.
I lecture in Trinity College, and I show my students a clip from Terminator 2.
And in that clip, it shows the unintended consequences of the creator of the
neural net that becomes the Terminator years later on.

(09:25):
What this movie does actually is it zooms in on his private life.
And he's just there with his wife chatting away.
And she goes, Why is this thing so important to you?
Miles was the man's name. And he goes, because this can help society so much,
it means that we can have no more plane crashes, because the pilot never gets
tired or makes poor decisions, etc, etc.

(09:45):
And you can see what that movie was trying to do was so far ahead of its time.
And I say all that to say, oftentimes then, despite the
positive intentions of somebody like this, who creates some type of AI
or some AI application, it falls into not the wrong hands,

(10:06):
but it falls into powerful hands.
And one of the things you talk about is the importance of power dynamics.
You said that whenever we talk about artificial intelligence,
it's incredibly important to talk about power.
It's very, very true. Thinking about who is gaining power from the use of an AI model.

(10:31):
Whether the power is being given to a person, or power is being given over a person, is extremely, extremely important.
In particular, as you're talking about historically underserved communities,
this becomes even more critically important.

(10:52):
And it's amazing how often I will be having conversations with customers about
the use cases for their models.
And I ask them, well, what do you think the risk is for disparate impact for your use case?

(11:13):
And their response is,
Well, there are no risks for disparate impact because we have the best of intentions.
And they mean it so earnestly.
They mean it so earnestly.
And it's fascinating. I think it's why,
you know, as I speak about how we desperately need to understand this is a socio-technical

(11:40):
challenge and we need to be hyper-focused on culture.
And in particular, not only really insisting that the teams of individuals developing
AI models and the systems of governance around those AI models have to be far
more diverse and inclusive than they are today.
But that also those teams of individuals have to be multidisciplinary in nature.

(12:05):
And this is so that they can feel comfortable asking really difficult questions
about inequity, asking really difficult questions about disparate impact,
because far too often, the technologists, the practitioners that are developing
these models and making a determination on what data is being used to train these models,

(12:28):
they fundamentally do not understand disparate impact.
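For listeners unfamiliar with the term, disparate impact is often quantified as the ratio of favourable-outcome rates between a protected group and a reference group, and the widely used "four-fifths rule" flags ratios below 0.8. The sketch below is not from the episode; it just illustrates the arithmetic with made-up numbers.

```python
# A minimal sketch (not from the episode) of how disparate impact is commonly
# quantified: the rate of favourable outcomes for a protected group divided by
# the rate for a reference group. The "four-fifths rule" treats ratios below
# roughly 0.8 as a signal that a model's decisions warrant closer review.

def disparate_impact_ratio(protected_favourable: int, protected_total: int,
                           reference_favourable: int, reference_total: int) -> float:
    """Selection rate of the protected group divided by that of the reference group."""
    protected_rate = protected_favourable / protected_total
    reference_rate = reference_favourable / reference_total
    return protected_rate / reference_rate

# Illustrative, made-up numbers: 30 of 200 protected-group applicants approved
# versus 90 of 300 reference-group applicants -> 0.15 / 0.30 = 0.5, well below 0.8.
print(disparate_impact_ratio(30, 200, 90, 300))  # 0.5
```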
So it's why we need to speak far more strongly about how we teach AI in schools,
that it's not just limited to computer science classes and higher ed institutions,

(12:49):
but that we bring it, you know, into the civics class in middle school,
in high school, and really, you know, introduce it in such a way that explains
why we're talking about this in a multidisciplinary way.
Aidan McCullen: You know, you were saying almost like AI is a mirror of society.
And that show, if anybody watches it, it's on Netflix, Black Mirror.

(13:13):
That's what that's actually about. It's about this black mirror,
the nasty pieces of society that maybe we don't want to address, that are
so important to bring the light to, to bring the Phaedra to, because if we don't,
we may not be able to stop where it goes.
And we may create systems that are even way more unequal than they are today.

(13:36):
And I say all that to say that I had a brilliant lady on the show before, Joan C.
Williams, and she wrote a book called Bias Interrupted.
And she was saying that many leaders, many CEOs of really top-ranking companies
in the US, for example, have stay-at-home wives, the men have stay-at-home wives.

(13:58):
And they make decisions through that lens.
So many of their colleagues who have children, women, they have to do things
differently, but the man can't understand because he's seeing it purely through his lens.
And that, again, one of the things we talked about was that if you're

(14:21):
biased, it doesn't mean you're broken, it just means you're human.
And it's so, so difficult.
It's so difficult, because we bake those biases into the technology.
And then that becomes this black mirror. And it brings me to the solutions side of things.
So you're trying to interrupt us at a very young age, to help
people be able to, and even you work with a lot of ladies to get them into the

(14:46):
system, because that's one of the
antidotes to where we are today: to actually have more
women working in tech jobs. But they still have to deal
with those biases that have been marinating for decades, even nearly a century
in many ways. And it's not just women, it's not just, you know, gender, but
different races, ethnicities, and earnestly different lived world experiences.

(15:10):
People have different lived world experience.
Because today, there's a teeny tiny homogeneous group of human beings determining
what data sets to use to train our models.
And most people think, you know, the effort to develop AI is coding.
And it's not true. As we've said before, like well over 70% of the effort is

(15:30):
just picking the right data.
And if data is an artifact of the human experience, we need to be thinking,
Is this the appropriate data to be using that reflects all of these diverse
communities that we need to serve?
Are we working with domain experts to make sure? Is this the right data to use
to solve this particular problem?

(15:51):
And again, that's not a technical question; these aren't technical questions.
So like, for example, the example I was giving earlier about the London Interdisciplinary
School, where people will ask this image generator, you know,
what is the image of a typical doctor?
It's actually a philosophical question. How often should women be shown in those outputs?

(16:15):
Should it be exactly, you know, the percentage of women that's here in the US at 39%?
Should it be 50%, what we aspire to as a society? How should we have AI reflecting back to us?
Do you see what I'm saying? So again, a lot of this is philosophical.
So much of the work that needs to happen in education...

(16:38):
is, again, framing. Because, you know, I was on a panel saying,
you know, we need to be introducing the subject of data and AI ethics in schools.
And a dean sitting right next to me said, well, should that be in the School
of Humanities or the School of Engineering?
And it's like, that's the issue. Like, why are we categorizing this into an either or bucket?

(17:01):
Isn't that what got us into this mess to begin with?
I mean, can we not, can we not really be interdisciplinary?
What do we need to get there? And then how do we message to,
you know, the next generation that if you want to be a psychologist or sociologist
or anthropologist, are you interested in the law? Are you interested in design?

(17:23):
We desperately need you to be working in the field of AI.
You're interested in social justice? We desperately need you to be working in the field of AI.
So it's a lot about making room at the table for new kinds of brains to be able

(17:44):
to help contribute to the conversation.
The concept I always have is that
we've been trained to learn a trade or a skill set and then stay in our swim lane.
And that there's a bigger need than ever for us to explore beyond our swim lane
and explore not only the big swimming pool, but the actual ocean and actually

(18:08):
go out there and then cross pollinate the information, particularly when it comes to this.
So it's like playing chess and only seeing one perspective of the board.
And this brings you up and gives you this overview of the entire board,
which way things are going, how things are working, etc.
And as you say, we even then push AI into the technology departments within

(18:28):
universities or colleges, etc.
And you call out, on education and AI ethics, why aren't we teaching AI in social studies classes?
That's one of the places it ultimately belongs. That's right.
That's right. I was at a summit that was being hosted by a business school on

(18:51):
the subject of generative AI.
And all of the speakers up on stage were from the business school.
Now, this was a university that doesn't just have a business school.
They have a school of engineering, you know, with computer science, data science.
They have a school of government, school of public policy.
And everybody who was up on stage from that business school kept giving messaging

(19:13):
around if businesses, you know, take their time in adopting AI,
you know, they're going to miss the train.
They're going to miss out.
They're not going to blah, blah, blah, blah, blah. But they didn't breathe a
word about anything we're talking about.
Not about regulation, not about risk, not about AI literacy,

(19:37):
not about, like nothing else.
Not about empowerment or augmenting human intelligence and appropriate use cases.
And I asked the people putting on the summit, I said,
wait a minute, you have
a school of engineering with computer scientists and a data science curriculum.
You have a school of government that's informing governments around the world

(20:01):
on how to create public policy.
You've got a hospital.
You've got a school of health, public health.
Where are your other voices? And do you know what their answer was, Aidan?
They said, well, we don't have really good relationships with those others. Oh, God.
Human problems. That's it.

(20:24):
Oh, human problems. Back to the human problems.

Aidan McCullen, Ph.D. (20:27):
Maybe one of the last things we might land the ship on
today is the role of culture in AI development.
Because one of the things I love that you said, reminded me of an old Alvin
Toffler quote that we use a lot on the show is that the illiterate of the 21st
century won't be those who cannot read and write, but those who cannot
learn, unlearn and relearn again.

(20:49):
And you say that we have to approach the space with humility,
because as much as there is to learn,
there's loads that we have to unlearn, particularly the ways we used to do things
around here are no longer apt, no longer fit for purpose.
And this goes back to what we were just saying. What we need to unlearn is to stop

(21:11):
putting AI in the bucket of this must be taught in computer science class for coders.
What are we doing? Why would we do that?
Why would we do that? Why are we talking about AI in a way that doesn't leave
room at the table for all these other different kinds of people who have to

(21:34):
be part of the conversation? Right.
Why are we doing that? That is exactly an example of what it is we need to unlearn,
starting again with how we're communicating and how we're teaching people about this space.
Phaedra, what about our audience? We have a wide range of people who work
in a wide range of education, creating tools for education, creating reform in education.

(22:01):
What would be your message for our audience?
Remember what we said here today: you have a seat at this table,
irrespective of what your background is, simply because you have a different
lived world experience.
You've got different backgrounds in education, anthropology,
psychology, sociology, learning.

(22:22):
I mean, you are welcome here. We desperately need you. We need you to be part of this conversation.
And ways in which you can plug in right away and help us advocate is by helping
us push to make curriculums on the subject of data and AI ethics far more inclusive
and accessible than they are today, and bring them earlier into schools.

(22:42):
Like Aidan said, teaching it in social studies class, teaching it in civics
class, because ultimately that is where it belongs.
I mean, remember the definition of data.
If data is an artifact of the human experience, it is incumbent upon
us to make sure that our human values are being reflected in the technologies that we are creating.

(23:05):
And again, ultimately, it's not a technical challenge.
It's not a technical challenge strictly.
It has to be far more holistic than it is today.
Beautiful. And Phaedra, for people who want to find out more about you,
find out about your book, where is the best place to find you?
Oh, you can go to www.Phaedra.ai.

(23:27):
That's P-H-A-E-D-R-A.ai.
Thank you.
Thank you, Phaedra, for joining us at the Inside Learning Podcast brought to
you by the Learnovate Centre here in Trinity College, Dublin.
Author of AI for the Rest of Us, Phaedra Boinodiris, thank you for joining
us. My pleasure. This was fun.
Inside Learning is brought to you by the Learnovate Centre in Trinity College,

(23:51):
Dublin. Learnovate is funded by Enterprise Ireland and IDA Ireland.
Visit learnovatecentre.org to find out more about our research on the science
of learning and the future of work.