
May 2, 2024 35 mins

Transforming Society with AI: Ethics, Education, and the Future of Work

This episode of the Inside Learning Podcast, presented by the Learnovate Centre, features an insightful discussion with Nell Watson, an expert on AI ethics and author of Taming the Machine. 

The conversation delves into recent advancements in AI and their transformative potential for society and economies, emphasizing the need for ethical leadership and responsible AI development. Watson discusses the concept of a 'Sputnik moment' for AI, highlighting the rapid progress in AI capabilities and the challenges in steering this technology to align with human values and ethics. The episode also explores AI's impact on education and the workforce, suggesting a shift towards developing critical thinking and creative skills amid an AI-dominated future. Additionally, the dialogue touches on the idea of AI-driven corporations and the potential of universal basic income supported by AI dividends. The session concludes by discussing the importance of continuous learning and adaptation in navigating the future shaped by AI.

00:00 Welcome to Inside Learning: Exploring the Power of Learning
00:15 The Transformative Power of AI: A New Era
01:07 Ethical AI: Steering the Future Responsibly
01:29 Introducing Nell Watson: AI Ethicist and Author
02:11 AI's Impact on Society: A Conversation with Nell Watson
05:53 AI Ethics and Safety: Navigating the Challenges
14:30 AI in Education: Shaping the Future of Learning
25:09 AI in the Workplace: Algorithmic Management and Its Implications
30:40 The Future of Work and Education in the AI Era
34:25 Closing Thoughts and Resources

The links mentioned in the episode:

https://tamingthemachine.com/

https://www.nellwatson.com/

 


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
The Inside Learning Podcast is brought to you by the Learnovate Centre.
Learnovate's research explores the power of learning to unlock human potential.
Find out more about Learnovate's research on the science of learning and the
future of work at learnovatecentre.org.
AI is continuously transforming our world, unfurling capabilities with unprecedented

(00:20):
accessibility and flexibility.
With nothing more than simple words, we can tap into an incredible variety of
services of an ever improving quality.
Today's adaptable AI models enable robots and systems to respond seamlessly
to human commands across various real world applications.

(00:40):
Agent-based models can think step by step to collaborate with other versions
of themselves, and even to form AI communities.
We are at a pivotal Sputnik moment set to reshape global society and economies
through AI's transformative power.
It's necessary to take stock of where the recent developments have taken us

(01:02):
and to deliberately choose where we want to go from here.
A responsible future for AI requires vision, foresight and courageous leadership
that upholds ethical integrity even when tempted to act otherwise.
Such ethical conduct fosters trust. And trust is the linchpin that holds our

(01:23):
societal structures together and builds branding power.
That is an excerpt from a brilliant new book, Taming the Machine,
Ethically Harness the Power of AI.
The book is by a researcher, writer, speaker, and applied tech ethicist.
She is president of the European Responsible AI Office, an AI expert at Singularity

(01:46):
University, and pioneers global standards as an AI ethics certification maestro at IEEE.
She advises leading organizations on their machine learning strategies specializing
in AI research and ethical advocacy.
And she is the author of that new book, Taming the Machine.
Nell Watson, you're very welcome to the show. It's a pleasure to join you today.

(02:08):
I thought we'd start with that
timely moment, this Sputnik moment that I mentioned in the introduction.
You have a quote at the start by Jaan Tallinn, and I love this quote.
I'm just going to use this to tee you up to bring us whichever way you like.
Building advanced AI is like launching a rocket.
The first challenge is to maximize acceleration, but once it starts picking

(02:30):
up speed, you also need to focus on steering. That's absolutely where we are today.
It's strange to think that only about 16 months ago, the whole ChatGPT Sputnik moment arose.
And that's where, of course, the power of AI was thrust into the public consciousness.
But it had been bubbling for some time. Many of us in that space had been waiting

(02:53):
for the breakthrough moment when most people would wake up and realize just how far we'd come.
And we're about to go much further because there's this next phase of AI,
agentic models, which are able to form plans, sophisticated plans with many
different sub-steps within them.

(03:14):
They're able to take action upon those plans and indeed to delegate those tasks
to other versions of themselves,
to work together in an ensemble with some of them taking on different specialist roles and even skills.
And this means that AI is about to become a lot more autonomous.

(03:34):
We've gotten used to this idea of interacting with AI systems using natural
language, just talking to them as we might a human being.
But soon we're going to be emancipating them to go off and work on problems
independently to figure out the best way of solving a problem that we're trying to work on,

(03:55):
whether it's planning an event or planning a trip or figuring out the best way
to persuade people to try a new product, for example.
However, as with that wonderful quote from Jaan Tallinn, once we've gotten things
up to speed, we definitely need to learn how to steer them. And we're still in that phase.

(04:18):
And it's about to become a lot more difficult with these agentic models,
because we have to start teaching them about our values, our preferences,
our boundaries, no less, if they are able to take action on our behalf, right?
If these systems are going to act as concierges, possibly even ambassadors for ourselves,

(04:43):
they need to understand how to do things in a way that pleases us,
in a way that fulfills the mission and doesn't come short of it.
Taking shortcuts that we don't want, for example, might even be dangerous shortcuts.
And equally, we don't want these systems to go off and spend all day on one
single task, polishing a desk until all the varnish has been taken off it,

(05:08):
because obviously that's not what we're looking for.
Suppose that we are planning a picnic.
We don't want the system to decide that a thimble of tap water and a cracker
each for everyone is going to fulfill that mission.
We also don't want that system to plan a nine-course dinner either.

(05:31):
Either of those is not in the sweet spot of what we're looking for.
Similarly, if that picnic is for your local mosque or synagogue,
ordering ham sandwiches for everyone is not going to be pleasant for them either.
The same goes if there are a lot of vegetarians and vegans in your midst,
or gluten-free folks also.

(05:53):
And that's why it's so important now that we learn about not just AI ethics,
which is about how we use technologies and try to make sure that AI technologies
are transparent, that we can understand what's going on in them,
that there's a minimal amount of disproportionate biases, etc.

(06:13):
But that also we look at the safety of these systems, their ability to align with our desires,
our expectations, and to understand and interpret our values,
to be able to read the room in whatever culture or situation or schema it happens

(06:34):
to be in, whether it's you yourself late at night having some existential question,
whether it's in some big board meeting or indeed dealing with a client.
We need systems which are able to understand how to fit in mercurially with human society.
And that's an immense challenge. It's a very difficult process,

(06:56):
somewhat akin to rearing a child or indeed training a pet to behave in ways that we wish.
Except that this child or pet may soon potentially eclipse human capabilities, right?
In many ways, we are like teenage mothers giving birth to a demon.

(07:18):
And we are obliged to raise it as an angel. And that's not going to be easy for us.
And that's why we need to think very carefully on how we use AI and how we can
better fit it into our lives and into the future of humanity itself.
I often think about the show Black Mirror that's on Netflix,

(07:40):
brilliant show that actually does hold up this black mirror to society.
And listening to your talks and reading the book, it came to mind that when
AI learns more and more about the decisions we make, about how we think,
etc., it's going to become that black mirror.
It's going to eventually realize there's a lot of stuff we do that isn't so great.

(08:02):
I thought we'd just mention this. And for our audience, what we're going to
talk about are two main things today.
AI meets HR and AI meets education and the benefits of those.
And if it's okay with you now, also, I'd love you to share how you got to be
one of the world's experts in AI ethics and a researcher in AI,
because I think you're such a great role model for many

(08:25):
people out there, to get there and to understand how
one gets there as well, because it's going
to become such a valuable role as we advance through AI in the future. So with
that, maybe we'll start with this black mirror, because you do shine a light on
that, so to speak, in your talks. Absolutely. In many ways, machines are going

(08:47):
to be not just trying to understand us so they can fit in,
but they're also going to understand our little foibles,
our peccadilloes, the little aspects of our personalities,
of our civilization and its various mores.
And some of those interpretations may not be generous to us or may embarrass us in some ways.

(09:11):
We've seen how the revelations of Copernicus and Galileo, that the Earth orbits
the Sun and not the opposite way, changed our view of the universe.
It made us feel a bit less special, that perhaps we weren't the most important

(09:31):
thing in the universe after all.
And similarly, Darwin's revelations about evolution have also made us realize
that we weren't formed perfect as little homunculi of the divine necessarily,
not in the way that we conceived anyway.
But that we had scrambled up from murky depths over a very long period of time,

(09:57):
and through considerable suffering, no doubt, across the eons.
And of course, that in itself is a beautiful and an incredible,
magnificent thing as well, that we are indeed the formation,
therefore, of countless millions of selection events, and I think there's beauty in that too.
But sometimes when we have ingested a belief and that belief is questioned,

(10:21):
that can be a painful thing.
It's an opportunity for growth, but it's also an opportunity that people can
take to get angry because the world has changed.
It's moved in a direction they didn't like, and perhaps it's made them feel uncomfortable.
And I wonder if AI may make us feel uncomfortable about some of of our beliefs

(10:42):
about the world, the things we consider to be okay or not okay,
AI may change our perspectives on.
Indeed, just as different philosophers or religious leaders have come up with
new sets of morals over time, AI systems may do something similar.
They may find new ways of interacting which are more equitable or indeed lead

(11:07):
to better forms of cooperation between people.
And I think that's perhaps likely to create a bit of a schism in society as
some people update to the new rules, having witnessed that they are indeed pretty
good ways of living, and other people decide,
no, thank you, I'll keep the values I have. Cheers all the same.

(11:29):
So it's going to be an interesting turnabout where at some point we,
we stop putting values into machines and they perhaps start putting values into us.
One of the ways you talk about that happening is the idea that
if we have a legion of nanobots throughout our bodies, in a way we can create a hive mind.

(11:51):
And I thought about the utopian view of that, where a lot of people won't
like the data privacy of that,
but at the same time, we can transcend things like death as we know it.
We can live on with our ancestry.
We can have a huge depth of experience from previous generations,

(12:11):
but also we could possibly solve huge thorny problems that we're faced with
as a planet. Absolutely.
Our smartphones and the internet have become a third hemisphere of our brain in many ways.
And I think that connection is going to increase further,
not just through augmented reality glasses and things like that,

(12:32):
but eventually AI systems, which are a literal co-pilot that are able to exist
within our bodies and to connect to our external and internal senses so that
they understand our experiences from what we see and feel and hear,
but also what we feel inside.

(12:53):
And I guess that will help machines to understand us on a very intimate level.
And indeed, perhaps, if a machine has all of our memories and all of our feelings
about our memories, then that would present a reasonable facsimile, perhaps,
of a human being to some degree once that person themselves has passed on from this world.

(13:16):
So another element that we can do then is that we can potentially radically
improve empathy between people as well.
If we could, for example, feel our impact on others very directly,
then we would understand the consequences of our actions in that moment.

(13:37):
Just how much those words we said, which we didn't necessarily mean,
had stung that person, right?
But if we sing a beautiful song, make a meal for someone when they're feeling
sick, that kind of thing, we could also feel the joy that we give to others.
And indeed, perhaps the thing to do in life would be to create joy for other

(14:01):
people, and there would be no profit in being wicked to each other.
Those are the kinds of ways in which our society may evolve in the next 50 years or so.
Which is a lovely segue for AI and education,
I think, because one of the great realizations, I suppose over the last 50 years

(14:23):
or so was Howard Gardner's idea of multiple intelligences that there's more than IQ in the world.
And I feel greatly that AI can, because of that empathetic aspect,
we can understand how different people learn, et cetera.
And I thought maybe you'll bring us through what you're seeing out there because
not only are you a researcher and ethicist, but you're also,

(14:46):
you also had a startup, and you understand where
things are going. You share a wide array of
resources where people can learn about machine learning, etc.
So you've studied this deeply, and there's no better
person to tell us about where you see the benefits of AI in the education realm.
Yeah, it was a heck of a journey for myself, waking up to the power of AI around

(15:10):
about 2012, 2013, when I met Professor Jeremy Howard at Singularity University,
who was talking about deep learning,
which was this very new technology at the time, which enabled us not just to
find patterns within data, as we do with machine learning in general,
but to find patterns within patterns.

(15:33):
And that enabled these systems to make very deep inferences and predictions
and understandings of things that were never possible before.
And at the time, I had a very difficult technical problem I was working on,
which was 3D body measurement using nothing more than two photographs from a camera.

(15:54):
That company is still going today. But we had a very difficult problem,
which was cutting the person out of the background because you don't want to
be measuring wallpaper behind someone, etc.
That's a problem in machine vision called image segmentation.
Myself and my team had created this very complex algorithm to try to

(16:17):
cut people out of the image, and we could get the head to fit, but the crotch would break;
one arm would fit, a foot would not. It worked perfectly 15% of the time,
which was extremely frustrating, because of course we could see the promise was
there, but it was unfulfilled.
However having learned about deep learning, we were able to use a machine vision

(16:39):
technique called convolutional neural networks,
where we took about a thousand photo edited before and after images of what
we were looking for, basically silhouetting the person out.
And we fed that into the machine and it worked flawlessly, basically immediately.
And so this problem, which had stumped us for over a year was solved overnight.
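The core idea behind that breakthrough, a convolutional filter sliding over an image to classify each pixel as person or background, can be illustrated with a toy sketch. This is not the company's actual system: the kernel here is hand-set, whereas in a real convolutional neural network the filter weights are learned from labeled before/after pairs like the thousand silhouetted images Nell describes.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the basic operation
    performed by each filter in a convolutional layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "photo": a bright 3x3 person silhouette on a dark background.
image = np.zeros((7, 7))
image[2:5, 2:5] = 1.0

# One hand-set 3x3 averaging filter; a trained network would learn
# many such filters (and deeper layers of "patterns within patterns").
kernel = np.ones((3, 3)) / 9.0

response = conv2d(image, kernel)            # 5x5 map of local evidence
mask = (response > 0.5).astype(int)         # per-pixel person/background call
```

Thresholding the filter response gives a crude segmentation mask; a real deep network replaces the single fixed kernel with stacked learned layers, which is what made the problem tractable overnight.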

(17:01):
And that's when I realized the power of these techniques and indeed what they
were beginning to evolve into.
Over the years since then, I of course became an evangelist for this amazing
new set of technologies,
but increasingly also became quite concerned about the ways in which these technologies could be used,

(17:23):
sometimes by bad actors or sometimes by honest people just forming misapprehensions
about how best to use these systems.
Their strengths, and indeed their limitations.
And so that's why for many years now I've been working with organizations such
as the IEEE to develop new standards and certifications for AI ethics and AI safety,

(17:48):
so that we can help more people to use systems in ways that are safer and less
likely to cause various issues and scandals within society.
When it comes to education, I think that we're going to have to reconsider how we educate people.

(18:11):
We've entered a world where facts are cheap because we can find them out there on the internet.
But we still have to ascertain whether those facts are correct,
or indeed whether there are different nuances to those facts that perhaps we might be missing.
The context to those facts might have been de-linked for some reason or other.

(18:34):
And indeed, we're now entering also a world where generating things is very cheap and fast.
Where we can, you know, take a few bullet points and turn it into an essay without too much difficulty,
where we can generate all kinds of content, whether it's videos or images or

(18:56):
3D worlds, and edit things trivially also.
We can pinpoint the very specific thing that we would like to change or like
to animate, etc., and have these systems go forth and do that.
And until recently, it was only the province of Hollywood and maybe state
intelligence services that had access to those kinds of capabilities.

(19:18):
And now any one of us can pick those up for a trivial amount in a subscription per month.
And so it means that we need to be training people less about how to do things
and more about how to make sense out of things,

(19:39):
how to curate more than to create, per se.
I could analogize it to the old factory model of schooling,
where, you know, in a world where we still manufactured a lot of things in the
West, which we do less so, I'd like to see us maybe do more so again.

(20:02):
But in a world where we manufactured things, many people were seen as interchangeable
elements, a little bit like the machines in which they operated.
And so the schools became the same way, right?
Regimented rows of students all learning the same content, the same way. But...
That creates people who can cook, people who can follow instructions,

(20:29):
people who can implement something, but not be able to see the bigger picture necessarily,
not with the training they've been given in any case.
What we need to be doing is not training cooks, but rather training chefs,
because a chef is able to put together elements in an original manner.

(20:50):
They're able to curate an experience for someone, not just cooking a grilled cheese sandwich,
but actually putting together an experience which is meaningful for them,
where there is the matter of presentation,
where there is the ambience of where someone is eating.

(21:11):
There's the story which comes with food.
If you go to a really good restaurant, the chef or the waiter is going to come
out and explain what you're eating and all the elements that go into it and
why it's special and interesting. And that's an important part of that experience.
And so by training people in how to curate, how to collaborate with other people

(21:35):
to share ideas and to work together in empathy,
how to understand very complex problems and all of their interlinkages,
and indeed to think in a critical and also creative manner.
These are the chief skills of the 21st century.
It's not so much about teaching people how to do things, so much as teaching

(22:00):
how to build their character so that when they hit a roadblock,
they don't give up, but they keep on pushing forward, forward even when things
might be uncomfortable.
In our world, which is changing so quickly, it's imperative that we keep learning,

(22:21):
that we don't have this idea that we just go to school and our minds are filled
with stuff and we leave at some point, but that we keep reinventing ourselves,
right? We need to be like Madonna.
We need to keep coming back with a different version of ourselves.
And to do that, we need to have the character attributes of being comfortable with failure,

(22:47):
of being comfortable with being a novice, because that's generally not a nice place to be.
We like to stay in our lane. We like to know the things we're good at and keep doing those.
And generally speaking, it makes sense to do that.
But in a world of incredible disruption, we are probably going to have to jump

(23:09):
lanes a few times, right?
To change different focuses in our careers or jump between careers even.
And to do that, we will need to be in an uncomfortable place of being a novice.
And it's not easy to try learning a new thing and be surrounded by people who
are so much better than you because of course they have a lot more experience, right?

(23:32):
And so it's easy to give up in that moment and lose heart.
And I think that's something that young children have that as adults we often tend to lose, right?
The young child playing the violin, making a horrible sound,
isn't self-conscious about it.
And so she keeps on playing until she gets good, whereas an adult will often give up.

(23:56):
And that's why we need to be able to teach
this aspect of character, so that people
will keep reinventing themselves and
have the courage to do so. Beautiful. That's such a beautiful message, and not
just for children. I'm going to tell my kids to be more like a chef than
a cook, and make sure they don't get me wrong and don't take it literally, like

(24:19):
an AI might, but also so that many adults do as well.
And this is a nice way to introduce the idea of AI and HR because AI will totally
revolutionize or reinvent that realm of the world as well.
I'd love you to give us a high level of what's happening there.
Absolutely. We are seeing an era of algorithmic management, of machines beginning

(24:47):
to take over a lot of those sorts of middle management or line management functions.
Of observing people doing their thing on the job, of setting people's shifts,
of perhaps even strongly contributing towards hiring and firing decisions.
Indeed, sometimes already some companies have people doing deliveries and there

(25:12):
are several cameras in their van and they are
observing the efficiency of that worker and indeed looking for potential infractions.
However, sometimes they get it wrong. Sometimes somebody might just happen to
scratch their neck, but the system decides that they are talking on their phone

(25:34):
and writes them up and possibly even docks their pay.
And it's actually very difficult to challenge that, right?
To challenge the decisions or predictions of these systems.
And indeed, when these models go very wrong, as we've seen in,
for example, the Horizon Post

(25:54):
Office scandal, which was a very simplistic system by modern standards.
But still, people trusted it too much.
And we tend to do this because we observe the system working pretty well,
maybe 95% of the time, but we forget that one time in 20, that machine is going

(26:15):
to make a misapprehension about something.
And when you're dealing with potentially somebody's fate, whether it's in health
care or the judicial system or financial systems, etc., or indeed employment,
those outcomes can be catastrophic if we don't provide enough oversight and

(26:36):
supervision and double-checking of the system's work.
We saw in the Horizon Post Office scandal, of course, that there were hundreds
of unsafe convictions and dozens of people wrongfully sent to jail for years,
many of whom had to sell their homes to pay off debts that weren't theirs, because
they'd been falsely lumbered with these fraud charges.

(27:00):
Marriages were broken up. At least three people took their own lives and never
saw their vindication, which of course took years for many people.
And I'm sure there's still a lot more to come out about that particular scandal.
Unfortunately, this keeps happening. We saw something similar with the Dutch

(27:21):
child benefit scandal, whereby people whose first nationality was not Dutch,
even if they had become naturalized Dutch citizens,
were really given the third degree and were even threatened to have their children
taken off them, etc., when they were by and large generally innocent.
And it caused such a furore that the Dutch government actually collapsed as a result.

(27:43):
Same thing has happened in Denmark, in Michigan, in Australia, many other places.
We have not yet learned this lesson of not trusting machines too much,
not trusting their judgments, always verifying that these machine decisions
are safe and reasonable.

(28:04):
And I think, unfortunately, there will be further tragedies before we begin
to really internalize that lesson.
However, we do have some examples in the past of learning similar difficult lessons.
The air travel sector in the 1940s and 50s went through a similar phase of many

(28:27):
different tragedies in short succession.
It was an exciting time, but it was also a scary time to be in the skies.
Thankfully we learned quite quickly from those tragedies. We developed new logging
systems such as the flight recorders and cockpit voice recorders,
which told us about what was going on in the system and indeed in the human

(28:50):
beings interacting with it.
We developed new rules such as sterile cockpits, etc.
And pretty soon, air travel became statistically the safest way to travel from A to B.
And so I think we'll go through a similar phase. The short to mid-term with
AI will be a little bit hairy, a bit of a rocky road.

(29:11):
But so long as we keep learning, keep adapting, and earnestly move to using
best practices as they emerge,
with the help of standards, certifications, and well-reasoned legislation,
I think that after a time things will shake out and we'll be able

(29:33):
to use AI in a way that we can begin to trust and more and more consciously
and less cautiously integrate into our personal and professional lives.
Now, many of our audience may be concerned about what the future of work looks
like if there's work, if we're on UBI, universal basic income.

(29:56):
And you hinted at some of this yourself. So I'd love you to share you as an
example, how you manage to piece together education, the mindset you approach that education with.
You're an entrepreneur. You hinted at this through the idea of learning to get
back up when you fall, reinventing yourself, consistent learning, et cetera.

(30:18):
But I'd love you to just give a quick thought to those people who are worried
about the education of the future and the work of the future.
I think that we're definitely going to still need plumbers.
We're going to still need people to do plastering and care work and a range

(30:39):
of different activities that require
fine craft skill and require a lot of empathy for other human beings.
There are a lot of roles that we don't necessarily wish to give to machines,
even if they could do them, because it would potentially not be very dignified, right?

(31:00):
Like having an AI preside over your funeral, I think most people would find
that to be an undignified use of the technology.
So some roles are definitely going to be safe from being usurped by AIs anytime soon, I would imagine.
In terms of universal basic income, I think that I have some doubts around the

(31:23):
thermodynamic feasibility of that.
I think that it'll be very difficult to pay a lot of people from a portion of
human wages earned by other people.
However, there is an alternative, I think, that could work.
Because these AI agents, which are able to form a kind of an ensemble to work

(31:45):
together, are pretty similar in many ways to a corporation, right?
And indeed, it's possible to create virtual corporations of AI agents where
we have an engineering department, a quality assurance department,
marketing department, etc.
And all of these agents coordinate to create a product, whether that's a movie script or a video game.
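That coordination pattern, specialist agents organized like departments and handing work along, can be sketched minimally. Everything here is illustrative: the `Agent.work` method is a stand-in for a call to a language model prompted for that department's role, and the names are invented, not from the episode.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Agent:
    """One specialist in the virtual corporation, e.g. engineering or QA."""
    role: str

    def work(self, artifact: str) -> str:
        # Stand-in for invoking an LLM with a role-specific prompt;
        # here we just record which department touched the artifact.
        return f"{artifact} -> {self.role}"

def run_pipeline(brief: str, agents: List[Agent]) -> str:
    """Pass the product brief through each department in turn."""
    artifact = brief
    for agent in agents:
        artifact = agent.work(artifact)
    return artifact

company = [Agent("engineering"), Agent("qa"), Agent("marketing")]
result = run_pipeline("video-game brief", company)
```

Real agentic frameworks add feedback loops, delegation, and shared memory on top of this linear hand-off, but the department-like division of roles is the structural idea Nell points to.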

(32:08):
And so if these AI corporations are maybe even competing in the free market
with human corporations,
that's going to be an interesting development, especially because some of those
AI corporations will be hiring human beings to go and do legwork for them, right?
Things that machines necessarily aren't best at.

(32:30):
I think that there's an opportunity, therefore, for machine-driven corporations
to provide dividends to human beings as a form of universal basic income.
I suspect that in decades to come, the majority of our economy will actually
be driven by these AI-managed companies,

(32:50):
because they will gradually out-compete a lot of the human ones.
But these AI-controlled companies will provide us with dividends based
upon their provisions in the market,
and that we can perhaps use those to augment our income to support our lifestyles.
I think that's possibly going to be one way that we can help people to transition

(33:14):
towards potentially doing some other things with their lives,
as AI increasingly takes over in many, but not all, sectors. Brilliant.
Now, and Nell, for people who are interested in getting you to come to work
with their organization, to speak at their events, where's the best place to find you?

(33:34):
I have a little website at nellwatson.com; that's november, echo, lima, lima, watson.com.
I put some articles and resources up on there. And of course,
you may find my book, Taming the Machine, of interest, and you'll find info on
that at tamingthemachine.com.
And also I found a nice animation that's due to be released today,

(33:56):
the day of the release of this podcast as well.
And I'll link to that and I'll link indeed to your website.
For now, author of Taming the Machine, Ethically Harness the Power of AI,
Nell Watson. Thank you for joining us.
Thank you. Thanks for joining us on Inside Learning. Inside Learning is brought
to you by the Learnovate Centre in Trinity College, Dublin.

(34:18):
Learnovate is funded by Enterprise Ireland and IDA Ireland. Visit learnovatecentre.org
to find out more about our research on the science of learning and the future of work.