
November 20, 2024 43 mins

In this episode of the Conflict Tipping podcast, host Laura May welcomes Dr Rachel Adams, founder and CEO of the Global Center on AI Governance and author of The New Empire of AI: The Future of Global Inequality. Rachel’s book explores how AI is reshaping global inequalities and examines its historical ties to colonialism. Together, Laura and Rachel explore the complexities of AI governance, the AI divide, and the ethical challenges facing emerging technologies.

Key Highlights:

  • [00:00:00] Rachel’s journey into AI and governance: From her PhD on transparency and surveillance to becoming a global thought leader on AI governance, Rachel shares her professional journey.

  • [00:05:10] Why isn't AI transparent?: What makes AI systems so complex and why transparency in AI remains a critical and elusive goal.

  • [00:08:16] AI, inequality, and colonialism: How AI’s development and supply chains echo historical patterns of extraction and exploitation, and its disproportionate impact on the Global South.

  • [00:18:21] The AI divide: Examining the stark disparities in access to AI technologies and their benefits, and the resulting social and economic inequalities.

  • [00:23:26] Who does the work, and where?: Exploring the human cost of AI production, from data labelling to e-waste, and the economic challenges for workers in the Global South.

  • [00:28:36] AI governance and policy-makers: The need for international regulation, capacity-building in the Global South/Global Majority Countries, and empowering oversight institutions to create fairer systems.

  • [00:36:35] What can we do to help?: Concrete steps for individuals to support more equitable AI development and the importance of raising awareness about AI’s impact on global inequality.

  • [00:40:53] Where to learn more?: Connect with Rachel, and buy her book!

Key Takeaway: Dr Rachel Adams argues that AI’s inequalities cannot be fully understood without recognising the history of colonialism that created the conditions for them.


Episode Transcript

Laura May (00:10):
Hello, and welcome to the Conflict Tipping podcast from Mediate.com, the podcast that explores social conflict and what we can do about it. I'm your host, Laura May. And today I am very excited to have with me Dr. Rachel Adams, a global expert on responsible AI and the founder and CEO of the Global Center on AI Governance.

(00:32):
Rachel's new book, The New Empire of AI: The Future of Global Inequality, takes a deep dive into the impact of AI on global inequality and offers insights into how we can address these pressing issues. So welcome, Rachel.

Rachel Adams (00:47):
Hi, Laura.
Great to be here.
Thank you for having me.

Laura May (00:51):
So great to have you here. I've wrangled you here from LinkedIn and I'm very happy to be talking. But before we dig into your book, I actually understand that you did a PhD, and I want to ask you a little bit about your journey. So what was your PhD about?

Rachel Adams (01:07):
Sure. Gosh, so my professional journey is that I started out working for the South African Human Rights Commission here in South Africa. So I have a background: my master's was in international human rights law, and I was leading their work on information rights. So access to information, privacy, freedom of expression, which were the

(01:31):
rights that were kind of first impacted directly by digital technologies. And when I was studying and supporting their work on access to information, this idea of transparency was becoming a really, really big and important idea. And from an academic perspective, I was concerned that we were only hearing positive talk about transparency.

(01:58):
We need to make the state transparent. We need to make corporations transparent. And then our economies will be stronger and democracy will be greater and there will be no corruption. And it was this very easy narrative, and I wanted to explore that and make it more complex, because it was clear to me that this rise of the idea of transparency was going alongside the rise of surveillance and

(02:24):
the making transparent of individuals to the state and to actors of power. Yet that wasn't really acknowledged within these big ideas and discussions around transparency. And the other idea I was concerned with was that this idea of transparency comes out of Western Enlightenment thinking.

(02:45):
So I was interested in a philosophy of transparency and what this idea meant in non-Western contexts, and how it connected with this idea of whiteness and making visible, making clear and pure and making white. So there were some tensions that I was really interested in exploring. And so that was, it feels a really long time ago now.

(03:06):
I think everyone says that their PhD was like ages ago and it doesn't really hold much relevance to what they do now, but it definitely gave me a very critical basis for exploring these big ideas that seem to be so wonderful, but hold tensions and dissonance within them.

Laura May (03:26):
That is absolutely fascinating. Like, what an interesting idea. And I suppose you must've been working on that in the early stages of legislation such as GDPR in the EU, when there started to be protections around privacy. And so it must've been a fascinating moment to be in this sort of field of transparency and privacy and rights and how it all fits together.

Rachel Adams (03:47):
It was, it was a really, really interesting time. And I published a book called Transparency: New Trajectories in Law that's based off my thesis. But the work on transparency led me into AI, because transparency became this kind of key ethical principle with which some of the big questions about AI were going to be solved, but it also

(04:13):
showed the limitations of transparency. So originally the idea was that we could make something completely transparent: if we had good record keeping and these records were published, people could know exactly what was happening and exactly what was going on. And then AI came along, and there was this thing of the black box, and a human will never be able to completely know.

(04:33):
So it totally turned this idea of transparency on its head. But at the same time, there was this renewed emphasis that we need transparency around how these systems work, when these systems are going to be used, and who might be affected. And so this transparency issue became even more interesting and complex.

(04:56):
And then AI was just such an interesting idea from all these different perspectives that, from an intellectual basis, it just really intrigued me and led me to all the work I'm doing now.

Laura May (05:10):
Absolutely. And so just for those who aren't totally familiar with how AI works, I mean, could you explain a little bit more about why AI itself is not transparent? You mentioned this black box. What is the box?

Rachel Adams (05:24):
Yeah. Well, I think probably first things first: I'm not a technologist; I am a legal scholar and a researcher and a policy advisor. But artificial intelligence relies on and processes huge, huge amounts of data in
order to discern patterns within it.

(05:46):
And then it might provide recommendations based on that, like when we have recommendation systems on Netflix or YouTube that provide personalized recommendations based on your browser history or what you've liked before, or it can produce new content. So that's generative AI, large language models: ChatGPT, Llama, Claude and so

(06:10):
forth, which then take the recommendation systems to a new level and produce new audio, video and text content based on a kind of synthesis of what they've been exposed to in their training data. But these systems are highly, highly complex.

(06:33):
And part of the reason that AI is so valuable is it can work at a scale and a pace that humans can't. So its kind of whole purpose and value is that it's able to do things that humans are not. But because of that, and because of the complex layers in which it does its analysis as it moves through what are called neural networks, where often

(06:57):
there are millions of connections layered on top of each other. And at that size, there's simply no way of knowing what particular input of data led to what particular output. It's just too big and too complex. So we call this the black box: that kind of decision-making area within an AI system that's just too

(07:22):
vast and too complex for humans ever to fully know and understand.
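
To make that black box a little more concrete, here is a minimal sketch of a toy neural network in Python with NumPy. It is an illustration added alongside the transcript, not code from the episode or from any real product: even in a network this small, where every one of its 48 weights is fully visible, there is no simple answer to which input or which weight made the output what it is. Production models work the same way, just with billions of parameters.

import numpy as np

# A toy two-layer feed-forward network (illustrative only).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # layer 1: 4 input features -> 8 hidden units
W2 = rng.normal(size=(8, 2))   # layer 2: 8 hidden units -> 2 outputs

def forward(x):
    # Every input is mixed with every weight, then passed through a
    # non-linearity. Even after one such layer, no single weight
    # "explains" the result; models like ChatGPT stack many layers
    # with billions of weights, which is the black box described here.
    hidden = np.tanh(x @ W1)
    return hidden @ W2

x = np.array([0.5, -1.0, 0.3, 2.0])  # one example input
print(forward(x))                    # why this particular output? the
                                     # answer is spread across all 48 weights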

Laura May (07:30):
So just to put that into concrete terms: my Instagram feed, the reels that are shown to me, are decided by some mysterious algorithm somewhere, somehow. And I guess because of my behaviors on Instagram before, it's decided that Laura will like this kind of reel. And the kind of reel that it's decided I will enjoy is Catholic priests

(07:52):
proselytising, but with memes. So, yeah, I mean, I don't know what has led to this behavior, but you know what, Instagram, you're right. These are fantastic. And that's the kind of thing: we can't answer what made the AI behind Instagram think, oh, Laura will enjoy these hilarious priest moments. But it was right.

(08:12):
So thanks so much for that explanation. I think that was really useful. And so then, moving on in some ways from transparency: you've mentioned this complexity and that all of this data is being used. How does that relate to inequality as a central theme in your latest book?

(08:33):
'Cause you're overachieving.

Rachel Adams (08:36):
So, with the question of inequality: first of all, inequalities happen at multiple levels and in multiple intersecting ways. So I think sometimes we talk about inequality in a very easy way when in fact it's a very, very complex phenomenon and idea.

(08:58):
And part of what the book is trying to argue is that we cannot understand some of the really deep-seated inequalities that this world of AI, of which we are now a part, is both producing and reproducing without understanding the history of colonialism, which created the conditions that produced the most

(09:23):
critical inequalities that we are faced with today, which are racial inequalities and inequalities between countries. There are also huge inequalities regarding gender that are more culturally specific, that happened for different historical reasons in different parts of the world, but these are also intertwined with racial inequalities.

(09:48):
And so I'm interested in: how does the history of colonialism, the history that has produced the conditions that have allowed a lot of these global inequalities to manifest and AI to take advantage of them, help us better understand the problem in front of us and therefore

(10:10):
address it in a more sustainable way?

Laura May (10:16):
So let's dig into that then. For you and in the book, what are the most obvious implications of colonialism for AI and the development of AI?

Rachel Adams (10:28):
Yeah, well, I think one of the clearer ones is around how AI systems are built. So the supply chains and the value chains, right from the extraction of the rare earth minerals and materials needed to build electric cars or build computer chips

(10:48):
or build the kind of global hardware through which AI systems navigate. This depends very, very much on Africa and many parts of the so-called Global South. So in order to have these systems that are making life better for some, making life

(11:11):
really, really good for some people, with huge economic upturns, people really enjoying Alexa being able to help their children change channels easily, or order their shopping or whatever it might be, they're dependent on people in places across the Global South in ways that mimic the history of colonialism.

(11:34):
So what we used to have in colonialism was the extraction of resources from places like the Congo, which were then immediately shipped to be manufactured in other parts of the world. So the real economic value is in the manufacturing and the refinement of these minerals to create things from them. And there's a whole set of skills and industries that surround

(11:57):
that refinement process and the more complex industrialization beyond extraction itself. And during colonialism, what happened was that the extraction took place in parts of Africa or places in South America, and then it went for refinement to areas in Europe or America, where these industries are more complex and they grow

(12:19):
and they're able to then go on and create cars and computer chips and whatever. And so we're having that again, where places like the DRC are being ravaged and completely exploited in terms of extracting different rare earth minerals. And the DRC is the richest country in the world from this

(12:41):
kind of mineral perspective, yet it's one of the poorest: 90 percent of the country is living in extreme poverty. So it's contributing in major ways. The Global South, that is, previously colonized places, are contributing enormously to the production of these systems that are not serving them, in a way that just reproduces past inequalities.

(13:05):
And this is not just the supply chain as a starting point, but at an end point as well. So, e-wastage: when we've decided we've had enough of our smartphone and we want another upgrade, or our iPad no longer works, well, where do they all go? They get dumped in places across the Global South, creating these kinds of

(13:25):
electronic wastelands that produce huge environmental damage and cause real health problems for local communities. So right through to the end of the cycle as well. And then there are all sorts of labor issues within that. So, AI systems are really complex to produce: all the data that's

(13:46):
needed to train them needs to be sorted and labeled by humans. So when you click onto a website and you have to prove you're not a robot by labeling pictures of motorbikes or traffic lights, it's somebody spending months and months doing that kind of work, which is menial work and would drive anyone crazy.

(14:08):
But it might be much worse than that. It might be labeling images of child pornography or things that are really harrowing or sensitive. And much of this labor is taking place in vulnerable communities, so not just across the Global South, but in prisons or refugee camps, where people are desperate for some kind of livelihood

(14:30):
and so have little choice but to take whatever labor is available to them. So that kind of indentured labor, that kind of exploitation, is very connected to the histories of colonialism, not just in terms of where it takes place, but in the predatory extraction that takes from huge numbers of people in

(14:53):
order to serve a very, very small elite.
And then I think one of the really clear ones is around racial bias. So we know these systems are racially biased. They're also biased in terms of gender. And so that intersection means that, within the African continent for example, these systems pose the highest risk to African women.

(15:18):
But addressing this is a very complex thing, because it requires building data sets that include more data about these people. And oftentimes, in order to do that, you're going to be violating privacy rights or exploiting people again through the extraction of their data in order

(15:38):
to make your systems more accurate.
So it's a really complex system. But one of the things that I was always struck by is just the size of big tech. I mean, these are the biggest companies in the world. They are companies that are more powerful and have far more money than many countries.

(16:01):
And so we have to be thinking about the kind of imperialism that they hold simply by their size and by the scale of their operations. So that was part of what I wanted to explore and tease out in the book as well.

Laura May (16:19):
Absolutely. And I really liked that point you made at the end there. Because when we do think about colonialism, we think about it in terms of countries, or empires and countries, like Belgium and the Congo, the UK and everywhere, but we don't really think about it in terms of companies colonizing areas and populations. And that's kind of what you've just said, right?

(16:39):
That these technology companies are effectively the new empires in some way, the new extractive colonial powers.

Rachel Adams (16:48):
They are, they are. I think there are still important connections to European and Western power. And Western power was built on the land and the resources and the people that it took from the Global South through colonialism. So we also have to recognize that these big tech companies that are largely

(17:12):
in the States were made possible historically by conditions of colonialism that supported the rise of the West. But it's also important to remember that colonialism was an economic venture. You know, it was the Dutch East India Company that colonized South Africa

(17:33):
and other parts of the world. So it was companies, and it was the establishment of capitalism and global capitalism; a key motivating factor behind the establishment of empire was creating new markets. And it was about creating lands that could contribute to the supply

(17:53):
chains and the value chains of whatever products and services were being sold out of colonial powers. And so we're seeing a little bit of that again, those kinds of colonial supply and value chains. So I think there are a number of different ways in which this AI empire manifests and

(18:17):
connects to earlier forms of colonialism.

Laura May (18:21):
As well as this particular heritage of colonialism and how it relates to AI today, potentially exacerbating inequalities,
you talk about the AI divide.
What is the AI divide?

Rachel Adams (18:35):
Yeah, the AI divide is becoming more of a mainstream issue; I'm seeing it being talked about more and more. We had this idea of the digital divide, which was how we had half of the world connected to digital technologies. And when I say half of the world, it would be like 98 percent of Europe or

(18:58):
parts of Europe, and then considerably less in other parts of the world. So in South Africa, about 60 percent of the country is connected to the internet. But across many African countries it's far less, and that internet connection is often shared. So you might have one internet-connecting device within any given household.

(19:21):
So just think of the politics around that. It might not be stable, and the cost of getting internet access, of buying data for your phone, is much, much higher. So if you're in South Africa or Namibia, you're paying more to access the internet than you are in other parts of the world, and particularly the West.
(19:41):
There's a kind of gendered digital divide as well. So within Africa particularly, men have increasingly more access to digital technologies and the internet than women do. And I think this is very much because now the internet has become a tool
for social and economic mobility.

(20:02):
So it's not just about being able to give somebody a call, but about job opportunities and connections and so forth. So, that was the digital divide. Some people are getting access, some people are not, or are getting unstable access, and this divide is really falling along the lines of this global inequality.

(20:24):
The AI divide is the digital divide taken to an extreme. So it's about who has access to the benefits of AI technologies. Many people in the West have access to Alexa or to Siri, or to a government service that's using AI technologies in order to be able to better retrieve your

(20:49):
records and give you a better service, or you have Netflix, which recommends things to you, or a number of other things. And then many people, particularly in the Global South, do not have those luxuries, and for them to get these luxuries requires not just a phone, but a smartphone.

(21:10):
And when we're talking about a household, or sometimes even a village, sharing one phone, what on earth does your personalized LinkedIn or Instagram feed look like in that context? And what particular harms might show up there? So for example, if you're sharing a phone and there's a young girl that's

(21:33):
having questions about her sexuality or her menstrual health or something really intimate and personal, and she uses the internet on the phone, and then somebody else comes along and can see that browsing history, or is then fed other content that's similar, it creates all sorts of issues. So these are some of the things that we mean by the AI divide, but it's

(21:55):
also at a big structural level too. To produce AI systems, to produce something like ChatGPT, is only possible if you are in particular parts of the world and you have access to a particular set of resources, millions and billions of dollars.

(22:16):
So in Africa, for example, we have people that are as talented as the OpenAI folks, who could build those kinds of systems, but simply do not have access to the kind of compute resources that they would need to build something like that. Or at a lower level, you might have ChatGPT.

(22:39):
It costs 20 dollars a month if you want to subscribe to the upgraded version, and 20 dollars a month might mean nothing to somebody in the West, but to somebody in Kenya or Uganda it could be a massive proportion of your monthly income. And so it's just not feasible for you to be able to use something that might

(23:01):
be life-changing, or to have access to it.
So this is what we mean by the AI divide.

Laura May (23:10):
And I suppose, even just thinking about what you said in terms of there being people who are just as talented as the OpenAI guys and girls (there are some girls in there somewhere, hopefully), I suppose that even where they are, there's not the resources to create something similar locally. And of course, then they could be poached by international companies and leave the country.

(23:30):
And then that's also intellectual resources which are departing or emigrating, because, I mean, of course you would do that.

Rachel Adams (23:37):
Yeah, absolutely, absolutely. So I think we're just becoming more conscious that AI is contributing to this set of divisions within society. And part of the task we have is to understand, well, what are the implications of these growing divisions?

(23:59):
And part of what I'm concerned about is that it's going to make life quite untenable for people across the Global South, if AI is now driving this kind of race to the bottom in terms of human labour, with work becoming onshored back to places across the West or the

(24:20):
well-developed parts of the East. And so the kind of cheap-labor call centers, that kind of thing, that was being offered across the Global South is no longer needed, because an AI can do it. What does this mean for people across the Global South who may not have many options in terms of how to earn livelihoods? I think it creates

(24:43):
conditions for nationalism to rise. We've seen this with recommender systems and the spread of misinformation or disinformation; I think it's creating the conditions for extremism. It's making the motivation to migrate to other parts of the world, where economic opportunities are more available, even stronger.

(25:06):
Ultimately, everyone will be affected by this growing divide, and so we all need to become more aware of that.

Laura May (25:18):
And I guess the thing that really doesn't make sense in all of that is the whole idea of having to move to another place to do this kind of digital work. And I mean, of course there are infrastructure issues in terms of access to good-quality internet. But beyond that, there's no reason people wouldn't be able to do this wherever they're from. We've proven that we can do remote work.

Rachel Adams (25:39):
Yeah, but there's the question of building the AI systems; we want that to be able to happen anywhere. And I think there's increasingly more attention from donors, philanthropy organizations and international development organizations to the fact that we need to be funding the establishment of compute infrastructure across the Global South

(25:59):
in order to address some of these AI divide challenges at a fundamental level. But the digital work opportunities that are available, and there are a lot of them, are just not enough to provide people with a meaningful livelihood. So people do these tasking jobs on platforms like Mechanical Turk, where people

(26:22):
can register to do little bits of work, but they might need to spend three or four hours understanding the task and researching around it in order to be able to respond to it, and then they get paid one or two dollars for doing that task. So nobody is making a living in a way that's going to lift somebody out

(26:44):
of poverty from much of the tasking micro-work that this new digital economy is creating in different parts of the world.

Laura May (26:55):
No, and I remember that in the academic context as well. I mean, Mechanical Turk is used quite a bit to do research. People put their surveys on there and they'll pay you a small amount to finish the survey. And yet I was shocked that people would really race to the bottom there as well and pay the minimum. I mean, when I had research participants, like, full disclosure, I paid UK minimum wage.

(27:18):
And that was one of the higher amounts on there, which to me was absolutely wild. It's like, you just really don't value these people. And it always really shocked me that especially people doing critical kinds of research would just pay the minimum, but of course then there are structural constraints about your funding, where you get that funding from, and how you can actually pay for this. And so it just really highlights that this extraction is happening

(27:41):
from so many different industries.
And it seems to all be going in one direction.

Rachel Adams (27:46):
Yeah. And I think that one of the other things with this kind of work is that much of it is coming from the big corporations, who are outsourcing aspects of data labeling within this broader AI value and supply chain: sorting data, getting it ready for AI systems to train on.

(28:08):
But the people that then do that work don't know what they're doing it for. They don't know who they're doing it for. They don't know how it connects to this bigger system. So there's this critical disenfranchisement that's taking place, where people are doing this work without knowing what it's for, and they're not connected to other colleagues, and it takes out some of

(28:32):
the humanness and dignity of work.

Laura May (28:37):
If you had a magic wand, what would you do to help resolve this particular type of inequality when it comes to labor extraction?

Rachel Adams (28:49):
Yeah, I mean, I've been working with policymakers on this issue and on a number of different issues, and I think it's not an easy thing to fix, because some of these issues go beyond AI itself into data governance, labor issues, data value chain issues.

(29:10):
And so having a good AI policy that covers some of these things is not necessarily going to fix it. And we also have jurisdictional issues. So even if Kenya put in place a great law to ensure that contract workers, or workers on platform economies or gig economies, had good rights, it's hard to ensure that this

(29:35):
is adhered to, respected, and complied with when everything's happening at a cross-jurisdictional, international level. So I think part of it is that we may need international regulation on this issue. And then one really important lever, I think, is that we need to be building capacity and ensuring well-resourced, independent oversight institutions

(30:01):
within countries in the Global South: the labor authorities within Kenya or other parts of the world, human rights commissions, data protection authorities, competition commissions. I think we underestimate the really important legal interpretation, standard setting, protection mandates, and monitoring

(30:28):
roles that these institutions play. Going forward, I'd love to see more support, resources, and capacity building happening there.

Laura May (30:41):
And is that in any way related to what you do at the
Global Center on AI Governance?

Rachel Adams (30:46):
Yeah, it very much is. So part of our work is exploring the whole ecosystem around AI governance. So it's exploring: what is the research that we need in order to develop evidence-led policy solutions and policy innovations? How do we monitor the efficacy of different policy choices?

(31:10):
At the moment, we don't really know what the best way to regulate these technologies is, so we need the research in order to be able to understand and map that. And then we do training and capacity development across the ecosystem, building the capacity of these independent oversight institutions.

(31:31):
We work with policymakers. We've just launched two new courses, one on AI ethics and policy with the University of Cape Town, and another on AI and human rights in Africa with the University of Pretoria. So we're really keen to support the development of capacity on AI governance in a way that's locally appropriate, across Africa in particular.

Laura May (31:57):
And I mean, you've mentioned policymakers a few times. I understand you've served on a bunch of expert committees. You've worked with UNESCO, you've worked with the Bill and Melinda Gates Foundation, because you are a very fancy and wonderful human being. And so how has all of this work influenced your perspectives on AI or AI governance?

Rachel Adams (32:16):
Yeah. I think it's allowed me to understand just how complex this space is. I think as a researcher, for a long time I thought all we need to do is really understand the issue well, and if we have the right research that allows us to understand what tailor-made policy solutions or regulatory

(32:42):
responses are needed, that would be that. But actually, it's far, far more complex than that. One of the really important things that I've learned is that communities and citizens need to be wanting some kind of policy or regulatory response. There needs to be some kind of demand for change, because policymakers

(33:06):
are most interested in responding to the demands of their citizens and representing their citizens. And yes, there are going to be some issues that we need to regulate that people don't care too much about, but ultimately, people are going to be affected. And I think one key thing is understanding how policy decisions are made and what influences

(33:28):
those decisions beyond research, and also how we bring more people into these conversations: how do these active, critical publics, critical voices, critical investigative journalism, filmmaking, help people understand these things and have a sense of why this is important?

(33:51):
So that's been one of my key takeaways: that we need to really expand this conversation, bring more people in, and understand what people's perceptions of and attitudes towards artificial intelligence are, which we know fairly well in the West because we have surveys, but we don't have those

(34:11):
public perception and public attitude surveys across much of Africa. And so we don't have a sense of, you know, what are people most worried about? How are people understanding these kinds of issues? What kind of language are people talking about advanced digital technologies in? And we need to bring that into how we respond.

(34:34):
You know, I come from the critical theory, postcolonialism, post-structuralism, and feminist schools of thought, and so I've always approached things very critically. But working with lots of different actors, from innovators to philanthropists, to governments, to the African Union, to the Googles and the Metas,

(34:56):
everyone wants to do the right thing. Everyone wants to create a happy world where everybody benefits. The intention is not to just ensure that a few white men are really, really rich billionaires and everybody else is just kind of serving them in some feudalistic new world order.

(35:19):
And so what I'm really interested in is how we find that common ground, where there is that goodwill and good intention, in order to build this new world order in a way that is significantly more equitable and just.

(35:42):
I think we've made some strides. I feel like, if you'd asked me that question, or if I'd talked about that, a couple of years ago, I would have been like, you know, we're really in different places: governments want one thing and our big tech companies are advocating for something else. And we're dealing with this terrible US-China division. And innovators are not being understood.

(36:03):
But innovators at the same time don't understand this, that, and the other. I think there's more understanding and more common ground now. And what we need to do is really, really build on that and think about the kind of multi-stakeholder alliances that are needed, because no one actor is going to champion this. It's going to require us all to figure out how we can build this and work together.

Laura May (36:30):
Incredible. I feel like you should run for president somewhere. That was a really good speech. Anyway, for those who are listening as individuals and have been fired up by everything you've spoken about: beyond buying your book so that they can get to grips with it a bit more, what concrete steps can they take as individuals to help move us

(36:51):
in the direction we need to get in?

Rachel Adams (36:53):
Yeah. I mean, I think this is a very wide issue. You know, we're talking about trying to build more equitable innovation systems, where more people have access to the technologies needed to build more relevant AI systems that are locally driven. We're talking about a set of technologies that are almost inherently biased

(37:19):
against people of colour, women, gender minorities, and vulnerable groups. We're talking about a really powerful tool that people and companies and actors in positions of power know they can leverage to their advantage. And, you know, there are crucial examples of AI being used in military domains in

(37:41):
ways that are very, very concerning. So this is a huge, huge issue; it's not a single idea of AI and the problems associated with it. So, I know it sounds like not much, but really understanding more, being more aware and more conscious, and having more conversations about these

(38:04):
things is really, really fundamental. More people need to be concerned so that collectively we start building more ideas about how we do it differently, so that people start building alternatives to Netflix, or alternative recommender systems, or alternative technologies, and provide people with market choices,

(38:25):
so they're not locked into Google or locked into iPhones. But it's also maybe about having more transparency, to go back to that, about the supply chains and value chains of AI. Maybe if somebody knew that their smartphone had cost the lives of children in the DRC to extract the minerals needed to build it, they might think twice.

(38:49):
Or if you knew, when you're going to throw this phone in the bin, where it's going to end up. So I think that if people had more awareness and more understanding of these things, and more conversations with more people, people would make different decisions and then demand things from their MPs differently.

(39:10):
So not just how the UK or other countries get ahead in terms of AI and maintain that competitive edge and triumph over China and Russia, whatever it might be; let's also think about the implications for the Global South, because if we don't, they're going to come knocking at our door and we're going to feel them before we've had a chance to anticipate them.

(39:34):
So, yeah, I know, it's not a silver bullet. And I really think that the book is the start of a conversation. Everyone wants a nice, neat answer, and there are some tentative answers in the book, but it's really about inviting more people to care and be part of this conversation.

Laura May (39:55):
Mm. You just reminded me of the Fairphone as well. I always keep an eye on them because they're dedicated to, like, sustainable supply chains or what have you, but they're also, like, triple the price. I mean, I get why people don't buy them. I try to hold onto phones for at least five years, but yeah, when people are like, oh, I need to upgrade every year, I'm like, please stop.

(40:16):
Please stop.

Rachel Adams (40:17):
Yeah. But there are behavioral economics pushing people to do that, and if we had tighter controls over marketing and big tech, I think that would be a really important start. And I think the sustainability conversation is important, but very often it's bracketed around environmentalism, and we need to care about the people as well as the environment.

(40:40):
So it's about ensuring that there's a strong sense of justice as well as sustainability. Are these technologies not just sustainably produced, but produced in a way that's just?

Laura May (40:53):
Totally.
Okay.
And so for those who are interested in learning more about you, your work, and the book, where can they find you?

Rachel Adams (41:03):
Well, I'm on LinkedIn and on X, and then our center, the Global Center on AI Governance, is at globalcenter.ai, 'center' with an 'er', where you can find much more information about the African Observatory on Responsible AI and the Global Index on Responsible AI. And you can contact me.

Laura May (41:25):
And I'm assuming your book is available from all major retailers.

Rachel Adams (41:29):
Yes, I've seen it online with various retailers. It's been published with Polity Press, but it's widely available online and in stores from the end of November in the UK, and then from early January 2025 in North America. And there's an audiobook too, on Audible, I believe.

Laura May (41:49):
I do love an audiobook.

Rachel Adams (41:50):
Yeah, me too.

Laura May (41:52):
And so when people go out and they read your book, what is the key message you hope they'll take away from it?

Rachel Adams (42:00):
I think people understand that artificial intelligence is producing inequalities. I don't think that there's wide recognition that we cannot understand the production of those inequalities without looking at the history of colonialism, the history that produced many of these inequalities.

(42:21):
So that's the key message: if we want to understand the conditions out of which AI has arisen and the new implications it poses for our society, we cannot forget the history of colonialism and the divided world it created.

Laura May (42:38):
Awesome.
Well, look, thanks again so much, Rachel, for joining today. And for everyone else, we'll see you next time on the Conflict Tipping podcast from Mediate.com.