Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to Digitally Curious, a podcast to help you navigate the future of AI and beyond. Your host is world-renowned futurist and author of Digitally Curious, Andrew Grill.
Speaker 2 (00:14):
Today on the show, I'm joined by Kerry Sheehan, an award-winning AI policy and strategy expert with extensive experience in responsible AI implementation, governance and ethics. As a qualified machine learning developer and former strategic advisor to the Alan Turing Institute, she brings a unique perspective to AI ethics and guardrails. Her work spans the UK government, the BSI AI Standards
(00:37):
Group and various sectors, including health tech and business.
Speaker 3 (00:41):
Welcome, Kerry.
Brilliant. Thank you. Good to see you, Andrew, and great to be here.
Speaker 2 (00:46):
Now, your career path is fascinating. It's evolved from journalism and PR to becoming a recognised AI policy expert and machine learning specialist. Perhaps you could share the journey that led you from journalism to AI ethics and guardrails.
Speaker 3 (01:00):
Yeah, I guess on the face of it it does sound a bit of an odd one. How do you go from the creative side to the data side? The two often don't collide, although everyone has to have the data, the math skills, the analytical skills now. So I started in journalism and PR, where storytelling and public trust are everything.
(01:20):
But I was always curious about what happens behind the scenes. And even just very quickly going back, it seems too many years now, they were looking at automating certain parts of the news. How do you automate email inboxes? Because they're just getting swamped day in, day out. And that has led, you know, many news outlets today to build full-blown data platforms and AI-led news.
(01:43):
You see AI-led news reports now. So that's what sparked my interest in talking to some of the techies and some of the journalists, more on the tech side of things, going back those years. So it's more about, you know, who controls the narrative, who gets left behind, what's really going on behind the scenes, you know, in those worlds.
(02:04):
And that led me to the data side and eventually machine learning.
The turning point came when I realised these tools weren't just technical. They were actually shaping lives. You know, whether that's the narrative that we hear in the media, right down to decisions that are made on you and I and everyone else now, day in, day out. And that's when I kind of thought, oh, what is this all about? You know, ethics and governance, and it led me from there to go
(02:25):
through some really steep learning curves, particularly getting picked up by a lady called Dame Wendy Hall, who I've got all the time in the world for. She picked me up and basically said, put your money where your mouth is. You need to go on a tough learning curve if you really want to be credible. And that's what I did.
Speaker 2 (02:42):
So for our non-technical listeners, what are AI guardrails and why are they important?
Speaker 3 (02:48):
Well, we often hear about AI guardrails. So I was thinking about this: how do you best position it for a general lay person that isn't as much into the technical detail as some of us out there? I think, if you think of AI guardrails, they're like the bumpers in the bowling alley. You know, I still do use them sometimes, which can be quite
(03:14):
fun. It's not glamorous in that sense, but they do stop things from going completely off track. Technically, they're limits and safeguards built into AI systems to keep them from causing harm, discriminating, or making decisions we can't understand or reverse. But I quite like that bowling alley analogy.
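To make the bowling-alley analogy concrete, here is a minimal sketch of what guardrails can look like in code: a toy wrapper that adds an input check and an output clamp around a model call. The stand-in model, the blocked-topic list and the 0 to 1 score range are all invented for illustration rather than taken from anything described in the episode.

```python
# Toy illustration of "bumpers in the bowling alley": simple pre- and
# post-checks around a model call so results can't go completely off track.
BLOCKED_TOPICS = {"medical diagnosis", "legal ruling"}  # assumed policy list


def model(prompt: str) -> float:
    """Stand-in for a real model: pretend it returns a risk score."""
    return len(prompt) / 100.0


def guarded_call(prompt: str) -> str:
    # Input guardrail: refuse requests the system was never designed for.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Refused: this request needs a qualified human, not the model."
    # Output guardrail: keep the result inside a sane, explainable range.
    score = max(0.0, min(1.0, model(prompt)))
    return f"Risk score: {score:.2f} (clamped to the 0-1 range)"


print(guarded_call("Please give me a medical diagnosis"))
print(guarded_call("Summarise this customer email"))
```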
Speaker 2 (03:30):
The other one would be the bumper cars: when you're in a dodgem car it's helping you not bump into the other people. So I think in this brave new world of AI, regulations and standards are going to be so important. I'm an engineer, as you're probably aware, so when we can pick our phone up and we arrive in France and it just works, that's because of standards, basically having
(03:52):
standards that work, the GSM standards. So the same is going to have to apply to AI, so that it works around the world, but also so it works for us, not against us. So, in your role as an advisor to the British Standards Institution's AI Standards Group, what principles have guided the development of global AI standards, and how do these standards serve as ethical guardrails for AI development?
Speaker 3 (04:12):
Yeah, I mean, first of all I'd like to say, you know, a big well done to standards-making bodies like the BSI in the UK, which is really starting to lead the way on some of the AI standards development within the UK and across the world. It's not easy by any means to get to consensus across the world, and it really is about fostering good practice.
(04:32):
So people may or may not be aware, you know, of some of these standards. They don't stand up in a court of law, but they can be used in a court of law, and hopefully one day good practice will become best practice. So standards are really rooted, you know, in areas of AI such as fairness, accountability, transparency and particularly human
(04:54):
oversight.
There's a really good BSI standard, the AI management standard 42001. That's etched in my brain now, and that is the case because it really is, you know, a good checklist for making sure you are doing the right thing and on the right side. And ultimately, standards provide basically a shared language for responsible development, which is what we're
(05:17):
all trying to get to, and not to slow down innovation, because that's not what anyone wants to do, but to try and give it some structure. As we said, what are the guardrails that need to be in place? It gives architects both creativity and building codes, I think, in a sense. So I am a keen advocate of them.
Speaker 2 (05:38):
Now, you spent some time at the Alan Turing Institute. Let's just step back. Alan Turing, famously, back in 1950, wrote a paper whose first line asked, can machines think? Could you maybe give us a bit more colour about what the Alan Turing Institute is about and what your work entailed there, working on ethical considerations at a data and AI
(06:07):
institution?
Speaker 3 (06:08):
It's obviously grounded in the principles of Alan Turing, who is widely believed to be the forefather of AI. It is a research and academic institute as well, and it's leading the way in, again, making sure governance and ethics are the priorities, but also going into some really interesting research areas that will help drive, you know, the UK and economies
(06:28):
globally forwards, while making sure, again, we are staying on the right side. So I was approached to be part of a strategic advisory group when the Alan Turing Institute was developing its latest five-year strategy a couple of years ago. I was very privileged to be able to do that with some very esteemed people, just to provide insights and challenge ideas
(06:52):
as to what the Alan Turing Institute should be focusing on going forwards. It's a very pivotal time for many of these organisations out there. As we know, the pace of AI has increased exponentially over the past couple of years, and it looks like it will continue to. So how do we really enable the ATI to be forward-thinking,
(07:14):
support innovation and be a central hub for, you know, ethics and governance, whilst not losing pace with what's going on out there? And again, one of the hardest challenges that you'll always face in these positions is sort of balancing bold innovation, because that's where we all want to go. We often hear it talked about: there can be
(07:35):
a bit of a dichotomy there with long-term strategy. The short term is innovation, let's get going, versus is this really in the longer-term interests of people? So, in a nutshell, I helped shape, you know, parts of the strategy around explainability and inclusion, working with others and supporting them to develop some of the challenge areas, which are still ongoing today.
Speaker 2 (07:55):
So just to pick up on one thing you mentioned there: you talked about explainability. Now, I learned about this term a couple of years ago in talking to some of my AI experts on the podcast. Again, for a non-technical audience, what is explainability and why is it so important?
Speaker 3 (08:07):
So it's really important that, you know, if anyone is using an AI system, say you've got the end users of that system, they can understand enough of how that system has made decisions about them. So, for example, you apply for a mortgage or a banking service and you get turned down, and there's no real explanation for it.
(08:28):
You think, oh, I was all right on one, two, three, maybe not four. And then you call them up, or you email them, or you go on live chat, and I've actually done this as an experiment: can you explain to me how you've made this decision? And they say no, it's just our criteria, it's commercially sensitive, we won't tell you. So that is one end. People need to understand that, because how can we put in place
(08:50):
effective challenge mechanisms for consumers, customers, people, citizens, and redress mechanisms, if we don't understand it? And the other side is obviously making sure that the tech developers are fully putting into those AI life cycles all the various touch points. What exactly is this algorithm doing?
(09:11):
Now, that can get quite complicated for certain audiences when you're talking about national security and things, but at a general level, we need to understand how these decisions are being made, what algorithms are being used, are they the right ones, and was the data the right data? Was there ultimately any bias or discrimination caused? And do you even know you're interacting with AI?
(09:34):
That's going to be a big thing going forwards.
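Kerry's mortgage example is essentially a call for decision-level explanations. As a rough illustration of what that can look like in practice, here is a minimal sketch using a simple logistic regression, where each feature's contribution to an individual decision can be read straight from the model. The feature names and synthetic data are invented, and a real lender would more likely use a dedicated explainability library such as SHAP; this is only a sketch of the idea.

```python
# A minimal, illustrative sketch of decision-level explainability for a
# hypothetical loan-approval model. Feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income", "debt_ratio", "years_at_address", "missed_payments"]

# Synthetic training data standing in for historical lending decisions.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))
y = (X[:, 0] - X[:, 1] - 2 * X[:, 3] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)


def explain_decision(applicant: np.ndarray) -> None:
    """Print each feature's contribution to this applicant's score."""
    contributions = model.coef_[0] * applicant
    decision = "approved" if model.predict(applicant.reshape(1, -1))[0] else "declined"
    print(f"Decision: {decision}")
    for name, value in sorted(zip(FEATURES, contributions), key=lambda p: p[1]):
        print(f"  {name:>18}: {value:+.2f}")


explain_decision(np.array([0.2, 1.5, -0.3, 2.0]))  # a declined-looking applicant
```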
Speaker 2 (09:36):
So there's been a lot of work in the EU around the EU AI Act, and I talk about this a lot because it's probably one of the first bits of regulation that's actually found its way from the decision room out into the real world. How would you characterise the current UK approach to AI regulation, and how effective is this principles-based approach?
Speaker 3 (09:56):
So the UK's approach to AI regulation is based on the five AI principles. They are the principles of the previous government, and that has now moved forward into the current AI Opportunities Action Plan, which is really drilling down into those areas and
(10:18):
stating, you know, how they will support innovation and economic growth, and how they will support uptake of AI across many, many sectors, if not the whole economy. The principles-based approach, and it's not my approach, but I have a good understanding of it, is really there to support innovation. So when you're talking about regulation of AI in the UK, you
(10:42):
know, many regulators have been and are working up their approach to regulation, whether that's via calls for input, consultations, or putting out guidance to industry, again to foster that good practice going forwards. It's always based on outcomes. So those principles say to companies, sectors and professions:
(11:04):
we want you to use AI, use AI responsibly and sensibly, use AI for whatever your interests may be, be inclusive, but ultimately make sure you have fair, transparent, explainable and appropriate outcomes for people. So it's not too restrictive. It really is an enabler of innovation.
Speaker 2 (11:26):
So I love that you've mentioned the word actionable a few times, because it's in my brand: I call myself the Actionable Futurist, as everyone knows. So what frameworks are most effective when actually implementing ethical AI in organisations? How do you go about deploying ethical AI in an organisation that's been running for 10 or 15 years?
Speaker 3 (11:44):
If you think about ethical AI, some of it is an extension of basic good practice: governance, risk governance, good business decisions, and looking at those possible impacts or unintended consequences, whether that's to the business, to the people that work for you or with you, the communities that you serve, or society at large,
(12:07):
but with the added extension of having that real fundamental AI understanding and the outcomes and things like that. I mean, there are lots of frameworks popping up out there, and lots of guidance pieces popping up out there, so I'd always recommend having a look at them, whether that's
(12:30):
OECD, again, british StandardsInstitute has got its 42001
management standard which actsas a framework for things that
are considered good practice tohave in place.
But one which I also really likeis the outer framework, the
assessment list for trustworthyAI and impact, any impact
(12:51):
assessments that are tied toreal user outcomes.
So again, it's how do we movefrom theory to practice to be
practicable, to actually help,you know, organizations and
business move forwards?
But if I'm thinking about whatis the most effective thing here
, you know frameworks I meandiverse teams asking
(13:11):
uncomfortable questions early, for example. That's where real responsibility starts, and that can also help you to shape an agile framework. You can have the founding principles of your framework in place, but it depends on what systems, tools and programs you're developing or even buying, because it's all about your understanding as well as your internal development.
(13:32):
Whichever path you choose, or you may choose both, that's where real responsibility starts.
Speaker 2 (13:38):
Let's just touch on diversity for a moment there, because what I understood from the research a few years ago was that diversity is so important when you're actually standing up AI teams. In the past, I think there was famously a Google or an HP team that did an image recognition system that wasn't able to recognise people of colour, because the people developing it weren't people of colour.
(13:59):
So how important is diversity of thought, diversity of skill and thinking, diversity of all types when you actually run an AI project, so that the people developing it have a very wide range of diversity and you're not getting that bias in the model?
Speaker 3 (14:17):
This is the vital question. This is the ultimate number one on the checklist for me. I mean, I have been heard to say a few times, you know, if the teams building the AI systems or coming up with the AI ideas aren't diverse and, ultimately, if they don't represent those
(14:37):
that the end results are going to serve, it's not ethical, and we should be saying no, going back to the drawing board. I mean, that's hypothetically; how do we do that realistically? So it's about bias mitigation approaches, and I like to think of bias as bad seasoning. So I've gone from the bowling alley to now
(14:58):
we're cooking the dinner, and we've got some bad seasoning. You can ruin a great recipe with just a little, and that's what I tend to do quite often. But you can mitigate things: you can look at things like the data, and rigorous testing, but across demographics. That's really important, because it can't always be a one-size-fits-all, particularly
(15:19):
for some of these big systems that we're now seeing in place which are making decisions on diverse populations. I mean, you've mentioned the example there, the discrimination against people of a certain colour. We're not always saying this has been done deliberately. It could just be an actual oversight. It could be an indication that there weren't enough oversight points or think points and guardrails in place. But by
(15:45):
having actual humans with lived experience in various different areas review the results, I think that can only be better, whether that's at ideation or implementation, but all the way across the life cycle. So really that is no different to building diverse, high-performing teams currently, and diversity of thought comes
(16:06):
in all different forms: it's demographics, it's various skill sets, it's critical thinking. It's not always the typical, obvious things either.
Speaker 2 (16:17):
I read, actually, that before ChatGPT 3.5 was deployed, they got 40 people from Upwork, 40 being a small number they can manage, and I read the white paper about that. You might give us some colour on that. So how important was it for them to basically have those 40 people and throw all sorts of ethical challenges at them, so they could see what they were going to get?
(16:38):
Now, day one, I think it still was a bit rough at the edges, but when people realise that they actually did that, is that a really good example of, before you set an AI system to work, actually trying it on real diverse subjects?
Speaker 3 (16:49):
Absolutely. I wouldn't let anything leave the building, so to say, without actually fully testing it, and testing it with a variety of end users, and that is across the diversity mix, whether, again, that's demographics, age, cultural norms, different communities and populations, or different languages, for example, but also
(17:13):
to ensure digital inclusion as well, for those who may have barriers to access; otherwise you're discriminating against them before you've even started. So there's lots to think about there.
Speaker 2 (17:24):
So you've done a lot of work with UK government departments on service development and innovation. What have been the most challenging ethical dilemmas you've encountered when implementing these systems, and how did you address them?
Speaker 3 (17:34):
Like working with any government across the world, there is a lot to balance, and at the heart of it is always inclusion, user-led design, user-led contributions, etc. Again, it goes back to that point of who are these systems going to serve, and trying to make them as fair and
(17:57):
equitable as possible. One project that I did work on that I can talk about, which was very interesting, was supporting farmers, for example. You know, how to provide smarter, more efficient services to farmers to get the subsidies out of the door and enable them to
(18:19):
get the food to market to keep the nation fed. They do an absolutely phenomenal job, and some of that was, you know, education: we can't do things on paper anymore, it's just not going to work. But it's also ensuring that, as you move more towards automated decision making, you are reassuring people that fairness isn't just a checkbox.
(18:41):
So you have to look at those kinds of areas. So some of that is about communication, engagement and inclusion, to ensure that you are bringing people along with you. Again, explainability, appropriate transparency: they fully understand what you do, and that's embedded into every stage, whether that is, you know, the data collection, this is how we're going to do it now, the model training, the human
(19:03):
oversight. And for me, what is the best mechanism? I think it's regular independent audits, transparency to users, and not just what the AI does, but why, and what's in it for the end user. These are the outcomes we're getting; we think they're fair and they're right, but these are the benefits to you, whether that's smarter, more efficient access to
(19:25):
services, decisions, money in your pocket, whatever that may be.
Speaker 2 (19:30):
With a lot of clients I'm working with, the edict from on high has been 'we're doing AI', which is very broad, and so of course people want to get to market quickly, so innovation and speed to market is important. But how can organisations balance responsible AI use with competitive advantage in these rapidly evolving markets?
Speaker 3 (19:50):
If we think about responsible AI, it shouldn't be a burden. It shouldn't be a burden for any company, business or entity out there. Ultimately, it's about your brand's reputation strategy. That's what you always have to think of. You know, I do believe reputations will be won and lost on AI going forwards, whether or not the end result is right. That could, again, just be that appropriate transparency
(20:12):
piece. But the companies that are doing this right are really attracting the talent. We often hear that there's a bit of a skills shortage out there, and now we're all going on a mass upskilling, which is absolutely fantastic. They're winning contracts and building trust, and I do foresee that some may possibly be taking ethical shortcuts. You might save money today, but it's going to cost you, perhaps
(20:35):
the market tomorrow. So there is a balancing act there. Yes, you can come up with the ideation, the prototypes, the minimum viable products, and quickly get them out there and tested, but you still need those appropriate guardrails in there. And again, when you've got profiles and profits involved, you can kind of see why some of these things may happen.
(20:57):
But actually, I think it is for people, whether you're working in tech or DevOps, or you're, you know, in a supporting business function, to stand up and say: right, is this the right thing? Where are these checkpoints? Where's our pilot? Where's our testing? OK, this is now good to go. And you still need those guardrails and those test points as you move along.
(21:18):
It can't just be, let's chuck an AI decision-making system out there and see what happens in the wild, particularly not en masse, because your reputation will be shot down.
Speaker 2 (21:29):
So are there any companies or areas or industries that you think are getting it right, that people should look to and say, this is best practice? Maybe we can call out a couple of really great examples.
Speaker 3 (21:40):
For me, I'm always mindful of saying best practice, because often best practice is considered to be gold-plated, beyond the law. We're just trying to foster, obviously, good practice. In the future we may move to best practice, because obviously this is, you know, a work in progress for many of us out there. I quite like the nuclear industry and the aviation
(22:02):
industry, the real safety-conscious industries. I think, you know, high-profile brands, high-street brands, customer-service-facing entities could have a lot to learn from those areas. And that's mostly because they work on a functional safety model which, again, you can find, you know, if you have a look
(22:24):
online. With aviation, for example, most of us get on a plane now and don't even think about, oh, is this going to get me there, what's going to happen? It may be a floating thought in some people's minds. You sit down and you think, oh, am I going to get my cup of tea? Oh my gosh, is the wi-fi going to work?
(22:47):
When am I going to get off? That type of thing. And that's because they've stuck to the functional safety principles and just kept driving down the risk as much as possible, to what is believed to be the lowest acceptable standard, and they still work on that today. So I think there's a lot to learn there from a governance point of view, particularly because they're dealing with really high-consequence areas and those which could be catastrophic to life.
Speaker 2 (23:06):
So we're seeing a lot of public models, and now I'm seeing with clients that people want to go inside the enterprise and build their own enterprise GPT. What are the differing considerations they should have around guardrails when it's inside the enterprise and inside the firewall?
Speaker 3 (23:19):
There does seem to be a trend of building in-house LLMs. Everyone seems to want an LLM. I'd say, well, what for? Again, if it's making you smarter, more efficient, and, of course, you can build the walls around it so you keep your data, and you can do that with some of the big tech companies or you can do it yourself. I think you just need to consider what you actually want
(23:41):
it to be used for, whether it's to scan through documents, to provide people with summaries quicker, or whether, again, it's to help that ideation to move to the next critical thinking, decision-making stage. Building some of these systems can still be quite expensive, so, again, you've got to work out, you know, the return on investment on some of these.
(24:01):
But again, there is a saying out there: just because we can doesn't mean we should. So again, it's about really understanding what you need them for. And would an LLM be part of your future business operating model going forwards? Would it dock into your current and future infrastructure? For example, are you going to start to automate your work
(24:22):
processes and your workflows and move that forwards, or are you really just thinking pie in the sky, piecemeal, everyone else has got an LLM, so I think we should have one?
Speaker 2 (24:33):
So, looking ahead, what emerging ethical challenges do you anticipate in AI development, and what guardrails should organisations be establishing now to prepare for these challenges?
Speaker 3 (24:42):
It's always about trying to be on the front foot with sort of your AI and emerging technologies, quantum, for example, and the whole innovation piece, whilst doing it responsibly. So some of the things that I'm looking at quite closely are, you know, sort of foundation models, to see where they go, synthetic data, for example, there's a lot of discussion out there on that,
(25:05):
and also agentic AI, quite closely. I think agentic AI has got good promise. And again, quantum, because quantum and AI will be closely linked, particularly going forwards. And there is also the ethical consideration for quantum AI, whether that's quantum-fuelled AI or the two working quite closely together. So whichever way we go, we will need guardrails
(25:27):
for autonomy, accountability and governance. I mean, some of this is technology agnostic; it has to be going forwards, and that would also place, I think, whether it's people in the boardroom or DevOps, in a good place going forwards, especially as AI begins to act as a decision maker rather than the tool, and that's when it will get very interesting, I think.
(25:49):
And on guardrails, I mean, I think for me it's about guardrails without rigidity, because obviously we have to be agile now with AI. If we're asking people across the AI life cycle, at various points in the systems, to be agile, you might need to switch the system off there. Pause.
(26:09):
That's not quite right. Or somebody's asked for their data to come out, whether it's anonymised or not: what are you going to do there? So there shouldn't be handcuffs, is what I'm saying. It should be more like scaffolding. That's how I think of guardrails, and for me, the best ones are principle-driven ones with room for context. This is going to be really important going forward.
(26:31):
What is the context? AI can make decisions all day long if we get this right, but what is the context here before we actually go ahead with that system or with that end decision, for example, whether that's internally for a business or externally to end users? You know, for example, how you handle explainability in a
(26:51):
health app, for example, or a recommendation system, may differ from somewhere else, and ultimately, standards as part of this give direction, but they're not a dictatorship. So there's, you know, agile scaffolding, but not handcuffs. We need to be not rigid, but have those pause points and
(27:12):
those shut-off points and restart points as necessary.
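That idea of scaffolding rather than handcuffs, with pause points, shut-off points and restart points, can be sketched very simply in code. The model, the confidence threshold and the escalation rule below are all invented for illustration; the point is only that the guardrails live around the model and can halt or route decisions to a human at any time.

```python
# A minimal sketch of pause, shut-off and restart points around an automated
# decision system. Everything here is a stand-in, not a real product design.
from dataclasses import dataclass
from enum import Enum
from typing import Callable


class SystemState(Enum):
    RUNNING = "running"
    PAUSED = "paused"      # a human has hit the pause point
    STOPPED = "stopped"    # kill switch pulled


@dataclass
class GuardedDecisionSystem:
    model: Callable[[dict], float]   # returns a confidence score between 0 and 1
    confidence_floor: float = 0.7    # assumed threshold for autonomous decisions
    state: SystemState = SystemState.RUNNING

    def pause(self) -> None:
        self.state = SystemState.PAUSED

    def resume(self) -> None:
        self.state = SystemState.RUNNING

    def shut_off(self) -> None:
        self.state = SystemState.STOPPED

    def decide(self, case: dict) -> str:
        # Guardrail 1: respect the pause / shut-off points.
        if self.state is not SystemState.RUNNING:
            return f"held: system is {self.state.value}, route to a human"
        # Guardrail 2: low-confidence decisions escalate rather than auto-act.
        score = self.model(case)
        if score < self.confidence_floor:
            return f"escalate to human review (confidence {score:.2f})"
        return f"automated approval (confidence {score:.2f})"


# Example usage with a stand-in model.
system = GuardedDecisionSystem(model=lambda case: case.get("signal", 0.0))
print(system.decide({"signal": 0.9}))   # automated
print(system.decide({"signal": 0.4}))   # escalated
system.pause()
print(system.decide({"signal": 0.9}))   # held while paused
```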
Speaker 2 (27:16):
Just picking up on the agentic theme, agentic AI or AI agents. This is where AI will be somewhat autonomous, so, as you said, it's making decisions based on various inputs. What I've always thought is that it's going to be about trust. So I would trust an agentic system to book me a meeting; I'm not sure I would trust it yet to access my bank account and pay for something. So where will trust come into the world of agentic AI,
(27:37):
do you think?
Speaker 3 (27:42):
I think that goes back to: you have to build trust to start with. Again, there's a lot of discussion about how you treat people fairly with AI. You know, there are current laws in place in many countries; people, consumers, citizens have to be treated fairly. What does that actually mean? That is to be determined by, you know, whoever's developing the systems and the ultimate decision makers, and it also
(28:04):
goes back to, I think, brand reputation, organisational tech development and trust. You need the right guardrails in place. You need to explain, you know, what you're doing with these systems for people, ultimately, to trust them, and they need to do what they say they're going to do after you've explained it to people. Because ultimately, you know, human beings have a fear
(28:24):
of the unknown. There is a psychological element to this, and AI, for many, can be the fear of the unknown. We know AI, for example, has got a bit of a marketing problem out there. The robots are coming; you see robot hands over the typewriters, the glowing brains, for example. It's got nothing to do with that, and that's from some quite high-profile commentators and some organisations.
(28:47):
So we just have to kind of be measured with this, and a lot of it will come down to explainability. And then, into the future, you know, AI will just become the tool. It won't be the story anymore. That's where I hope that this ultimately goes.
Speaker 2 (28:59):
So regulation is a hot topic. Every government wants to be seen to be regulating things properly. How can companies best prepare for what will probably be fairly heavy regulation, depending on the country and the culture?
Speaker 3 (29:12):
Yeah, this is always an interesting question, and it's particularly important for those that may have to operate across various different regulatory jurisdictions. You know, you earlier referenced the EU AI Act, for example, across the EU member states. The UK is no longer a member of the EU, although we have continued
(29:35):
with, you know, the General Data Protection Regulation, the data side of things, which is a big wraparound for AI. I think, ultimately, wherever you operate, it's about understanding if you are in a regulated sector, of which there are many. I think in the UK alone there are 90 to 100 different regulated sectors, some well-known, some key economic regulators and some
(29:57):
quite niche areas. It depends where you are operating. It's about keeping in touch with what they're doing, understanding any calls for input or consultations that they're putting out and, you know, reading them quite thoroughly and responding to them. If you don't have your say, that's up to you. But that is a mechanism, and they are quite willing to have
(30:21):
conversations with people, and it's about understanding what the approaches are. I mean, the EU AI Act, it's no secret, it's on the EU Commission website. We all knew it was coming. You know, it's a phased approach, and 2026 is a big date for when, you know, some of the bigger things will start to be implemented. For example, the EU is putting on regular webinars now for
(30:44):
businesses that operate in those countries, through those countries or to those countries. You may not be based in a member state, but you might take data from them or operate through them, so see if you can attend some of those, have a look at what guidance is being put out, have a look at what information is being put out and then start to map it out. You know, put in place:
(31:05):
OK, I operate in the UK, I need to ensure these are the principles I adhere to. Most regulators in the UK are doing it against current legislative frameworks, so you've already got your licensing agreements and codes that you need to adhere to. There are various laws in place that you would still need to adhere to, and, you know, you need to foster that good practice.
(31:26):
So it's just about mapping it out in that sense, and I would think that, you know, anyone that is in a regulated sector, whether it's in one jurisdiction or a couple or many, wherever it is in the world, if you haven't been thinking about this already, what's been going on? But it's never too late.
Speaker 2 (31:47):
Some great practical tips there. We're almost out of time, but we're at my favourite part of the show, the quickfire round, where we learn more about our guest. I'm going to fire some questions at you. iPhone or Android?
Speaker 3 (31:56):
Android. I like systems I can open up and poke around in. Window or aisle? Aisle. I like the freedom to get up and go, metaphorically and literally.
Speaker 2 (32:04):
Your biggest hope for this year and next?
Speaker 3 (32:06):
That we stop asking if AI needs ethics and start asking how to build it in as default.
Speaker 2 (32:11):
I wish that AI could do all of my...
Speaker 3 (32:13):
If it could triage my inbox for me in an effective way, I think I'd give it a hug.
The app you use most on your phone? Probably an app called Notion. It's becoming a bit like my second brain at the moment.
The best piece of advice you've ever received? Don't just build things people can use; build things they should use.
What are you reading at the moment? I've started to read a good book.
(32:34):
It was recommended to me. It's by a lady called Wendy Liu, who I'm sure a lot of your listeners will have heard of. It's called Abolish Silicon Valley. It's quite provocative, but it could be quite timely.
Who should I invite next onto the podcast? I think maybe someone from a community organisation working on tech justice. I think they might be able to give you a completely different lens on AI.
Speaker 2 (32:54):
How do you want to be remembered?
Speaker 3 (32:55):
I think as someone who helped AI work for everyone, and not just the powerful.
Speaker 2 (33:05):
So we're all about actionable things here. What three actionable things should our audience do today to prepare for the threats and opportunities from AI ethics and guardrails?
Speaker 3 (33:08):
If you're using AI already, audit the AI systems that you use. Think about how they treat people. What's your governance framework? Could you confidently disclose that to somebody or communicate it in a very easy, basic way? Add an ethical review to your project board or your working
(33:28):
groups, whatever your structure is within your business or organisation. And include someone new with lived experience in your next design meeting. We can't stand still, and I think there are always people out there that can add something new.
Speaker 2 (33:44):
Great actionable advice. Kerry, a fascinating discussion; I've learned so much today. How can we find out more about you and your work?
Speaker 3 (33:49):
Well, LinkedIn would be the best place. I do get back to people, and I'm happy to have conversations. I'm not saying I know all the answers by any stretch of the imagination, but between us all we can always find someone that can answer any questions, and I promise I will get back to everyone. I'm a bit behind,
(34:09):
but I will respond.
Speaker 2 (34:10):
Kerry, thank you so much for your time today.
Speaker 3:
You're welcome. Good fun, thanks.
Speaker 1 (34:14):
Thank you for listening to Digitally Curious. You can find all of our previous shows at digitallycurious.ai. Andrew's new book, Digitally Curious, is available at digitallycurious.ai. You can find out more about Andrew and how he helps corporates become more digitally curious with keynote speeches
(34:37):
and C-suite workshops at digitallycurious.ai. Until next time, we invite you to stay digitally curious.