
April 19, 2024 · 24 mins

In this special episode ahead of series two, Smera and Jonah give us a whistle-stop tour of the UK’s national showcase of data science and AI - AI UK.  

The event’s producer, Lilian Hughes, talks us through some of the programme highlights, and Smera and Jonah chat to participants including Michael Wooldridge (University of Oxford), Doug Gurr (The Alan Turing Institute), Rebecca Cosgriff (NHS England), Stephen Meers (Dstl) and Jake Elwes (ZiZi Project). 

 

https://ai-uk.turing.ac.uk/ 

https://www.youtube.com/@AI_UK 


Episode Transcript

Jonah (00:01):
Too Long, Didn't Read. Brought to you by the Alan Turing Institute, the National Institute of Data Science and AI. Hello, welcome to a Too Long, Didn't Read special. I'm Jonah, a content producer here at the Alan Turing Institute.

Smera (00:17):
And I'm Smera, a Research Assistant in Data Justice
and Global Ethical Futures.

Jonah (00:21):
Ahead of our second season of Too Long, Didn't Read, we wanted to ease you in with this special episode all about something close to our hearts, or at least something that has been consuming our work hours for the last few months: AI UK, the national showcase of data science and AI, brought to you by the Alan Turing Institute. On this episode, we're going to explore how drag shows can use

(00:42):
AI to explore society's biases, how the UK could be the best place to further AI in healthcare, and how children are helping shape the AI of the future.

Smera (00:51):
AI UK is our annual two-day event, and this year it took place in March at the QEII Centre in London. It was packed to the brim with interactive demonstrations, workshops, and talks from some of the leading minds in data science and AI. You can imagine the setting, Jonah: you're in Westminster, Houses of Parliament across the street, British think tanks lining the streets of

(01:13):
Whitehall, the crisp spring weather, and, most importantly, data science.

Jonah (01:18):
You make it sound very romantic. But, uh, we were kind of running around like headless chickens trying to get interviews and things like that. So AI UK is open to the public, and the entire event was streamed online, free for all, but the audience does tend to be predominantly researchers, business and government people. So, since there was so much good and relevant stuff, we decided to dedicate an entire episode to what we learned and who we met.

Smera (01:46):
So, to kick us off, we are joined by the one and only Lily Hughes, the producer behind AI UK. Welcome, Lily. I imagine AI UK was a lot of work. Have you recovered?

Lilian (01:58):
Hi, Smera. Thanks so much for having me. I don't think anyone ever really recovers from AI UK, but I am doing much better now. I've rested, the sun's shining, back to my normal work hours. It's all good.

Jonah (02:09):
Well, congrats on your work, Lily. Um, what is AI UK, for our listeners, and why is it relevant?

Lilian (02:15):
AI UK is really an opportunity to bring all these different people from across the AI ecosystem in the UK together. So the Turing works with academics, with researchers, with universities across the UK. We work with industries, with corporate partners. We work with government, policy makers, civil servants, and we work with the third sector as well.

(02:35):
And very rarely do events bring all of those different people together to talk to each other and learn from each other. And that's what AI UK does.
Great. Yeah. Meeting of minds.
A meeting of minds. That's really cool. Thank you.

Smera (02:48):
Well, amongst the minds that were met, how is AI UK actually put together? Who decides, you know, which minds receive the showcase to talk, discuss, and demonstrate some of the work they're doing?

Lilian (02:59):
AI UK is the work of so many people. So there's a couple of different groups that I work with the most, I would say the most important of which is the program advisory group. This is a group that volunteers their time from the Turing. It's both business leaders at the Turing and researchers at the Turing, uh, from all different levels across the Institute. They come together.

(03:20):
They think about what we should be platforming at AI UK. And then we go from there. What's possible, what isn't possible, how do we want to shape it, working together to build those ideas, develop them, and eventually showcase them at AI UK.
Smera (03:33):
And a showcase it was.

Jonah (03:34):
Yes, it was. Uh, a lot of people were involved, and we were all some of them.

Smera (03:39):
So you've seen at least two AI UKs through. What was different about this year? What were your favorite bits? What are some main highlights that you came across, and what was different about AI UK 2024?

Lilian (03:51):
I don't know if I can ever choose, sort of, favorite sessions. They're all brilliant in their own way, but there were some really fun things that we did this year. I worked, as I said, with the program advisory group to come up with some content that's a little bit different from what we'd done before. One of the things that I really enjoyed doing this year was the opening provocation. This was a project spearheaded by Drew Hemment, uh, called The New Real.

(04:13):
So we worked with Jake Elwes and collaborators to present a kind of positive utopia of AI, a bit different from the, the fear you sometimes hear about existential risk. We were really thinking about how humans and AI and society, all of this, is going to come together. And there is a, there's an opportunity here for positive change.

(04:37):
And I wanted to encourage everyoneat the event to be curious about
AI, to think about it in a way theymight not have approached it before.
So we did this as the openingprovocation to really start off on
that foot, celebrating AI, celebratingartists and celebrating being

Smera (04:51):
curious.
That's brilliant.
I think that was a phenomenal show.
We did manage to catch up withme, the drag queen and the project
creator, Jake Elwes backstage at AIUK.
And we asked them why drag isa good vehicle to explore AI.

Me the Drag Queen (05:06):
We're throwing rhinestones and wigs and big makeup on scary tech so that people can understand that there are amazing ways to use it as well as insidious ones.

Jake (05:14):
Yeah, and drag is a great way of doing that, because I guess AI systems have a lot of bias towards normativity. So for us, if we're injecting drag kings and drag queens and drag things, gender non-conformity, into those systems, it's a wonderful way of exploring, kind of, biasing the AI towards otherness rather than normativity, and breaking it down, and seeing when it glitches, and finding poetry in when it fails.

Me the Drag Queen (05:38):
I think any technology, any, anything that impacts society in such a way has to represent and reflect all of society. You can't just have it modeled on the majority, because the rest of us do exist. We're here. There may not be many of us, but we're here, and we're kind of more fabulous than you, so get us involved. I guess for me, it ain't a party if the queers aren't there, darling.

Smera (06:00):
So Me the Drag Queen and Jake also told us about how AI has influenced their own work. I think this is important because we see now why representation is important, especially in places where queerness is often on the margins, if it is even included at all.

Jake (06:15):
I've been working, yeah, I guess I've been working with, like, AI machine learning for, like, coming up to 10 years now. Um, so right in the early days, when AI was in its infancy, the earliest generative adversarial networks after Ian Goodfellow's paper, I was in the basement of my art school, like, programming these systems and hacking with them and, and kind of finding poetry in when they failed and broke

(06:37):
down. And back then it was like, it could only generate these tiny little images. And I guess my thinking around AI has changed a lot. Like, I used to very much be interested in, in these kinds of questions of agency, uh, and consciousness, and how, how much kind of agency can I give the computer as an art-making machine. But then, you know, as I kind of carried on researching in this field, I realized

(06:59):
that questions about who's building these systems and who they are building them for came more to the forefront. Looking at bias, looking at far more pragmatic things: what's in the data set, whose data set is it? Is it my data set? I think early on it was about appropriating large data sets and kind of making them fail and pointing out sometimes political things in that sense. But then it became far less about the sort of metaphysical questions around

(07:21):
AI and more about the kind of pragmatic: how is this affecting people right now? And how can we, yeah, offer alternate visions of kind of AI futures?

Jonah (07:32):
It's interesting, isn't it? It sort of shows how art is one of the first methods to take new technologies apart and, like, question the ethical side of how they're used. Maribeth Rauh, an artist and research engineer at Google DeepMind, also fresh off the stage, told us about how AI relies so much on categorization and how queerness can encourage us to think beyond categories.

Maribeth (07:52):
Queer representation is important in AI because it's, um, one of those spaces that breaks down the binaries that we have and the, the fixed categories that we have. I think queerness pushes the boundaries of categorization into spaces that are more fluid. And in AI, we often are putting things into categories and boxes.

(08:12):
Classification was a, is still a very, like, common application of AI, and queerness challenges that. And it's important to be critically interrogating our technical systems. And so queerness brings that perspective, and that's really important for building, like, societally responsible and just also better-performing AI systems.

Jonah (08:30):
Such a cool way to start AI UK, Lily. Um, I imagine it challenged a lot of people's preconceptions, that opening provocation. Um, we'll link Jake's work in the show notes, so be sure to check that out. So that was something new for this year's AI UK. Something else that was new for 2024 that wasn't there in 2023 were workshops.

(08:52):
Am I right with that?

Lilian (08:53):
We actually had some workshops in 2023, but they were only an hour long. They were very limited in scope. This year we really expanded it. We had seven two-hour workshops throughout the two days, and they were really designed to be these very interactive, very intensive sessions, uh, where they

(09:14):
were designed to solve problems, if that makes sense. I didn't want the workshop to be another panel. We had plenty of space for panels on the stages. So what is a workshop? What are you doing there? Are you building a community? Are you finding new projects? Are you solving a problem? Are you getting into data, into research? All these things, I was like, that's all on the table. And there were some hugely creative workshops at AI UK.

(09:36):
I'm really, really proud of this part of the program, actually.

Jonah (09:39):
Yeah, I thought they were really cool. Like, I was running around with my camera, and I spent quite a lot of time in, um, the Lego play workshop, where, uh, they were sort of building things to discuss cyber defense, and it was so good. There was so much interaction.

Lilian (09:52):
And I think one of the things about events like this, and actually this calls back to what Jake was doing as well: these events, these showcases, they're opportunities for us to experiment and play outside of the everyday. We can try things we haven't done before. We can meet people we haven't met before. And I think if you're all around a table, all playing with Lego or all brainstorming

(10:12):
ideas with, you know, with strangers, with new friends, if you're watching a deepfake drag queen, you know, it's going to challenge what you think and what you encounter in your every day. And it's so important to do that work. It's so important to experiment and to play, especially in a field like AI. So I think the workshops were really trying to encourage that.

Jonah (10:33):
During another workshop, I spoke to Rebecca Cosgriff, who is the deputy director for the data for research and development program at NHS England. Um, Rebecca was a lead on a workshop about unlocking healthcare data for safe, transparent, and fair AI. And she told me what she was learning from having this interactive sort of session.

Rebecca (10:51):
Yeah, I think one of the key learnings for me today was that there was actually really significant consensus across England, Scotland, and Wales, which were represented on the panel, but also in some of the table discussions across industry, academia, and the NHS, on some of the key enablers of AI, including things like the proactive curation of data before it's provided out to researchers.

(11:13):
Um, it's really important that organizations like the Alan Turing Institute have rapid access to granular, multimodal healthcare data generated by the NHS and other data sources to answer some of our really key, imperative questions on how we improve care for patients, support the NHS, and drive innovation.

Smera (11:31):
Speaking about healthcare data, there was also another panel on improving disease detection. Doug Gurr, Chair of the Board of the Turing and Director of the Natural History Museum, was on this panel, and he told us why the UK is not only the best place, but probably the only place, where such advances can happen.

Doug (11:49):
So healthcare data is probably the most sensitive area. You've got to be able to bring patients with you. You've got to bring trust. And that's why you need a regulatory environment that can reassure: that we're going to do this in an ethically sound, sensitive, safe way, but at the same time in a way that doesn't constrain that innovation. And the UK is, I would say probably, but actually I'm going to go for

(12:10):
certainly, the best place in the world to do this, because only really in the UK do you have those amazing data sets. And so, with the opportunity to bring together the data science talent, the clinical talent, get the government involved and actually bring everybody around the table, so that, for the first place in the world, we can truly reap the benefits of what AI can do for healthcare.

Smera (12:30):
The element of trust raised by Doug is very crucial with the technology we are confronting. Almost daily we see reports on AI for good, especially in healthcare, but we also see how AI has the capacity to disrupt labor markets and influence the economy, giving a lot of people reasons to be skeptical. Hence, I think that trust factor should be at the center of developing an

(12:51):
inclusive and equitable path forward, particularly in using AI for healthcare.

Jonah (12:56):
Yeah, definitely. So when I was exploring the demonstration area, I caught up with a team who are developing digital twins of hearts. So you can have a digital copy of your own heart with all the accurate medical data, and then they can run simulations of, say, different drugs or stresses on your heart and see how it would respond. That's pretty cool, isn't it?

Smera (13:17):
I think my own heart may be too broken for a digital twin. Ah, Smera! Anyway, this is phenomenal work. It was really exciting to see them 3D printing these hearts right in front of us on the expo floor. But beyond just the excitement of seeing a 3D-printed heart, the future of this work could revolutionize healthcare for a very particular group.

(13:38):
Can you guess which one?

Jonah (13:39):
Is it a vulnerable group?
It is a vulnerable group.
Do you want to get specific?
Is it, um, children?
Children.

Smera (13:48):
So we can use digital twins to advance children's healthcare without actually involving real human children. Children have very different bodies that are constantly changing, and we cannot merely copy-paste adult healthcare responses onto a child. Moreover, I think the ethics of, you know, trialing and testing these healthcare responses on a vulnerable group like children is at the

(14:10):
forefront of the concerns faced by regulators and governments. Of course, at the heart of all of this is data, and the Turing has been working very closely with children to better understand how to safely use that data. From the "What can children teach us about AI?" session, we grabbed Turing fellow Mhairi Aitken and Steph Wright from the Scottish AI Alliance

(14:33):
to talk about their project, which puts children's voices at center stage.

Steph (14:37):
Well, in Scotland's AI strategy, we had a commitment to adopt, uh, the UN's policy guidance on AI and children. And we wanted to explore how we can engage with children to get their input into our shared AI futures. Uh, it just so happened that at that time, Mhairi and her team at the Alan Turing Institute were also interested in that.

(14:59):
And, um, I thought, what better organizations to bring together than the academic excellence of the ATI with the children's rights-based approach of the Children's Parliament.

Mhairi (15:10):
When we think about child-centered approaches to AI, it's important that we're not just thinking about safeguarding children from the risks. Of course, that's one really important dimension of it, but often there can be kind of overly paternalistic approaches, which are all about identifying from an adult's perspective what the risks are and safeguarding or protecting children from those risks or perceived risks of AI. But if we don't actually speak to children and understand from children's

(15:33):
perspectives what their experiences are, what their interests are, what their concerns are, we might miss some really important aspects of this. Um, and it's also important that this isn't just about identifying risks or safeguarding children from risks. It's also about finding ways that we can maximize the value and maximize the benefits of technology and innovation for children. When we speak to children about AI, the big themes that come out, the kind of

(15:54):
central areas that, that they really want to focus on discussing, um, uh, quite consistently, are around themes of fairness, um, and particularly how these technologies might work differently for different children. Uh, and I think the, uh, the children that we've spoken to certainly seem to really kind of intuitively, uh, gravitate towards the concept of fairness. It wasn't something that we introduced. It wasn't something that we planned to have as a, as a central theme of the

(16:17):
engagement, but it was really what, what the children wanted to talk about. Um, and they grasped very quickly that, that AI might have different outcomes for different groups of children. Um, and that they were really wanting to understand more about how we could develop these systems to make them fairer, to make sure that they, uh, had, you know, equitable benefits for, for different groups of children. That's part of the value of having children in these conversations,

(16:38):
because, well, as adults, fairness is maybe a kind of abstract concept, you know, something that we know is important, but adults often make kind of, uh, justifications: oh, well, that might not be fair, but it's because of this, this, this, and this. Whereas children will say: that's not fair. That's not okay. You know, we need to do something about it. We need to make that fair. And actually, that's sort of the value of bringing that children's

(16:58):
perspective into these discussions.
Steph:
Yeah, I love it when the plan comes together. So obviously the collaboration kicked off. We're now approaching the end of phase two. The first year was, um, uh, led by the Turing Institute, uh, to explore children's rights and AI. The second year, which we've just come to the end of, is about exploring how to dig

(17:18):
deeper into operationalizing some of the, um, findings in phase one of the project, especially around, you know, safety, uh, AI in education, and, um, and, and bias, which were all these concerns that children expressed they were particularly interested in exploring. So phase two was about, you know, partnering up with actual organizations

(17:41):
with actual projects or policies they were developing that the children can, you know, meaningfully...

Jonah (17:49):
That's really interesting. Maybe this will begin to fix the trend of future generations having to fix past generations' mistakes. Talking of futures, I was at AI UK last year and heard a lot of predictions for the year ahead. And a lot of them were about how generative AI like ChatGPT was going to go stratospheric and how we would start to encounter more misuse of those tools.

(18:13):
This year I caught up with Michael Wooldridge, who chaired a session all about large language models, and asked for his thoughts.

Mike (18:15):
So I think there's two things I think are really interesting to keep an eye on in terms of risks. So the first is about misinformation and disinformation, particularly in elections. And as I'm talking now, within the next year we're going to have more than a billion people worldwide going into elections. We've got elections in the UK, we've got elections in the US, elections in India, the world's biggest democracies.

(18:36):
And the fear that was just beginning to be voiced a year ago was that AI was going to be used to generate disinformation on an industrial scale. Now, the worrying thing is we are beginning to see the signs of that happening. We're beginning to see fake news stories. And actually, interestingly, we're beginning to see news stories where people are claiming that it's AI-generated, even though it's actually original, which

(19:00):
is not something that we anticipated. So I'm keeping, I'm looking at that nervously. Um, the government's announced initiatives to try to deal with that. Let's hope that they get it right. And I think the Turing will be, will be front and center in those discussions about, about how to get that right. The other thing is the age-old question of AI and employment.

(19:21):
And again, a year ago we were looking and contemplating whether large language models were going to lead to unemployment on a, on a large scale. For example, we're just beginning to see the first signs in some sectors of the impact of large language models. So we're beginning to get, for the first time, uh, some understanding

(19:43):
of how this technology is going to affect the workplace. And I think this is going to be a crucial year. At the end of this year, I think there's a real chance that we will have seen some really significant signs of how AI in general, but large language models in particular, are affecting the workplace. So I think that is something that we should really keep an eye on over the next year.

Jonah (20:07):
Lily, are there any sessions that we haven't touched on that you want to cover? I'm probably drawn to some of the, the big headlines, um, some of the more accessible sessions. Um, uh, but I'm very aware that there's loads of stuff that's been talked about that will be interesting to lots of different audiences.

Lilian (20:23):
Yeah, absolutely. So there were about 50 sessions, all in all, at AI UK. So there are so many I wish I could talk about, and they really do range in topic. We had sessions on AGI and LLMs. There were sessions on productivity, which might not seem sexy, but that's the stuff that's going to impact my life. That's going to make it easier for me to fill out forms, for me to

(20:45):
interact with the state; that's going to make my life better as a citizen. And I think that stuff's really important and really interesting. It's really important to have these conversations that we might not be having in other spaces, or we are having in siloed conversations where just experts are talking to each other. But at AI UK, like I said, everyone's there: policy makers, experts from across different fields, researchers,

(21:08):
students, professors, industry leaders. So you're getting all sorts of people together to talk about things that they might not have previously talked about or heard about. Defense and security, for example. I think it's really important to platform those issues at AI UK and have a discussion outside of the defense and security ecosystem. Obviously, the people who are interested in it, they're welcome to come, they learn, and they can contribute to that conversation.

(21:31):
But people who might not be familiar with that field, it's great for them to see that content as well, I think.

Smera (21:36):
No, I fully agree, Lily. I think with the recent developments, we can see how AI and tech is being positioned as a very important tool in a nation's defense arsenal. Throughout history, we've seen how advances in tech have been driven by investments in research and development by departments of defense. For instance, you know, I remember in our episode on chip wars, we saw how defense

(21:59):
investment was critical to improving microchip architecture and capacity.

Jonah (22:05):
I do. Yes. That was a good one, Chip Wars. You can check it out in Series One. So there was a session at AI UK called The Secret Session, which was a chat between Tim Watson from the Alan Turing Institute and Stephen Mears, who works for the Defence Science and Technology Laboratory, Dstl, the research arm of the Ministry of Defence. We spoke to him about the impact AI is having on defense and security.

Steve (22:25):
So I think, uh, defense and AI is obviously a topic that a lot of people feel concerned about, but for me as a scientist working in the defense area, I really see this as a transformational technology that can really support our armed forces in the difficult role that we undertake, that they undertake.

(22:46):
So, um, everything from things like command and control, where they have to make difficult decisions: how can we use AI to help get them the best information, to help them make the best possible decision in the different environments that they're in? Intelligence, surveillance, and reconnaissance: how can we use AI to help make sense of massive amounts of data and help them understand

(23:09):
what's going on around them? And then, perhaps closer to home, how might we be able to use AI to counter disinformation and misinformation and help ensure that people can really understand the authenticity and provenance of the information they're seeing on the internet?

Jonah (23:29):
Lily, thank you so much for joining us for this, uh, Too Long, Didn't Read special. Uh, and congratulations to you and all your colleagues. And that also obviously includes us. Congratulations to everybody involved in AI UK.

Lilian (23:42):
Thank you so much, Jonah. It's been great to be here, and I'm so glad we can rewatch it on YouTube as well.

Jonah (23:47):
Yes, all the sessions, cut-downs, lots of exciting content will be on our YouTube, but we'll probably talk about that when it actually appears there.

Speaker 9 (23:54):
Thanks, Smera. Nice to see you again after all this time. And hopefully I'll see you in a month for a brand new season.

Jonah (24:02):
Yep, and, uh, thank you of course to Jesse, who's in the background furiously scribbling away notes, that is, not just drawing a picture, um, and you for listening. See you soon for series two. Toodaloo!