All Episodes

August 13, 2025 39 mins

How does a particle physicist end up shaping the UK Government’s approach to artificial intelligence? In this thought‑provoking episode, Andrew Grill sits down with Dr Laura Gilbert CBE, former Director of Data Science at 10 Downing Street and now the Senior Director of AI at the Tony Blair Institute.

Laura’s unique career path, from academic research in physics to the heart of policymaking, gives her a rare perspective on how governments can use emerging technologies not just efficiently, but humanely. 

She shares candid insights into how policy teams think about digital transformation, why the public sector faces very different challenges to private industry, and how to avoid technology that dehumanises decision‑making.

Drawing on examples from her work in Whitehall, Laura discusses the realities of forecasting in AI, the danger of “buzzword chasing”, and why the next breakthrough in Artificial General Intelligence might well come from an unexpected player, possibly from within government itself.

This is a conversation for anyone curious about the intersection of science, policy, ethics, and technology, and how they can combine to make government more responsive, transparent, and human-centred.


What You’ll Learn in This Episode

  • How Laura Gilbert moved from particle physics research into government AI leadership
  • The strategic role of AI in shaping modern policy and public services
  • Why forecasting in AI is harder than it looks—and how this impacts decision‑makers
  • The balance between technical capability and human‑centred governance
  • Why governments must look beyond the tech giants for innovative solutions
  • Lessons from the Evidence House and AI for Public Good programmes

Resources

Tony Blair Institute for Global Change website
UK Government AI Incubator
Laura on LinkedIn
Raindrop.io bookmarking app

Thanks for listening to Digitally Curious. You can buy the book that showcases these episodes at curious.click/order

Your Host is Actionable Futurist® Andrew Grill

For more on Andrew - what he speaks about and recent talks, please visit ActionableFuturist.com

Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Order Digitally Curious


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Voiceover (00:00):
Welcome to Digitally Curious, the podcast that will help you navigate the future of AI and all things tech with your host, Actionable Futurist, Andrew Grill.

Andrew Grill (00:12):
Today on the show we have Dr Laura Gilbert CBE, Senior Director of AI at the Tony Blair Institute for Global Change. Laura embodies a spirit of digital curiosity and visionary leadership, harnessing artificial intelligence to help governments deliver resilient public services for better outcomes for citizens globally. With a doctorate in particle physics from Oxford and degrees from Cambridge, Laura has a diverse background spanning

(00:33):
defence, intelligence, quantitative finance and medical tech entrepreneurship. Awarded a CBE for Services to Technology Analysis in 2023, she's also a visiting professor at LSE and a seven-time British Savate kickboxing competitor. Welcome, Laura. Hello, thank you very much for having me. So, to start, could you share what your role as Senior Director of AI at the Tony Blair Institute for Global Change

(00:55):
entails and how it advances AI's impact on government and public services globally?

Laura Gilbert (01:01):
Tony Blair Institute is sort of taking a slightly changed role, I think, in the world of political advice and leadership. The Institute focuses very much on advising world leaders to try and generate better outcomes for their citizens and drive better decision-making in government, but the work now is taking on a more practical tone.

(01:21):
We are building up a tech incubator, so bringing in expert AI, data, security people to actually deliver products and solutions that are specifically tailored for the needs of governments and, again, to try and drive that better decision-making process.

Andrew Grill (01:37):
So we met recently at the Dell Executive Networking Forum in London, where you piqued my interest in getting you onto the show by discussing the Tetlock study. It was fascinating and it had the audience enthralled for, I think, five or six minutes. Could you explain what it is and how it applies to experts like us trying to predict what might happen with AI over the next five years?

Laura Gilbert (01:56):
This is something I quote very frequently, so I find it fascinating. I think, for context, I first became interested in the Tetlock study when I joined Downing Street in September 2020 and was trying to sort of figure out why the use of evidence wasn't as widespread as I thought it should be. The Tetlock study was kicked off in the mid-1980s by Philip

(02:18):
Tetlock, and he wanted to understand something very similar really. So he found about 284, 285-ish policy professionals and people that were working in journalism or government or that sort of thing, and he asked them a series of 100 questions. The questions were relatively simple, along the lines of

(02:41):
this thing that's happening in the world now, in 20 years' time, will there be more of it, less of it or roughly the same? They had to give their predictions, and he very patiently waited 20 years. Then he wrote up his research, and I think Daniel Kahneman read this paper and commented that the policy professionals had done about as well as monkeys throwing darts at a dartboard in

(03:01):
terms of predicting the future. So they did slightly better than just random guesswork, and they did less well than what was rather fancifully termed the minimally sophisticated statistical model, which is pretty much just chart what happened before, draw a straight line through it with a ruler and assume that's going to be the next thing.
Interestingly, they did worse in their own area of expertise

(03:23):
than they did in areas where they weren't expert. It wasn't statistically significant, but it was slightly worse. So they're actually less good at predicting the future where they really knew their stuff, and they were highly, highly confident that they actually were going to be proven right, and that was the main outcome of the study.

(03:45):
And even when they were presented with the results of this study, they tended to come up with a lot of justifications. You know, well, I was nearly right. Well, if only this other thing happened, I would have been right. It's a very interesting psychological study, and it tells us a lot about our ability to predict the future. And for me, it tells us a lot about our ability to make good decisions, which is what I'm particularly

(04:06):
interested in. We're overconfident in our ability to predict what happens next, and what that means is that when we say to ourselves, well, if I do this, then this will be the outcome, we believe we're good at it, and, across the board, pretty universally, we're not very good at it. Therefore, future prediction for me is something that I try to think about in

(04:27):
terms of possible outcomes and the kind of things that you need to evaluate and monitor to check the direction, and to really press back on my thought processes, because I know I'm very vulnerable to this, to make sure that I'm not making those assumptions and assertions that lead me in the wrong direction.
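The "minimally sophisticated statistical model" mentioned above, chart what happened before and draw a straight line through it with a ruler, is just linear extrapolation. A minimal sketch (my own illustration with made-up numbers, not data from the study):

```python
# Ordinary least-squares line fit plus naive extrapolation: the
# "draw a straight line through it with a ruler" baseline.

def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

def extrapolate(xs, ys, future_x):
    """Assume the straight-line trend continues and read off the future."""
    a, b = fit_line(xs, ys)
    return a + b * future_x

# Hypothetical historical series: some quantity measured yearly.
years = [1980, 1981, 1982, 1983, 1984]
values = [10.0, 12.0, 14.0, 16.0, 18.0]

# Naive 20-years-out forecast, in the spirit of the study's baseline.
print(extrapolate(years, values, 2004))  # trend of +2/year gives 58.0
```

The point of the study, of course, is that even this crude ruler beat the experts.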

Andrew Grill (04:46):
So I find that fascinating, because there are so many people out there that think they can predict what's going to happen next in AI, and the answer I give is: you can't. And when I'm asked for long-term predictions, I take a long intake of breath and I go, well, I don't think this is going to be right. There have been a couple of things recently that have come true that have taken six or seven years. I actually interviewed someone about six years ago about AGI,

(05:07):
and he said the company that's going to nail it will be one we haven't heard of yet, and probably the OpenAIs of this world, if you know that category. But I'm wondering now, as we sit here in 2025, who might be that other silent company that's going to really blow the doors off?

Laura Gilbert (05:21):
There's a few long bets that I might put a tiny bit of money on, but I think you're absolutely right. We really didn't. Not only did we not know ChatGPT was coming, but OpenAI didn't know ChatGPT was coming. DeepSeek was a real surprise. So anyone that tells you that they know what's happening next is probably, at best, fooling themselves, and in some ways,

(05:41):
that's, you know, a very useful thing to bear in mind when you're thinking about planning for the future: not overcommitting to a paradigm. There was a study around the same time on tech predictions, and it showed expert technologists being right roughly 20% of the time on those

(06:14):
longer-term predictions. And quite a lot of the time, and I used to see this a lot in quantitative finance, when people make a good bet and then they win big on it, they believe that that reflects on their genius rather than the fact that a certain statistical percentage of bets will win. So we need to be careful about that as well, I think.

Andrew Grill (06:32):
So a question for you, and I get asked this all the time: how do you stay up to date, how do you stay on top of what's happening, how do you gaze far enough ahead to make a prediction or understand what's happening next to advise your clients, versus just doing the Tetlock and putting the darts in the dartboard?

Laura Gilbert (06:47):
Great question. I definitely don't stay up to date. I think it's getting harder and harder in the world to be up to date with almost anything, given the increasing flow of information, and there are many things I'm interested in.
You know, everything from the sort of news about various wars

(07:08):
going on at the moment through to, you know, tech innovations or the latest band. It's harder and harder. So I think what I try and do is I try and be very, very well informed about anything that's going to directly affect what we're doing here. And then I try and have a network of very interesting people, yourself included. So I really like LinkedIn for this, where I am scanning once a

(07:29):
day to see what people are talking about, and I carve out two hours in my diary a week to go and try and learn something, to look things up, the things I've bookmarked that I haven't had time to read during the week. I try and go through them, but it would definitely be a lie to say I'm very well informed. I'm quite often surprised by things, and I think that's healthy, because life is very busy. Advances in technology have not made our working lives easier.

(07:52):
They've made them more complex, if you ask me. So I think getting that balance, where you feel confident enough in your own work and sphere and interested enough in everything else, is the best I can do.
Andrew Grill (08:03):
Could I put the word in your mouth: curious?

Laura Gilbert (08:07):
You certainly can. Yes, you certainly can. I think curiosity is the reason that we're all here and enjoying our work, and it matters to me a great deal to enjoy my work. So I think I really related to your curiosity. The future belongs to the curious. I was reading a book recently by Tim Harford, How to Make the World Add Up, and he sort of cites curiosity as the main way

(08:27):
to protect yourselves against misinformation and to make sure that you have the best knowledge and information. And I thought of you as I was reading it, actually.

Andrew Grill (08:35):
Well, it's interesting, the way you stay up to date. I have a similar thing. I also use LinkedIn as my newsfeed, because I follow and connect with people that are interesting. Some people I don't actually directly connect with, I just follow them. But what I do is use an app called Raindrop.io. It's a great bookmarking app, so if I see something on LinkedIn or somewhere, anywhere, I'll grab it and, like you, I

(08:56):
put some time in the diary to go through it. But because it actually captures the moment in time, it actually takes a snapshot. So if someone were to delete that, I would still have it, so I can go back to it. I can search through it. In fact, the book has all the links that are referenced as a Raindrop page, and that, I think, is very healthy, because I don't have time to stop and read everything, but I want to go back to it and find it, and sometimes finding that thing you

(09:17):
saw three weeks ago is really quite difficult.

Laura Gilbert (09:20):
One of the things I am finding, though, is that my worldview on LinkedIn is getting narrower, so this point about expertise becoming narrower: the things that the algorithms are showing me are, like today, almost entirely agentic-workflow based, and I need to figure out how to widen it out. So, you know, entering different prompts and sort of connecting more widely and picking up some of those wider interests would be very useful at this point.

Andrew Grill (09:49):
Well, a bit of a segue.
So if you look at your LinkedIn feed, my LinkedIn feed, you would think that everyone's doing agentic AI, and it's working brilliantly, and people are saving all this money. What I say, and I've got the benefit that I'm every week in a different organization, I'm in the trenches. The last two years I've been speaking to companies of all sizes, for example when we met at Dell and the customers that were there. What surprises me is that no one's doing agentic.

(10:10):
They're all, "I haven't even heard about it," sometimes. When I mentioned it at the Dell event, probably a few people in the audience had heard about it. Once I had a woman who said, "I've got my own agent and I'm playing with it." But what I see: there are four things that I think are holding companies back, and I'm wondering if you're seeing the same thing. The first is training. Not that you have to go and learn how to use ChatGPT, but people don't even understand what it can do.

(10:30):
So just an awareness is the first thing. The second is budget. No one is saying, in the companies I'm talking to, we're going to put a large amount of money aside next year for training and AI tools. They're just not doing that. The third thing is data, which is a perennial problem. The data isn't there. And the fourth thing is processes. The processes that they're doing today won't work any better under AI.

(10:51):
In fact, they'll be worse off. So are you seeing those sorts of blockers stopping people from being agentic all day, every day?

Laura Gilbert (10:58):
It's a very interesting one. So on your first point about the sort of people: I think a lot of people have picked up ChatGPT at this point. It doesn't teach you how to use it, and it looks almost deceptively simple. So, you know, what I've been really thinking about here,

(11:27):
building tools, particularly for political leaders, is that the way we wrap those large language models is to try and force it to ask the user what they really want. So if you go in, and I did this the other day, actually, sort of with a senior politician, you go in and you say to, say, Claude or ChatGPT, can you tell me about North Sea oil? It will come in and give you maybe a summary. But the second time I tried it, it told me about the commercial interests around it. And the third time it gave me the history, and none of those prompts were different. I hadn't given it any indication of what I wanted.

(11:49):
And if I was writing a paper and it had done that, then I might go, oh well, that's all that is, and I'm writing about the history of it. So it guides you in a way that's uncontrolled. So we're sort of trying to build tools that go the other way and say: you want to write a speech. Could you tell us about the audience? Could you tell us about the tone you're trying to hit, any

(12:10):
points you want to include, before it runs off and uses all of that energy?
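The wrapping Laura describes, asking the user for audience, tone and points before the model generates anything, could be sketched like this (a minimal illustration with made-up question names, not TBI's actual tooling):

```python
# Pin down the user's intent with clarifying questions, then fold the
# answers into a structured prompt before any generation happens.

CLARIFYING_QUESTIONS = [
    ("audience", "Who is the audience?"),
    ("tone", "What tone are you trying to hit?"),
    ("points", "Any points you want to include?"),
]

def build_prompt(task, answers):
    """Assemble a prompt from the task plus answers to the clarifying
    questions; unanswered questions are flagged rather than guessed."""
    lines = [f"Task: {task}"]
    for key, question in CLARIFYING_QUESTIONS:
        lines.append(f"{question} {answers.get(key, 'unspecified')}")
    return "\n".join(lines)

prompt = build_prompt(
    "Write a speech about North Sea oil",
    {"audience": "energy industry leaders", "tone": "measured"},
)
print(prompt)
```

The structured prompt (rather than a bare "tell me about North Sea oil") is what stops the model silently picking one of the many possible readings for you.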
And we're really seeing, I think, you know, quite a lot of that naive use case infiltrating. Consultations are another great one, and this is one of the first tools we built in government. When I built the AI incubator at the end of 2023, sort of all this work kicked off, and it was to do with

(12:31):
government consultations, and there was a need there. So when government goes and does a consultation, and they do 700 or 800 public consultations a year, it's a requirement, the average consultation returns, it's a new metric for you, about as much text as 400 Brexit withdrawal agreements. Traditionally you have a team of maybe 25 analysts. They'll work three or six months to go through that and

(12:54):
come up with a report. And so of course we built a tool that could run through and write that report in an hour and directly reference back to all of the comments. So people aren't excluded, but it's all in one place and you can find it. And it's a great example of answering the wrong problem with AI, because actually consultations are not a good way to get public opinions.

(13:15):
They're full of lobby groups, they're full of niche kind of people that maybe have the time to do this sort of thing. You're not getting a cross-section of people's genuine, unfiltered views at all. And, to be clear, it's still a good piece of work. It saves a lot of money while you're still doing it that way. But a lot of the time we're seeing people building AI to

(13:37):
replace workflows when what they should do is not do that workflow at all. You are perpetuating a system or workflow, an outcome sometimes, that is the wrong one by naively going and just putting technology on it.

Andrew Grill (13:52):
I always say to people, if you're looking at a process, ask why have you always done it this way, because AI won't fix that. It's interesting you talk about prompting, because often I'll give an example. Three weeks ago I was in a room of small to medium-sized businesses in the north of England, and they really hadn't had any exposure to these tools at all. One thing I gave them that became a light bulb moment is: ask it a question, but

(14:12):
then ask it to justify its answer, and what it will then do is show you its working, in a way, and how it was thinking. You can then go, that's the way I would think. And the light bulbs in the room went off; they thought, I didn't know we could do that. So there's just this basic understanding of using it like a search engine, rather than saying justify what you're doing and give me some answers that will challenge my thinking, to,

(14:32):
your point, then almost prompt me: well, what else do you need? What else do you need to better understand how to answer this question? If I was sitting next to an intern, if they're smart enough and they had that problem to solve, they'd be going, well, what's the audience and where is it and who are the key decision makers that are going to be there? When we see agentic AI, it will do some of the work, but

(14:53):
right now it gives you an answer and just sits back and says, well, done my job.

Laura Gilbert (14:57):
Yes, and it's really interesting to sort of watch people go through that journey when they do do that, because I think it's just so very unintuitive to people, the way it works. And you know, when you look at the reasoning as well, you get a different response if you tell it you're going to ask for its reasoning in advance.

(15:18):
So if you get it to generate, then you say, please give me your reasoning, it's faking the reasoning really. Now that doesn't make it not useful, because if you view something like ChatGPT, Claude, et cetera, you know, Mistral, if you view them as brainstorming assistants, fantastic. It doesn't actually matter if the reasoning is real, because, as

(15:38):
you say, you're challenging whether or not I would have thought of it that way. But when people take it at face value, they can be heavily misled and really make mistakes. So we need to change the thinking. And actually this is one of the things we're looking at in policy generation here: can you put in an option which just goes, no holds barred, just go nuts, come up with wild policies that would be

(16:02):
impossible to implement? That's very, very useful as a brainstorm for government officials thinking, actually we really need to get this done. What can we break and what can't we? And get out of the mindset of, well, nothing can be broken, you have to work within current constraints. So it's very useful for brainstorming, but I think that's not how people are mostly using it.

Andrew Grill (16:21):
Well, I crossed my fingers and did it live at this event I was at. It's a family-owned business, been going for 150 years. I basically tasked it to go and look at the next five or six years, where they should go. I didn't know whether what it was going to show would be valuable. So I put it on the screen and said, no holds barred, does this look sensible? And they looked at it and went, well, we hadn't thought about that, we hadn't thought about that. Again, they didn't know they could use it for brainstorming,

(16:42):
because, no holds barred, you're not wasting any time really.

Laura Gilbert (16:53):
But back to my first point about the training, just showing people, level-setting what they can do. The thing is, you know, the kind of people you really want to use this well are often also the kind of people who are very busy. And you know, if you're in a leadership position, you're finding a lot of people who are telling their companies, we care about AI, we're going to use AI.

(17:14):
You know, come on, everyone, AI, skill up, and they themselves are still wandering around with a notepad. And you know there's a culture signal there, but it also means they really don't have a mental model of what they're asking people to do and what's possible and what's not possible. And we find this across the board in technology, and particularly in government. It's fundamentally so easy for people to pull the wool over a

(17:38):
leader's eyes, pretend that something takes longer than it should or that it's more expensive than it is, or tell them a solution is easy when actually it's not, and they're going to pass that work on to somebody else who's going to have a really tough time. If you are not using it yourself, even to write you an agenda for the day or solve a minor problem, you really can't expect your company, I think, to implement it well.

Andrew Grill (18:00):
Well, that's my whole notion of being digitally curious, and you have to have that mindset at a very senior level. So back to your idea of a politician. If he or she actually did some pre-work to know what was possible, they could then brief their advisor to say, I've done a first pass, i.e. I know what I'm doing, you do the next bit and expand on that, and just set it off running. I think that will be a really nice way, because they're not

(18:22):
going to be experts at it, but just start off at 10.
Laura Gilbert (18:24):
We're working with a world leader at the moment whose use case is very similar to that. They want really high quality briefing outputs. They know what they want, and they want a degree of control over it, so that they're working with their staff rather than

(18:44):
waiting for the information to be filtered to them. And that can be, having worked with a lot of government ministers, very disempowering, because for the most part, historically, a government minister gets highly filtered information: evidence that goes into policy making, through to information about how the department's functioning and who's doing what, and whether or not there are any blockers and how severe those blockers are and at what point they're going

(19:06):
to learn about them. All of those things are very, very heavily gatekept. You give tools like this to very senior people and it allows them to challenge the people that work for them, and in the public sector, I think that's a good thing.

Andrew Grill (19:18):
I'm just thinking now about a reboot of Yes, Minister, and Sir Humphrey Appleby would be having apoplexy because Jim Hacker can actually do his own AI research and it cuts him out totally. That would be a lovely way to reboot that series in the age of AI. What do you think? Would those roles still exist?

Laura Gilbert (19:34):
I only watched Yes, Minister, I think, about 18 months ago for the first time, so when I went into Downing Street, I didn't have an interest in politics. It was, you know, a bit of a sideways career move, and it was terrifying how much it hasn't changed at all. There was even an argument about, effectively, open data that we're still having now, decades later, and what can we

(19:58):
put out to the public, and whether or not there's infrastructure to do that, and so on and so forth. I do think that, if we get it right, the adoption of AI, combined with better digital services, better data infrastructure, so on and so forth, could really meaningfully change the way that governments operate, and I think you are

(20:19):
seeing that in some of the world governments of the day. You know, you look at the way that Estonia sort of operates as a really standout, you know, forward-looking digital government, and I believe it's really changed their processes and their decision-making, and I'd like to see that roll out much more widely. I think if you have more empowered decision makers who are able to cross-check and research in a way that's

(20:43):
achievable and accessible to them, then you have a system with more inbuilt challenge, with much more accountability. And accountability, honestly, is very low in the civil service, in the UK certainly, and we could have something where actually, you know, you do get that value for money that we'd all love to see, sort of across the board. So fingers crossed, but that's very much what we're driving

(21:04):
towards and trying to help happen.

Andrew Grill (21:07):
So you touched on there, politics wasn't a career option for you. I'd love you to tell listeners how you got to where you are today. Your story from being at university to where you are today is fascinating. How do you summarise that in a few minutes?

Laura Gilbert (21:20):
Well, it's been a series of accidents, really, is probably the best way to put it. So I went off to do physics at university, as it's just what I was interested in. And then I wasn't quite sure what to do next. So I left and I got a job, and it was advertised and it didn't really say who it was for, it just said physicists needed, and I was slightly adrift and I'd done some very boring work

(21:44):
experiences at some places I won't name, so I was slightly despondent about my choices at that point, and it turned out to be in defence intelligence. So I spent a year doing that. Fascinating. I learned a great deal about techniques, and it's the first place I really did any coding as well. But it wasn't for me, for various reasons, not least

(22:06):
because when there was sort of an active bomb, no one in the building seemed worried about it, and I thought that might not be the world I wanted to live in. So it was a very interesting space. But I then sort of had a look around and decided to go back to university and be a particle physicist, and I really enjoyed it. I got a teaching job in Oxford very early in my career, much

(22:26):
earlier than I was supposed to be allowed to. Really enjoyed teaching the students. It was fascinating, and I loved being a physicist. But it got to a point about six years later where the government cut £80 million of funding, just very surprisingly, and they changed the funding council name from Particle Physics and Astronomy to Science and Technology, and it was a real

(22:47):
change in direction, and everyone lost their jobs, you know, people without tenure. Overnight they were clearing their desks out, and my supervisor very kindly said, well, don't worry, Laura, we've found a job for you. You're one of the lucky ones and you can go to Fermilab, which is in Chicago, and it's the middle of winter and it's like minus 50 degrees or something, so it's already not wildly sold,

(23:10):
and we'll pay you $800 a month, which was worth about 400 pounds at that point, and you get a free room in a student dormitory. I sort of went, I am nearly 30. Absolutely not, right?
So I thought I'd better do something else, and I went to the career service, classic, and I said, what job can I do that you

(23:30):
need to have done something like a particle physics PhD to be eligible for, so I haven't wasted my time? And they said quantitative finance, end of book. And so I applied for some hedge funds and got in, and had a deeply interesting time. It's a very different kind of science. So you know, with your particle physics you're looking for absolute proof of something, it is there or it isn't there, and

(23:52):
you've got to prove it within five standard deviations of certainty, incredibly high confidence. With finance, you've got a needle and you've got a very big haystack, and the haystack's on the back of a rickety camel and half of it's on fire. So it's a very different way of using data. You need information about market sentiment, how people

(24:12):
think. You need to think about the interactions between government announcements and earthquakes and these kinds of financial instruments. And I learned very different techniques, the first time I really used AI, actually. The third company I went to was a high-frequency trading company, and it was interesting because it was run by these people who had previously sold their last firm

(24:35):
for 200 million, and they'd gone off and got venture capital to come and do it again, and the venture capitalists were confident because, you know, they'd done so well. And so they said, well, what we're going to do is we're just going to hire really high IQ people, that's mainly the criteria, and then put them in groups. And you know, it's five-hour IQ tests with a New York psychiatrist to get this job, and they put you in a small group of three, and these teams of three went off and did things

(24:57):
and we'd got this project trying to use genetic algorithms. So, the idea being that you can't really predict what high-frequency markets do, and you're going to build these algorithms, and then you run them on the past data, and the ones that succeed you breed, you literally swap code over and try the next generation. The rest you kill. It doesn't work at all.

(25:21):
Wow, it's absolutely awful. And in fact, mostly these experiments weren't working. The company was losing money.
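The breed-and-kill loop Laura describes is a classic genetic algorithm. A toy sketch (purely illustrative, with a made-up fitness target, nothing like the firm's actual trading code):

```python
# Toy genetic algorithm: each "algorithm" is a bit string, fitness is
# how well it matches a target, the fittest half survives and is bred
# by crossover ("literally swap code over"), the rest are killed.
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def fitness(genome):
    # Higher is better: count of positions matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # Splice two parents at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(30):
    # Breed the fittest half, kill the rest.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    children = [crossover(random.choice(survivors), random.choice(survivors))
                for _ in range(10)]
    population = survivors + children

best = max(population, key=fitness)
print(fitness(best), "out of", len(TARGET))
```

On a toy matching problem this converges easily; on noisy market data, as Laura says, the bred "winners" are mostly just overfitting to the past.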
So, um, I sort of worked for three companies. Well, two of them collapsed, and the third one tanked pretty dramatically while I was there. It was a curious statistical anomaly and nothing to do with me. And I learned a couple of things. I learned lots about different methods for research and data, et cetera. And I learned that I didn't really like finance, and a lot of

(25:43):
it was because I wasn't very proud of it. I'd meet people for dinner as a particle physicist, and they'd say, that sounds interesting, and I'd go, oh, it is. In finance, I'd have to tell people I was in finance, and I didn't feel good about it.
So my friend was doing thismedtech startup and he had
10,000 pounds and he had akiller app in his mind and he

(26:04):
went to this tech company, an app development firm, and said, can you build this app? I've got 10,000 pounds.
They said, absolutely, yes, we can. They gave him back the app and it didn't turn on, it just crashed.
So he said, well, it doesn't turn on. And they said, well, give us another 10 grand and we'll make it turn on.
So I said, well, I'm pretty sure I can build this app.

(26:24):
He doesn't have any more money.
You know, and whilst I was in finance, on evenings, weekends, on the train, et cetera, I was building these apps, and I didn't think it was a very good idea to start with, but they were very simple and we measurably improved people's lives. It was targeting people who were homeless, people with multiple and complex needs, people who were unable to communicate, with very severe disabilities, and, you know, I won't go into details, but you could literally see the impact on people's lives and

(26:47):
it was wonderful.
So after this third company, there's a long story, I had an operation.
I was unwell for a while, and by the time I went and was offered another job in finance, I realised I just didn't want to do that. I joined this startup and did that for 10 years.
We built it up, we took it through to a small-medium enterprise.
It was sold.
We exited March the 2nd, 2020, with the idea that we would

(27:09):
go and do a bit of consulting, and of course that was just as COVID hit.
So a few months later, having handled the idea of the children being at home all the time and, you know, got through summer, I saw this job advertised in Downing Street, and I'd had some glasses of wine and thought it'd be a good idea to throw a CV at it, and was absolutely astonished to get it.

(27:30):
To be honest. It was the Director of Data Science role, and I hadn't realised I was a data scientist until that point.
And the rest was, well, very interesting. I'd gone from coding in a basement, because I was CTO of the medtech company, but very hands-on

(27:58):
and, you know, without a big staff, through to, well, dressing like a grown-up for one thing, and walking into the Prime Minister's office every morning was quite a culture shock, and my job became more about learning how to persuade and influence people and building an amazing team.
I mean, they are phenomenal. To do that modelling, but actually to try and get people to listen to it, which is where the TELOC study comes in.
How do you get people who are very entrenched to be able to

(28:19):
change their mind is one of the biggest problems.
So, yes, I did that until earlier this year, and then I joined the Tony Blair Institute to do something very similar, but with worldwide impact, and I'm thrilled to be here.

Andrew Grill (28:29):
So what you built for Number 10 in terms of data science, how did that? I mean, when you started there, generative AI probably wasn't really a thing. While you were there, it became a thing. How did that impact what you were doing there and what you're doing now?

Laura Gilbert (28:43):
We were already using the early versions of large language models, actually, in a few projects.
There was something particular that I have a bee in my bonnet about, which is maternal death statistics, and, as is right and good, you cannot, in Number 10, look up people's health records, so you can't just go and do a research project.
What you can do is pull out the publicly available incident

(29:05):
reports, and I think about 70% of all of the incident reports where people come to harm or nearly come to harm are actually in maternity. And we're still in a position where, for nearly one in 10,000 babies born, the mother dies.
So if you know 100 women, and they each know 100 women... And you're four and a half times more likely to die if you're a Black woman.
So I felt there was quite a lot to do there.

(29:27):
So we were doing sort of LLM experiments when ChatGPT hit, and it was suddenly something really different.
So we were very well positioned and already knew how to kind of work in this space.
What changed was, well, the first thing that happened was, I think, six people immediately declared themselves the new government Head of AI, and there was quite a scramble because, you know,

(29:49):
suddenly it was interestingSuddenly the data scientists
might get invited to the parties, um.
So it was uh, you know, it's apoint at which what we were
doing was suddenly of interestto people, um, and there was a
this massive scramble for peopleto try and position and get
money really and sort of come upwith ways to capitalize.
What happened to me was sort oftwo things in a row.

(30:11):
The first one was that I met Geoffrey Hinton very early on, who terrified me, so that sort of kicked off this piece of work to try and get everybody quite worried about the safety aspects, which obviously then, you know, there was then the summit and there was the safety taskforce, etc.
And following that, a real acknowledgement that actually we needed to do something much more practical, because we were

(30:34):
worrying about the risks of other people doing things in AI, and we were not worrying about the risks of us not doing it.
And it became very apparent. You know, if you are running a bank and you don't want to be hacked, you find yourself the best hackers and you hire them.
We needed to do that with AI. We needed to find the best people and get them in the building so that they could help with the risks.

(30:55):
But also the NHS, you know, the funding that's available, particularly for the NHS and other services, they're really under threat.
You can't afford to keep running them indefinitely, not the way we do now.
So if we don't pick up our game and start deploying these kinds of technologies to deliver preventative healthcare that's

(31:16):
less expensive, to get people through the system more quickly, to save money in administration, et cetera. If we can't do that, then we are just going to decay.
So I built this AI team up, and if you're interested, look at ai.gov.uk; they're doing many interesting projects.
We adopted this, and I feel very strongly about this.

(31:36):
We adopted a mantra, which is radical transparency.
You know, if you're building a product, the code goes out in the open.
There are transparency reports.
The team writes blogs on what they're doing and why they're doing it, and shows excerpts and shows videos, so the public really know it is designed to be in their best interest and not, you know, to restrict benefits or whatever. And it really is.

(32:00):
And the other thing we really wanted to do, and again, this is still very important to me now, was I wanted to make government more human for people. And it sounds really counterintuitive, but I would go around and say we're going to make government more human with AI. But it's really, really important to think about how you're using this technology and what you're giving people.
A really good example of this: if you write to the Department

(32:22):
for Work and Pensions, a handwritten letter, it will take 50 weeks, five-zero, for somebody to read it.
If you are handwriting in, that's a certain cross-section of people.
They might be very digitally disenfranchised.
They probably don't have access to a lot of services, and some

(32:43):
of them are highly vulnerable, and what can happen is, after 50 weeks, when somebody gets back to them, they're not there anymore; they haven't survived. And that gives me chills every time I think about it.
So there's a lovely piece of AI that will read those letters and look for vulnerability, and anybody who appears genuinely vulnerable, somebody will call them right away.
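The triage idea can be sketched in a few lines. Everything here, the cue list and the function, is hypothetical and deliberately simplified; the actual system Laura describes would use OCR on handwritten letters and a trained classifier rather than keyword matching.

```python
# Hypothetical vulnerability-triage sketch (not the DWP's real system).
# A letter showing cues of vulnerability jumps the 50-week queue.
VULNERABILITY_CUES = {"evicted", "homeless", "can't afford", "carer", "hospital"}

def triage(letter_text):
    """Return 'call_now' if the letter contains vulnerability cues,
    otherwise route it to the standard queue."""
    text = letter_text.lower()
    if any(cue in text for cue in VULNERABILITY_CUES):
        return "call_now"
    return "standard_queue"

print(triage("I have been evicted and have nowhere to go"))  # call_now
print(triage("Please update my address"))                    # standard_queue
```

The design point is the routing, not the detection method: flagged letters trigger a human phone call right away, which is what makes the service feel human.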

(33:04):
And you've made a government service human, and I want to do that sort of worldwide, I think: using AI to enable people who are doing caregiving to give the care, to enable teachers to really focus on children's social development, to be able to diagnose children much earlier when they have the sorts of conditions that benefit from early intervention, and to keep

(33:25):
them safe from harm, to give people jobs that they actually enjoy doing.
One of the things about the health and social care sector is, roughly, I think, one in 20 people in the UK work in that sector, and that's not one in 20 workers, it's one in 20 people, including

(33:47):
children.
So if you can come up with improvements in the working lives of those people, so that they are happier and healthier and in a good mental-health space, because it's very stressful work, they go home and there's a knock-on effect on their families.
There's a knock-on effect on their communities.
The kind of impact you can have by making those jobs more

(34:08):
rewarding, less stressful and draining can impact the whole country, really, in one go.
So I feel very strongly about this, and this is one of the things that we've really focused on: not just automating things, but trying to give people a human, faster, kinder service.

Andrew Grill (34:24):
Looking ahead, what do you see as the next big
thing in AI?
Not necessarily technology, but where we're going to use it that we may not be thinking about now.

Laura Gilbert (34:32):
I hate these ones because it's pure guesswork.
I don't know.
I'll tell you what I really think.
I think that there's almost a bifurcated future ahead, and in one of the spaces we have a world where some people are really enabled and really empowered and, you know, really

(34:53):
supported by AI and technology and do very well out of it, and other people are left behind and the inequality widens.
More people don't have jobs, more people move into the billionaire space.
Or there's a world in which we can take this kind of technology and we can narrow inequalities and we can give everybody a basic standard of care, probably a basic standard of income sort

(35:15):
of comes into that, earlier interventions when they're unwell, earlier interventions when they need mental health support, all those sorts of things, and make their lives easier and safer.
And it's not a prediction, I think it's a choice, and I care very deeply about that.
So what I really want to see from people, tech companies

(35:37):
through to laymen who are putting pressure on service providers, I want to see people send a signal that they care about the second world coming true.
So I couldn't do a good job of predicting the future, but I can tell you the future that I want and that I feel accountable for.

Andrew Grill (35:56):
I think what you're saying is the future we
need is a world of ethical AI.

Laura Gilbert (36:00):
I think that's exactly right.
You've summarised that much better than I did.

Andrew Grill (36:02):
We're almost out of time.
We're up to my favourite part of the show, the quickfire round, where we learn even more about our guests than we know already.
Window or aisle?
Window, always window.
Your biggest hope for this year and next?

Laura Gilbert (36:16):
I am building a team here, and I built two teams in government recently who I passionately love and respect and adored working with, so my biggest hope is that we succeed in building a very similar team here.
It's going well so far. But if you walk into work every day with a smile on your face because of the people you're working with, that's a great day.

Andrew Grill (36:32):
I wish that AI could do all of my... Laundry.
The app you use most on your phone? WhatsApp.
The best advice you've ever received?

Laura Gilbert (36:37):
Two pieces of advice, if that's all right.
Emily Lawson, who used to run vaccine delivery, is incredible.
She told me that if you are in a job where you feel angry more often than you feel optimistic, you should leave it, and I think that's great.
And the other one is not quite advice, but Alex Chisholm, Sir Alex Chisholm, who was the Permanent Secretary in the Cabinet

(37:01):
Office for a while while I was there, he told me that to succeed in the civil service you have to be relentlessly optimistic, and I think that's true of life, and I engraved it on a flask: relentlessly optimistic.
What are you reading at the moment?
My favourite book in the world is The First Fifteen Lives of Harry

(37:21):
August, and I'm just rereading that again quickly.
Who should I invite next ontothe podcast?
Ed Dominguez at ServiceNow is a very interesting man.
He used to work in government as a special advisor and he's now working in public policy.
How do you want to be remembered?
I'll tell you what: my father answered this question shortly before he died, and he simply said, I've had fun.

(37:42):
I think I would like to think that of myself. And as for how people remember me, I would like them to think that I absolutely always tried my very best.

Andrew Grill (37:52):
So what three actionable things should our audience do today to understand how we can use AI for good?

Laura Gilbert (37:59):
Practise it yourself.
If you can't understand AI, you can't understand how to use it for good, so you need to get your hands dirty, and try and break it is my top tip.
Go and give ChatGPT logic puzzles.
That was fun.
Understand what's gone wrong before.
Very often we have people who think that the answer is about regulation and publishing ethical guidelines.
It's not.

(38:20):
When we've got this wrong before, it's been lack of professionalism, people who didn't think to check whether or not it responded in the same way to white people and Black people, for example.
And I think, thirdly, it probably is about your intent.
It won't happen by itself.

(38:40):
Adopting AI for greater good is not something that is a side effect of developing this technology.
It has to have people who care about it, as we discussed.

Andrew Grill (38:57):
So care about it, demand it, get involved.
Laura, a fascinating discussion.
We could have talked for hours.
How can we find out more about you and your work?

Laura Gilbert (39:00):
So I definitely recommend looking at my previous team, ai.gov.uk, the Incubator for Artificial Intelligence, because they are well advanced and doing amazing things.
Follow me on LinkedIn.
We will have some announcements coming up; we're building our first team and our first products now, and it should be fairly public very soon.

Andrew Grill (39:17):
Laura, thank you so much.
I hope we speak again on many,many things.

Laura Gilbert (39:21):
Absolutely.
Thank you so much for inviting me.

Voiceover (39:23):
Thank you for listening to Digitally Curious.
You can find out more about Andrew, his keynote speeches and brand partnerships at actionablefuturist.com.
You can order the compendium book to this podcast at curious.click/order.
Until next time, stay curious.