
March 6, 2025 · 25 mins

YS Chi speaks with Philippa Scarlett, RELX's global head of government affairs. As a child of diplomats, Pippa has a unique background which gives her cross-cultural competency in a role that requires global thinking. She shares her insights on how different governments think about AI and the opportunities and challenges they face when leveraging AI and generative AI for their own use. What are the different governmental approaches to AI regulation? How best to approach regulation when facing such a pace of technological change? And in the absence of regulation, what roles can companies play in advancing innovation and responsible development and use of AI?


Episode Transcript

(00:08):
Pippa, welcome.

Philippa Scarlett (00:10):
Thank you, YS.

YS Chi (00:11):
Yeah, before we dive in, can I ask you to provide us a bit of a personal introduction of yourself, because you have one of the most unique backgrounds, and your journey up to this point has been fascinating.

Philippa Scarlett (00:23):
Well, we share a unique background, YS. I am a child of diplomats, as are you. Mine were in the United States government. So, I grew up all over the world. My parents joined the US Foreign Service when I was about two years old, and as a result, I've spent about half my life in the United States and the other half in places including Cameroon, Brazil, the

(00:49):
Philippines, Yugoslavia… having to leave Yugoslavia when the war in Bosnia started, care of Uncle Sam bringing us out. And then my parents went on to serve in Ireland. So, I grew up all over, and as you shared, my role is a global one, so I feel I

(01:12):
can bring to bear that experience and cross-cultural competency in a role that requires all members of the team to think globally.

YS Chi (01:24):
And now you are based in…

Philippa Scarlett (01:26):
Atlanta, Georgia.

YS Chi (01:27):
That's it, yeah. But you're still traveling quite a bit to go work with your teams across the globe.

Philippa Scarlett (01:35):
Yes

YS Chi (01:35):
That’s great. When one looks at your background, it's quite interesting to see the jumping back and forth between public service and private sector. What triggers these switches?

Philippa Scarlett (01:51):
Yes, I've had kind of a non-linear career, perhaps. I think because of my parents' public service, I've always had an interest in serving and making an impact at scale, beyond just my immediate community. And if I'm fortunate enough to be part of an organisation with leadership that I admire, I can do that. That has been the

(02:17):
animating principle, I would say, in my different career moves. Yes, I've had the opportunity to serve in government, both as a career civil servant at the US Department of Justice, and then later as a political appointee, and also in the White House. But also in the private sector.

(02:38):
Part of the reason to go to the private sector is, I think, that you can effectuate change at scale. If you have an organisation that has principles of integrity and ethics and serves, in this case, its customers, you could actually

(03:00):
make quite a difference.

YS Chi (03:03):
Probably quicker…

Philippa Scarlett (03:03):
Yeah, and a lot quicker. And that's an important point. Democracy takes a long time to do things appropriately. But the private sector, the largest employers, they want to make a difference. They can do so as a matter of internal policy and through what products and services they seek to develop.

YS Chi (03:23):
Well, that rich background is really wonderful for tackling the issue at hand, right? AI. Both as a practicing lawyer, but also having been part of a leading consumer tech company, and now the leading information tech company. As we said, the public sector is not always the most

(03:49):
innovative instigator of emerging technologies, perhaps with the exception of one thing, which is the internet.

Philippa Scarlett (03:55):
Right.

YS Chi (03:56):
That was absolutely invented inside the Department of Defense. But aside from that, government is really not the instigator of new technology. How has this been different with AI, since it's really grown up outside, and government is trying to figure out where it should stand?

Philippa Scarlett (04:17):
Yes, big distinction with the internet, as you mentioned. All of the innovation is happening, or the majority of the innovation is happening, within the private sector. There may be some exceptions in China that we could speak to, but looking at the West, all the major large

(04:38):
language model developers, and AI developers and deployers, are in the private sector. So that's a really exciting thing, because there are no formal rules on how to leverage this technology and its use cases. And that's why it's really important that

(05:01):
businesses think about it, not waiting for governments to decide, because that will take some time, but also figure out what's the right thing: how best to use this technology and innovate and improve our society. So yes, I think the locus has been in the private sector, and the governments, mainly in

(05:22):
the West, are really playing catch-up. They don't really understand it. They can't necessarily always attract the same talent within government as the private sector. So, there are some challenges ahead.

YS Chi (05:36):
So, one of the interesting questions is, how will government use Gen AI capabilities for its own purposes, not just as a regulator, but for the vast amount of services it provides to citizens…

Philippa Scarlett (05:51):
And data that it has…

YS Chi (05:53):
And data that it sits on already.

Philippa Scarlett (05:55):
So, this is a really tricky area. There's no clear answer yet, but clearly the implications for military or national security are quite different from the provision of government services, like benefits. Exactly… One use case that you could imagine that governments would be interested
that you could imagine thatgovernments would be interested

(06:18):
in harnessing is understanding weather patterns, and how to use these models to protect against major storms and the like. You could see the very easy applications beyond the others that are more obvious in terms of operational efficiency.

(06:40):
But there in government, especially in democracies, the need for human oversight is even more pronounced. The government is answerable to the people, and the services are for the people. Therefore, machines shouldn't ultimately be making the decisions.

YS Chi (06:59):
Especially if that service is unique, without
private sector competition.

Philippa Scarlett (07:04):
Correct. So, pensions and health benefits or emergency care. I think the stakes are even higher in government use of generative AI. But governments are grappling with, for themselves as users of this new

(07:27):
technology, what the framework should be.

YS Chi (07:31):
And in some countries, governments will try to do it themselves. Other places will see governments looking for private sector solutions for them as well.

Philippa Scarlett (07:41):
Right, and there will be partnerships too. There can be places where the private sector can help the government in its own work.

YS Chi (07:49):
Like education for example.

Philippa Scarlett (07:51):
In education for sure.

YS Chi (07:53):
And health care.

Philippa Scarlett (07:54):
Health care.
I think that's another frontier.

YS Chi (07:59):
Different governments are obviously approaching this differently. You mentioned China being one of them. Europe is approaching it slightly differently from the US as well. Certain ownership issues as well. Can you describe for our audience some of the most visible differences between these different governments that are trying to get some handle on

(08:21):
this?

Philippa Scarlett (08:22):
The European Union with the ‘EU AI Act’ was
the first out.

YS Chi (08:27):
Absolutely jumped out of the gate.

Philippa Scarlett (08:28):
As the largest jurisdiction or economy,

YS Chi (08:32):
As they did with GDPR.

Philippa Scarlett (08:34):
As they did with GDPR privacy. That is
significant. Some say, they'redoing that because most of the
major companies that aredeveloping this technology
aren't there, so it's easier toregulate. But nonetheless,
they've done it, and it's quitean achievement. Now there's the
process of bringing that intoforce, which is underway in

(08:57):
Brussels. They've taken a very hands-on and consultative, I would imagine, approach. But the flip side of that, which you will hear on this side of the Atlantic in the US, is… well, the reason why there are no AI companies in Europe, quote unquote, is because of so much regulation. How to find

(09:21):
that balance, to build some frameworks and structure without it being wild or totally regulated.

YS Chi (09:30):
Right. And you see that actually, within even the EU,
right?

Philippa Scarlett (09:33):
Yes.

YS Chi (09:34):
You have more of the ‘let's figure out innovation’ mindset, à la France, and then the opposite, and I'm not going to name the country.

Philippa Scarlett (09:44):
Yes, for sure. I think the UK is also trying to figure out, post-Brexit, what role it could play, perhaps as an intermediary between the European model of more aggressive, shall we say, efforts at regulating, versus the US, which, to your first question, I'll speak to that.

(10:05):
Obviously, there's been very little regulation, and you would say largely by design. Colorado is an important exception: the first US state to seek to regulate artificial intelligence, which happened this year.

YS Chi (10:20):
When I visit different countries, in particular different language zones, since we're talking about large language models, I see an effort by different governments to make sure they don't fall behind by creating large language models in their own language and their own content. Do you see any part

(10:41):
of the world that is particularly noteworthy right now in making that investment?

Philippa Scarlett (10:47):
Outside of China?

YS Chi (10:48):
Right, outside of China, and outside of the EU and US.

Philippa Scarlett (10:52):
You would know better, YS. What would you say?

YS Chi (10:56):
I’m impressed with the model that is being built in the Arabic language, for example, with Arabic content. I'm sure, I don't know, but there is one in the Russian language, I'm certain of that, although I don't have access to that. I'm seeing some of the Portuguese language effort in Brazil as well. But in each country, I do see them not wanting to make this an English

(11:19):
language or romance language driven and dominated. I do see that. This, then, is actually an opportunity for us. We are obviously a strong content company, but also a very strong analytics company. Tell us some of the things that you see as an

(11:40):
opportunity for us, when we get in front of these regulators, to say, “Hey, don't go too far here. Don't go too far there.”

Philippa Scarlett (11:47):
100 percent. RELX, as a company, has been using algorithms or early versions of AI for about a decade. This is not brand new for us. And we have undertaken for ourselves to develop organising principles about the responsible development and

(12:08):
deployment of AI, which we published a couple of years ago, ahead of regulation. I think those are important kinds of framework structures that we've built, hand in hand with our innovation. That's exciting. The other piece is where… our customers are in law and in science and in medicine, for

(12:32):
example, and so our generative AI products are for customers that recognise the importance of veracity, of trustworthiness in the data and therefore the outputs of the generative AI. I think that uniquely positions our company. In the law, you

(12:54):
need reliable information. That's how businesses and people's lives can be decided, likewise in health. All of the efforts that we've undertaken to curate the underlying information over many, many years, I think, put the value of our products and services at

(13:18):
an even higher level. And they're enterprise, not consumer facing only.

YS Chi (13:23):
That's right, we are pretty much a B2B company,
right?

Philippa Scarlett (13:27):
Yeah, in those areas.

YS Chi (13:28):
Yeah, we are. I think you give me an opportunity to jump to the next question that I really wanted to ask, and that is… we've set out principles of responsible use of technology, responsible use of AI and data analytics, for a long time,

Philippa Scarlett (13:47):
Yeah.

YS Chi (13:48):
How has that affected our ability to deal with these
upcoming regulations, upcomingscepticisms, upcoming conflicts
between different players inthis field?

Philippa Scarlett (14:03):
I think it gives us credibility, because we're not just waking up to what the risks are and what our North Star is with this technology, recognising, of course, that it will continue to evolve. We don't have a crystal ball for all of the future use cases, but we have undertaken and publicised

(14:26):
that we think hard about this. Not because there's a regulation, but because that's the responsible thing to do for our customers and, ultimately, for the general public. I think it really enhances our credibility that we undertook this, not because of an external expectation, but out of our own drive, and to serve our customers.

YS Chi (14:50):
Can you give us some examples of really exemplary,
responsible use of AI, among theproducts we have now already
launched?

Philippa Scarlett (15:03):
Oh gosh. Well, in the legal area, I would say, we've emphasised privacy in our search queries, because obviously a lawyer is doing research to represent his or her client, and those queries may tip someone off on the strategy or particular aspects of a

(15:27):
client. We have certain principles there to protect against or to maintain privacy interests. In the health space, with ClinicalKey AI. Another example would be, well, transparency. It's a big thing.

YS Chi (15:47):
Yeah, it is.

Philippa Scarlett (15:49):
In the consumer facing generative AI products, for example, there are some products that seek to do this, but generally you don't know what is the basis of the summary when you query a question.

YS Chi (16:06):
It is a black box.

Philippa Scarlett (16:06):
It's a black box.

YS Chi (16:08):
Ours is not.

Philippa Scarlett (16:09):
Ours is not. And not only is it not a black box, but we empower the clinician who's using ours to find the most relevant information. It's transparent, and we help surface research or other citations that might be

(16:30):
helpful in that clinician’s work.

YS Chi (16:34):
So, as government sits now and watches everything unfold so quickly around it, and billions and billions of dollars are being invested by competing tech companies trying to be the ‘winner take all’ winner, what are the things that governments

(16:55):
need to think about right now as they think of, quote unquote, regulating it?

Philippa Scarlett (17:02):
I think the problem with regulation is that it's a moment in time, but it can't anticipate all the things that will happen in ten years or five years, and with the pace of technological change, there's a real risk to a really rigid approach. I think the strength will be in principles, a principle-

(17:26):
based framework, with follow-up, that will enable the technology to grow, but in a way that is not harmful to the general public. It makes sense that one starts in the security area, especially where the stakes are particularly high. I think the best regulation will be one in dialogue with the developers and

(17:50):
implementers of AI.

YS Chi (17:53):
It's a difficult balancing act.

Philippa Scarlett (17:55):
It is.

YS Chi (17:56):
You don't want to stop them from making great progress,
but on the other hand, we don'twant to hide under the carpet
some of the impacts that couldhave on humanities.

Philippa Scarlett (18:05):
That's right.

YS Chi (18:07):
One of the capping elements of our responsible use
of AI principle is humanoversight

Philippa Scarlett (18:18):
Yes.

YS Chi (18:21):
How do we explain that?
How do we explain the concept ofhuman oversight, and why is it
so important not to forget thatpiece?

Philippa Scarlett (18:30):
We serve people. We serve customers that
have direct impact on humanlife.

YS Chi (18:37):
Yes.

Philippa Scarlett (18:37):
and property…

YS Chi (18:38):
You mean, like doctors and nurses…

Philippa Scarlett (18:40):
Doctors and nurses and lawyers.

YS Chi (18:43):
Law enforcement.

Philippa Scarlett (18:43):
Law enforcement, financial institutions. A business can thrive or not. In that case, it's really important. Data analytics can help uncover insights, but it shouldn't be the decider of the decision

(19:04):
based on those insights. That's where the human is important in democratic societies. Human beings are the ones who are accountable to the voter, and machines are not. If we're creating these machines, there needs to be human oversight, particularly if you think about our use cases. Human oversight

(19:27):
is critical.

YS Chi (19:29):
I don't want to create a crystal ball situation here, but
if we move this time frame out afew years, how differently do
you see government's role beingin the world of this rapidly
changing world of AIdevelopment? Whether that's Gen
AI or predictive AI, or whateverthe new AI variations would be.

Philippa Scarlett (19:53):
I think there is momentum now for governments to be more involved. They've recognised both the promise and some of the deep risks. I would anticipate we will see a significant uptick in government interest and engagement on

(20:15):
generative AI in particular. But just as soon as they do that, there'll be another technology…

YS Chi (20:20):
Of course.

Philippa Scarlett (20:21):
Right around the corner is quantum computing, and what that will mean for encryption and all kinds of other things. It's always one step behind, at least, but I would anticipate in 2025 there'll be a lot more government attention in all major markets, certainly in the

(20:41):
West, to focus on AI.

YS Chi (20:46):
Do you think this is something that will necessitate
global alliance?

Philippa Scarlett (20:54):
Yes, I think this is one of the examples where governments and the private sector of countries that are already in alliance will be critical. We see this in the UK and its relationship with the US, a long-standing business and

(21:16):
political and intelligence and military relationship. But I would anticipate, for example, that alliance thinking about regulation in a way that also aligns with the political values of those countries.

YS Chi (21:33):
And what can we advise smaller countries who may not
have the same kind of resourcesand advancement in technology to
do? As far as governments areconcerned.

Philippa Scarlett (21:43):
Well, I think this is an important space for
Global South. Well, we knowBrazil as the host of the G20
this year and also of COP nextyear.

YS Chi (21:56):
That’s right.

Philippa Scarlett (21:58):
And India, also an important major economy
and player here. I think we…quote smaller countries. I mean,
India is not a smaller country.

YS Chi (22:10):
Right, nobody can say that.

Philippa Scarlett (22:12):
But in terms of where the major companies reside, I would imagine those governments will have more to say. Brazil is looking at an AI bill as we speak, obviously informed by what may be happening in the US or in Europe, but I would anticipate they will have a bigger voice.

(22:34):
There are a lot of people.

YS Chi (22:35):
Any wish list from government on this issue over
the next 12 months?

Philippa Scarlett (22:39):
I think there is a recognition that this takes special subject matter expertise that, for some of the reasons that we discussed earlier, doesn't reside in the government currently. So, you will see governments trying to bring that know-how in, because they don't understand it, and therefore,

(23:01):
what's the best way to regulate it? How is it even possible you could come up with ideas in a kind of vacuum? But obviously they would like it to be effective, and so that requires some knowledge of how this technology actually works.

YS Chi (23:20):
It will be indeed interesting to see how
government manages to leave thedrivers of this progress as
independent as possible, butcertainly taking certain steps
to avoid recklessness

Philippa Scarlett (23:37):
Right.

YS Chi (23:38):
And that balancing act is just not a science. Are there
a sine qua non, noncompromisable principles that
you would like to leave theaudience with that, we believe
we need to advocate? All of us,all 35,000 of us, on this issue

(24:00):
of generative AI?

Philippa Scarlett (24:01):
Transparency and human oversight. Those are
the, I think, probably thebiggest keys. What are we
talking about? What went intothis box? And then what will
happen with the outputs? Humansneed to be involved. They're the
ones accountable in our systemsof government.

YS Chi (24:22):
Transparency and human oversight. You've heard it from
Pippa.

Philippa Scarlett (24:26):
Yes.

YS Chi (24:27):
Thank you so much for joining us today, Pippa. This
has been very informative andlook forward to seeing how
governments do handle theirparticipation in this journey
with the private sector.

Philippa Scarlett (24:42):
Thanks so much YS.