
October 14, 2024 16 mins

An interagency effort has crafted a document to support the entire artificial intelligence research ecosystem, from foundational discoveries to societal applications. Jillian Mammino, a contractor at the U.S. Department of State's Bureau of Oceans and International Environmental and Scientific Affairs; Mary Beech, director of workers and technology policy in the U.S. Department of Labor’s Office of the Assistant Secretary for Policy; and Craig Jolley, a senior data scientist in the Bureau for Inclusive Growth, Partnerships, and Innovation at the U.S. Agency for International Development discuss the Global AI Research Agenda.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:03):
This is the Discovery Files podcast
from the U.S. National Science Foundation.
The Global Artificial Intelligence
Research Agenda is a new document
crafted by an interagency effort.
It provides guidance for funders, researchers, and publishers to support
the entire artificial intelligence research ecosystem,
from foundational discoveries to broader societal applications.

(00:27):
We're joined by Jillian Mammino, a contractor at the U.S.
Department of State in the Bureau of Oceans
and International Environmental and Scientific Affairs;
Mary Beech, director of workers and technology policy in the U.S.
Department of Labor's Office
of the Assistant Secretary for Policy; and Craig Jolley, a senior data scientist
in the Bureau for Inclusive Growth, Partnerships, and Innovation at the U.S.

(00:49):
Agency for International Development.
Thank you all for joining me today.
Thank you for having us. Good to be with you.
Thanks for having me.
Jillian, I'd like to start with you.
With the State Department releasing this document,
I'd like to ask you the big question:
what is the Global Artificial Intelligence Research Agenda?
The Global AI Research Agenda was announced and released

(01:11):
by Secretary Blinken on the sidelines of the UN General Assembly.
And if you're interested in reading or downloading
the document, you can find it on our State Department website.
The Global AI Research Agenda delivers on President Biden's executive order
on the safe, secure, and trustworthy development
and use of artificial intelligence, and specifically Section 11

(01:34):
(c), on strengthening American leadership abroad.
It includes principles, guidelines, priorities, and best practices aimed
at ensuring the safe, secure, and trustworthy development
and adoption of AI, and additionally addresses
AI's labor market implications across international contexts
and includes several recommended risk mitigations for these topics.

(01:59):
I think, importantly, it emphasizes the need for a holistic approach
to AI research and development, which considers both
the technical advances of AI systems and their applications,
and the resulting interactions between AI technologies
and the individuals, communities,
and societies within different global contexts.

(02:20):
Mary, part of the order looks at international labor market implications
and at how technology can impact behavior and cultures.
Can you tell us a bit about how AI is impacting the workforce?
Absolutely.
And there's, I think, a lot to unpack in that question.
We're seeing, as AI is implemented more broadly, that its impact on workers

(02:41):
is already far-reaching.
It's complex, and sometimes it's even contradictory.
So, you know, it's important to really, I think, unpack what it means
for different people in different places who are performing different jobs.
We know that there'll be a lot of variation.
There already is, across industries,
across jobs, tasks, geographies, and different demographic groups.
But also, I think it's important to really recognize

(03:04):
that outcomes will depend on the choices that we all make today.
And while there might be a lot of uncertainty
about how these technologies
will evolve over time, it is really critical to take steps now
so that we're making sure that we are setting up
a situation where workers will benefit from these changes
and that they're protected both today, but also for decades down the line.

(03:26):
And I want to emphasize here, we also have a keen eye towards
making sure that we're focused on those who are most vulnerable
during this time of transition and continued change.
Right.
There's a lot of fear among the general public right now,
as people are kind of figuring out what AI is.
Absolutely.
As generative AI has sort of burst on the scene in the last couple of years,
I think it's increased the public's awareness

(03:47):
about what these technologies may be capable of doing.
That certainly can lead folks to have a lot of different questions.
And so we want to be proactively meeting that moment.
And there's a bit of fact-checking there, as you're separating out
science-fiction artificial intelligence from what's actually out there.
There's a lot of, I think, need for also just being able to make sure that people,

(04:08):
including workers, have information about what these technologies are
and how they might be able
to help them now, but also the limitations on the use of these technologies
and where it's important to consider how these technologies may not be perfect
in a number of different ways, and where it's really important for humans
to be involved, both in their development and in their use.
Jillian, can you tell me a bit about how the document came together?

(04:31):
The first thing we thought was really
important was to put out a request for information, or an RFI.
And so that was a broad call that we did back in, I think, March of this year,
and we really wanted to engage the public and hear what they had to say
in response to both a brief outline that we had put together
and then different questions that were on our minds, specifically

(04:53):
the ones about AI research principles.
We thought that that was a gap.
And I think, if you read the document, it's still kind
of an open area for conversation.
The second thing that we did is
we held stakeholder consultations,
and we really do appreciate the organizations
who engaged with us and helped to convene some of these consultations

(05:15):
of both domestic perspectives and international perspectives.
And this included academia, think tanks, and civil society organizations.
And then finally, we did call for international
input through the U.S. embassies that we have abroad.
We had them reach out to the relevant
counterparts and flag our request for information.

(05:36):
Craig, thinking about some of those international partners,
can you tell us about how the document works to bring AI
to some of those more diverse demographics?
Yeah.
So the GAIRA really does focus on bringing
the potential benefits of AI to everyone.
You know, if you have listeners who are on the technical side,

(05:56):
they'll understand that machine learning systems typically struggle
with what we call out-of-sample generalization,
making reasonable predictions about data that is not reflective
of the probability distribution of the data on which they were trained.
And so, in normal-person terms, what this means
is that they do not function well in situations that are very different

(06:20):
from what they were trained on, and they typically don't extrapolate well.
And so what we see on the whole is that today's models are trained
primarily on data from developed countries, from rich countries.
And we don't know what to expect when they're deployed in the developing world.
They might work well in a lot of cases; in others, they might not.
And this could lead to AI tools that make mistakes or are less safe,

(06:44):
or maybe are just less useful
for people who are living in different global contexts.
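The out-of-sample problem described here can be illustrated with a minimal sketch (not from the GAIRA itself; the polynomial model and the numbers are purely illustrative stand-ins for a real machine learning system):

```python
import numpy as np

# Toy sketch of out-of-sample generalization failure: "train" a cubic
# polynomial on sin(x) over [0, pi], then evaluate it both inside and
# outside that training range.
x_train = np.linspace(0.0, np.pi, 50)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=3)

x_in = np.linspace(0.0, np.pi, 50)           # same range as training data
x_out = np.linspace(np.pi, 2.0 * np.pi, 50)  # outside the training range

err_in = np.mean(np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)))
err_out = np.mean(np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)))

print(f"mean error inside the training range:  {err_in:.3f}")
print(f"mean error outside the training range: {err_out:.3f}")
```

The fit interpolates well but extrapolates poorly: the error outside the training range is far larger than inside it. This is the same failure mode, writ small, that arises when models trained mostly on data from one set of contexts are deployed in very different ones.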
And one really concrete example of that is when it comes to language:
there's just a handful of languages that make up the vast
majority of content on the internet, and the vast majority of the training data
that went into the large language models that we have today.

(07:05):
This means that they are very, very good at English.
But when you have what we sometimes call low-resource languages
that were not heavily represented in those training data,
you might have models that are less safe,
or that make mistakes, or that just don't do as well in those languages,
despite the fact that those are languages spoken by hundreds of millions of people.

(07:27):
And there are a lot of really smart people in the private sector,
in academia, in various places who are working to address this.
But it's a really serious problem for the current generation
of AI systems, and it's going to take a lot of work to address.
That's interesting.
I hadn't really considered how the source languages being input
into AI models might impact them, and how useful they are for some users.

(07:51):
Yeah, so I had a really interesting experience a couple of weeks ago.
I was at an AI conference in Dakar, Senegal.
It's called Deep Learning Indaba, and I was in a session where we were
discussing this problem of underrepresented languages,
and one of the speakers asked how many people in the room

(08:13):
spoke an African language.
Of course I don't.
But most people in the room raised their hands, and then she asked
how many people can read in that language, and maybe half the hands went down.
And then she asked, how many people can write in that language?
And there were still a few hands up.
And then she asked, now, what would you do if you had to write your bachelor's

(08:34):
thesis in that language?
And all of the people who still had their hands up just laughed at that point.
These are languages where even very educated, very talented
people are not producing the kinds of technical, high-quality
content in those languages that they're producing in
maybe their second or their third language that they've learned.

(08:56):
And so there's really a need
not just to build models, but also just to revitalize the use
of some of these traditional languages that maybe
have been neglected over the last decades and centuries.
Now, I've talked to a few people about how they're using AI
for kind of data processing or data management and analysis.

(09:20):
How does the GAIRA support using AI for broader kinds of global challenges?
We have a section in the GAIRA on AI for global challenges,
and it focuses on a few specific areas.
We talk about climate change, about health, agriculture
and food security, and then about general applications in science and engineering.

(09:40):
And in each of those cases, we identify some priority areas of work.
And some of those things are
very technical in terms of types of models that need to be developed.
And a lot of them are more of this sort of ecosystem support, about
standardizing data formats and finding ways for people
to work together better, that can advance work in those areas.

(10:03):
So, for example, in health, we focus on things like interoperable
standards, on protecting patient privacy, and on building an implementation
science evidence base to understand the effectiveness of health applications.
And we're really hoping that those kinds of cross-cutting efforts
can help to remove obstacles across the field and make it easier

(10:24):
for a diverse range of researchers
to come in and really do high-quality work in that area.
The climate area is also a special one, just because there's this duality to it,
where AI has a lot of potential to enable climate solutions.
But we've been hearing more and more lately about the growing energy
footprint of AI and the negative climate externalities that come with that.

(10:48):
And so what we've tried to do in the GAIRA is to encourage
climate-positive applications in areas like energy efficiency
and clean energy deployment and disaster response,
but also working to reduce AI's resource demands
and to develop more efficient models and computing hardware.
Mary, bringing it back to the human perspective, how does the GAIRA address

(11:10):
the more global labor market implications of AI?
Here, I think it's really important to examine how these impacts
vary in different countries, regions, sectors, and occupations,
and also to consider various aspects of identity and social factors.
So in order to understand job augmentation and automation
and how that's occurring or projected to occur,

(11:33):
it'll be important to really study AI's impact on specific tasks
in order to forecast employment trends in different labor markets.
In addition, it'll be very important to understand what types of new jobs
may result from AI development, deployment, and use.
So this means really identifying jobs throughout the AI value chain,
including those involved in supporting the development of AI systems, such as data

(11:58):
enrichment workers, and, for these jobs, working to document their locations,
industries, job quality, tasks, and impact on worker well-being,
in particular through engaging directly with workers themselves
who are working at various points throughout the AI value chain.
It will also be really importantto understand the potential impacts

(12:19):
on inequality.
So how increased adoption of AI is impacting income inequality
and wealth distribution, both within and among countries,
and to conduct research on risks for workers,
which could include increased data collection and monitoring, deskilling,
impacts on well-being, including physical and mental health,
discrimination in hiring and other employment decisions,

(12:42):
and misclassification.
And also, I really want to emphasize that the research agenda provides
questions that are meant to inform how to prevent and mitigate
the risks of AI related to workers in the labor market,
including those that aim to address potential labor market disruptions
and to study the role of worker voice and representation in decision-making.

(13:03):
And then, as I mentioned, you know, the value chain continues
to come up a lot in the global conversations around AI.
And so improving the understanding of the
AI value chain will continue to be important.
In particular, the research agenda encourages companies
to report information about their value chains
and to explore how we could best enforce labor laws throughout the value chain

(13:24):
and identify gaps in enforcement frameworks.
Jillian, I'm going to close out our conversation
by asking you another big question: why do we need this document?
I think most listeners of this podcast know that AI has the potential
to help people solve urgent and global challenges
and contribute to global prosperity, productive innovation, and security.

(13:49):
But at the same time, its irresponsible use could exacerbate societal harms.
And so harnessing AI for, quote unquote, good
really demands a society-wide effort that includes
governments globally, the private sector, academia, and civil society.
And so the Global AI Research Agenda outlines critical research opportunities

(14:09):
for international collaboration, including highlighting AI's potential
to help advance achievement of the UN Sustainable Development Goals.
And leveraging AI for the Sustainable Development
Goals is an ongoing conversation and a priority, which Secretary
Blinken himself has elevated both last year, in 2023,
and again at a follow-on event this year, which actually just took place.

(14:32):
You know, we really shaped the GAIRA by prioritizing safety, security,
human rights, inclusivity, and trust, so that stakeholders working within the
AI ecosystem can maximize AI's benefits while minimizing the risks.
The research agenda elevates opportunities specifically for academics

(14:52):
and researchers to contribute to a body of evidence
to guide policy and decision makers towards this overarching goal.
And, you know, sociotechnical is a term maybe not a lot of people are really
familiar with outside of the science, technology, and society fields.
I think that it's really important that this document brings additional focus
to this type of research, to better explore the nuance of the context-specific

(15:18):
nature of AI deployments, and the real need for a deeper understanding
of how technology, and specifically AI technologies, can interact
with and influence human behavior, cultures, and institutions.
Finally, the science and research ecosystem itself
is innately international, and it's important for us as the U.S.

(15:40):
government to elevate preferred characteristics in the AI research
enterprise that align with shared global values, such as inclusion and equity,
responsible research conduct, partnership and collaboration, and respect
for human rights.
Of course, ultimately, we believe that the future of technology
lies in digital solidarity, and that's the recognition

(16:02):
that, as individual users or large organizations,
we are all more secure, resilient, self-determining, and prosperous
when we all work together
to shape the international ecosystem and innovate at the technological edge.
Collaboration and interaction
across borders really is crucial for advancing our research.
Special thanks to Mary Beech, Craig Jolley, and Jillian Mammino.

(16:25):
For the Discovery Files, I’m Nate Pottker.
Please subscribe wherever you get podcasts, and if you like
our program, share it with a friend and consider leaving a review.
Discover how the U.S. National Science Foundation is advancing research at NSF.gov.