Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:03):
This is the Discovery Files podcast from the U.S. National Science Foundation.
An interagency effort has come together to produce a global artificial intelligence research agenda, a document crafted to serve as a starting point to align a global research vision in which research communities continuously assess the state of AI research,
(00:24):
review publications addressing the presented priorities, and identify gaps to guide future research.
We are joined by Michael Littman, the Division Director of Information and Intelligent Systems in the Computer and Information Science and Engineering Directorate at the U.S. National Science Foundation, and Joshua Porterfield, a Federation of American Scientists Impact Fellow in the Department of Energy's Office of Critical and Emerging Technologies.
(00:47):
Thank you both for joining me today. Thanks for having us here.
So I want to start with a little background and get to why you guys are interested in AI. Michael, I know you worked with computers for a long time. How did you become interested in artificial intelligence?
Yeah.
So for me, computing was always about getting computers to do interesting things. And I think that the field of artificial intelligence is really about that.
I came of age in the 80s, when there was a national emergency that Japan
(01:12):
was going to do better in AI than us, and we needed more AI scientists.
I'm like, I can help with that.
Josh, I know you have a background in chemical and biological engineering. How did you become interested in that?
I was doing my PhD at Johns Hopkins, focused on nanomedicine, and I actually defended on the last day the campus was open before the COVID-19 pandemic, which led me to the Johns Hopkins Coronavirus Resource Center.
(01:35):
And that really inspired me to put my efforts toward science policy, kind of public service with my knowledge. I get to work at the Department of Energy now, where I got to help stand up our brand new Office of Critical and Emerging Technologies, which is really the front door for DOE on AI, biotechnology, quantum science, and microelectronics
(01:57):
so that we can really work across all of the different offices and national laboratories to kind of have a unified DOE vision on critical technologies and work with our partner agencies.
Michael, as one of the partner agencies, can you talk a little bit about how NSF is working with AI technologies?
Yeah, so NSF has been supporting AI research for longer than I've been in AI.
(02:19):
And I'm telling you, that's like the 1970s. So the NSF has been supporting AI research since easily the 60s.
And it's gone through many different phases. This current phase, obviously, is one that's very visible to the outside world, but a lot of the things that we're seeing come to fruition right now actually were research topics 20 years ago. So NSF has really been involved in bringing these technologies out,
(02:41):
helping to mature them and giving people a chance to work with them.
So moving towards the global thinking we're having here, Josh, can you tell me a little bit about how the Department of Energy is working on a global scale?
Similarly to NSF, we have a long and storied history with artificial intelligence. I mean, DOE shepherds the 17 national laboratories, which house the fastest openly
(03:03):
benchmarked supercomputers in the world, let alone in the United States.
And so we've really been building on kind of that compute capability, the workforce at the national labs, that expertise, and a lot of the open scientific data created at our user facilities, to really contribute to this interagency and international effort on AI.
(03:26):
And so we've been called out in the AI executive order for a lot of different things, everything from developing red-teaming processes to make sure that AI models are safe and secure, to providing foundation model resources and testbeds, a lot of things that you can find at energy.gov/CET. Resources for the public, both nationally and internationally.
(03:47):
And so a lot of our international AI research really comes out of the Office of Science, where we can do this open science work. So we have collaborations across partner countries on federated learning, and a lot of work on the next generation of AI hardware and supercomputing capabilities.
Our labs also initiated the Trillion Parameter Consortium,
(04:09):
which is an international group of collaborators that really looks to build, test, and deploy some of these large AI models. And so really, this kind of evolution of DOE's engagement in AI has kind of led to our new proposal called Frontiers in AI for Science, Security and Technology, or FASST, which is really how we envision
(04:33):
coming together with our partner agencies, the international community, and the industry community to kind of really build the next generation of AI models that are going to fix some of these major problems facing the world right now.
So a new research agenda is what brings us together. Michael, can you tell me what this agenda is?
So it's worth pointing out that the idea for the GAIRA was first
(04:55):
made public in the executive order on artificial intelligence, which the Biden administration issued last October. So we're getting really close to the one-year anniversary of it. In that document, there was a whole slew of things that were requested of, or I guess demanded of, the agencies to actually do. Some were targeted at particular agencies, some were targeted to all the agencies.
(05:17):
This particular item mentioned creating a global AI research agenda, which we've been calling GAIRA, but it doesn't actually say GAIRA in the document. We now know that this is actually the name of a monster from Godzilla. So that was, you know, unfortunate, but it's not a monster. It's actually a really great thing. So the idea of it is that the executive order said
(05:38):
that the State Department and USAID, agencies in the US, would put together a global AI research agenda. Basically, what should we all be doing together to help move AI forward in a way that advances artificial intelligence for the benefit of the entire planet? And so the authors of the executive order felt that the State Department and USAID have the expertise
(06:02):
to really connect a US vision, to help turn it into a global vision, to really connect with our partners in other countries.
They also asked the Department of Energy and the National Science Foundation to be a part of the development of the research agenda because, well, AI research really happens at these agencies within the federal government. We also ended up bringing in
(06:22):
the Department of Labor to help out, because the way that AI is being developed worldwide involves employing a lot of different people to do a lot of different kinds of tasks, and this is not happening within any one country. This is actually happening across the world. And so a lot of the decisions that get made in terms of how AI is pursued actually have impact on workers everywhere.
(06:45):
So it was really wonderful to have the Department of Labor also involved, bringing their expertise about workforce situations to the development of this agenda.
It's pretty visionary, from the executive order, to move towards a research agenda, because a lot of the conversation internationally so far has really been about setting up guardrails and making sure these things are safe, secure, trustworthy, which is absolutely a priority.
(07:06):
And we have the US AI Safety Institute to house that work. And it's working with international partners actively on it. But this is really kind of the hopeful part: what can we do for the people of the world? If we've got this technology and we're sure it's going to be safe, where can we start deploying it? Where can we start fixing things?
I really like that way of saying it, because I feel like what we're doing, instead of being reactive to all
(07:28):
these things that are just happening and asking, what do we do?
This is proactive.
This is really thinking about what is our long-term vision, how do we see AI developing in the future, and what are the critical research questions that we need to be answering now, and working with our global partners to answer now, so that the future of the world is bright.
So thinking about global partners, one of the questions that came up
(07:49):
when we were discussing this beforehand: why should U.S. agencies be setting what the global standard is?
Yeah, that's a great question.
I don't think the document is intended to tell the rest of the world what they should be doing. I think the purpose of creating a GAIRA in the first place is, first of all, for us within the US to recognize this is a global endeavor.
(08:10):
We can make all the kinds of decisions that we want about what we're doing, but we really should do this in collaboration and coordination with the rest of the world. And so I think this is kind of our offering out into the world, saying, this is how we're thinking about this. This is how we'd like to work with you. Let's get this conversation going.
I think it's really important to point out that, in terms of AI research, I've been an AI researcher for,
(08:31):
I don't know; when do you officially become an AI researcher? There's no ceremony for that. But for me, my first research paper was in 1989, and I went to a conference, and it was a global conference in the sense that there were researchers from all over the world. And so my entire career in AI has been interacting with not just folks in the US doing AI research, but folks everywhere in the world doing AI research.
(08:51):
It's a global academic community. And I think it's about recognizing that we're not making these unilateral decisions when we're talking about AI research. We need to be working together as a team. That, to me, is the impetus for doing something like this.
So thinking about that teamwork, I want to ask each of you about your role in composing this document. Can you tell me a little bit about how you fit into that?
(09:13):
Yeah, absolutely.
So in our broad mission space of science, energy, security, the "Department of Everything," we really wanted to make sure that the research that we're funding and the research that we're interested in, these kinds of gaps and priorities, are identified in the document. You know, everything from the pressing issues of climate change to clean energy deployment.
(09:36):
AI for science, such as materials discovery, is a great thing that we can work with our international partners on. And so it's really identifying what we've seen from the research community that we fund and engage with, especially through the national labs: where is that appetite for engagement with our partner countries?
What are the gaps in data?
I think that a lot of these issues
(09:57):
that we're talking about addressing in the global AI research agenda are not things that can necessarily be tackled alone by the United States, especially when we talk about something like climate change, where the impacts are vastly different around the globe. You need different data sets. You need different people to train these models. And so we really wanted to focus on things that require
(10:18):
kind of the global community to come together in this effort.
Do you want to elaborate on how you've worked with different agencies in this process?
For me, anyway, one of the most fun things about this is that it really required bringing together the perspectives of these five different organizations. It's not just that they're different organizations. They really come from very different backgrounds, right?
So Josh thinks about the role of AI in science
(10:41):
and trying to move science forward, and can speak as a scientist, right? Not necessarily simply as an AI-oriented person, but about what actually matters to the science. So that was a really important perspective.
I brought a perspective, and NSF brought a perspective, with respect to, well, this is how AI researchers do their work. We have this particular language that we use
(11:02):
when we gather people together in conferences and ask them to submit papers and so forth; there's a whole culture there. And it's a different culture. The State Department and USAID, from where I sit, seem very similar. They both worry about other countries, but in fact they do very different things, and they have their own individual cultures.
There were moments during the writing process where one group used a term that to the other group
(11:24):
meant something very different, and we decided we should strike that term. Even though that's the term that would typically be used by folks at USAID when they're talking about the global community, it has different connotations in the State Department. And so we said, okay, we're going to say that a different way. So their perspective on how we all get along as a global community was incredibly valuable and very enlightening to me.
(11:45):
And then again, as I mentioned before, the Department of Labor brought their perspective, and they used a lot of very specialized terminology. I'm a computer scientist, right? We have all kinds of crazy words for things, but I couldn't understand a lot of the stuff that they wrote at first, because they had a very sort of economic framing for the things that they talked about. And so we would go back and forth and say, when you say this, does
(12:06):
it mean the same as that? They're like, well, yeah. I'm like, great, can you just say it that way? Because I think more people might understand it.
So it was just a wonderful kind of process. Everyone was very open and engaged, and we really wanted to do something, not just to check a box because this was something we had to do for the executive order. I think we all came in with a feeling that we would like this to actually have an impact and really try to make things better. And so that was just a great way to collaborate.
(12:28):
How do you feel about being involved in this agenda, in setting the standard that will hopefully go forward?
I think it's important to note that, you know, the AI executive order and the GAIRA process were not the beginning of interagency collaboration on AI, and it sure isn't going to be the end of interagency collaboration on AI. But as Michael was pointing out, I think it was great to really get
(12:49):
these different perspectives together. And I think the GAIRA was a great catalyst to force a bunch of different scientists and people at the agencies to speak the same language so that we can actually communicate with the public appropriately.
And so I think it's great to have a product that's coming out that can actually, you know, decipher what the US government thinks about AI right now,
(13:09):
where we can get involved, and what's coming down the pipeline.
And so I think that's very exciting.
And then additionally, you know, as we continue to chart this course together, how are we going to have safe, secure, trustworthy AI for good? I have a great number of new contacts and partners that I can reach out to, you know, when something comes up within the different mission spaces of these various agencies and departments, which I think we all understand
(13:31):
a lot better now, having been in a room with each other for multiple hours crafting this thing.
I do want to also mention that we have a national strategic plan with regard to AI research and development. That's not our global plan. And so one of the things that we did when we were developing the GAIRA is to scope it so that, okay, we've already established that these are some really important research topics in AI.
(13:52):
Let's focus on the research topics that are really going to help build this more global community and answer questions that come up when you're really thinking about how AI is impactful across national borders.
Yeah, I mean, we get outreach all the time from different countries saying, like, we want to work with you on AI, what can we do? And the answer, really, so far has been: we're working on it. So I think this is great, to say,
(14:12):
we've got something. Now, why don't you take a look at this? Where can you engage?
And I mean, even from the perspective of a bench researcher ten years ago, it would have been great to have kind of a guiding document of what the US government thinks it's interested in right now, so that I can write proposals and think about research directions that maybe align with where the funding could be going one day.
(14:33):
Of course, the funding organizations will make their own decisions on where things go, but this is kind of our North Star, hopefully.
Exactly right.
So with both of you having a research background: I've talked to a lot of people that have used AI in, say, labs, like you mentioned using AI to find different kinds of materials. And I'm curious what either of you is excited about and where this might potentially be going.
(14:55):
Yeah.
Well, I mean, so my AI research of late has been very much focused on this shift: up to three or four or five years ago, it was like, oh, could we create an intelligence that would actually be able to converse with us and help us solve problems? And we sort of have the pieces of that now. Right? We have actual programs that we can have conversations with,
(15:16):
and they can actually help bring insights. So for me, the question has shifted a little bit, from how can we just create this external intelligence that's going to be out there in the world and do things, to how can we use this technology to enable people to solve the problems that they want to solve? And so for me, that's about telling the computers what to do, right?
(15:38):
They're there, they can do work for us, but we have to express to them what it is we want them to do. And I think some of these new developments, like language models and chatbots, help provide a much more natural interface so that anybody can tell these machines what to do. Now, it's still on all of us to figure out what we want
(15:58):
the machines to do for us. But it's become so much easier to tell the machines what to do once we've made that decision.
Yeah, I think it's overwhelming almost every day to see the number of things that we could be applying AI tools to. I mean, just working with our colleagues at the national laboratories, I've seen everything from AI systems that are now trying to predict the conditions we need for nuclear fusion, to AI
(16:21):
that's now speeding up permitting to get new clean energy on the grid, which is incredibly impactful. And we keep getting questions from Congress about this, because there's a multi-year backlog of people trying to connect new energy to the grid. And we've seen that we can develop pretty quick AI chatbots to help people through the application process, help summarize these things.
And so really thinking about all the different application
(16:43):
spaces that we can use AI in is pretty exciting. And that's why we really need to keep engaging with our partners in the public, the industry, and internationally, because we're not going to see each of these use cases come across our desk. So we need to hear: where should we be developing the next cool AI tool?
So the last question I want to ask you guys is thinking about this
(17:04):
from a general public, consumer end of things. What is the benefit of a global AI research agenda for the average person?
I think that's a really interesting question, because probably most people will not read this document. But if they did, I think they would find it interesting, because it provides a window into the thinking of the research
(17:25):
community as to what the key topics are going to be moving forward. I think the impacts, though, are really going to be visible down the line. If the recommendations of this research agenda are taken up (and I think we're already acting on a bunch of them, so it's really happening), what we'll see is better cooperation, better collaboration across countries,
(17:45):
and more focus on helping AI be something that is valuable and trustworthy and beneficial. And I think that just makes people's lives better.
And so that's what I am most excited about with regard to the public: hey, I think this is going to really help us do a better job making a tool that's going to benefit everyone.
Yeah.
And I would say that I think a lot of members of the public are rightfully worried about the impact
(18:08):
that AI is going to have in everyday life, especially when it comes to their jobs. And I think that's what was so excellent about having the Department of Labor involved in this process, to really show we are concerned. We are thinking about how this is going to play out in daily life: how is it going to impact hiring, how is it going to impact job assessments, those sorts of things, so that at least as a member of the public, you maybe know
(18:30):
that we're thinking about these things a little more now and taking it seriously.
But I also think it's kind of a call to arms, saying that we have a large breadth of stakeholders that should be involved here right now. And it's not just AI professionals, it's not just computer scientists. We need the basic scientists to come in and create the data to train the systems, test the outputs, make sure it's working.
(18:51):
We need the social scientists to talk about impact and think about how we're actually applying these in a responsible manner. I mean, a lot of this thought wasn't given when social media kind of came on the scene. So now we're trying to do that a little more proactively this time.
And the GAIRA really sets out that there are a lot of different groups that need to be involved in this process, and we're kind of ready to engage
(19:13):
with you.
Okay, cool. Well, thank you both for joining us today.
Special thanks to Michael Littman and Joshua Porterfield.
For the Discovery Files, I’m Nate Pottker.
You can watch video versions of these conversations on our YouTube channel by searching @NSFscience. Please subscribe wherever you get podcasts, and if you like our program, share with a friend and consider leaving a review.
Discover how the U.S.
(19:33):
National Science Foundation is advancing research at NSF.gov.