
September 19, 2024 41 mins

Join Professor Kevin Werbach in his discussion with Helen Toner, Director of Strategy and Foundational Research Grants at Georgetown’s Center for Security and Emerging Technology. In this episode, Werbach and Toner discuss how the public views AI safety and ethics, and both the positive and negative outcomes of advancements in AI. They discuss Toner’s lessons from the unsuccessful removal of Sam Altman as the CEO of OpenAI, oversight structures to audit and approve the AI systems companies deploy, and the role of the government in AI accountability. Finally, Toner explains how businesses can take charge of their responsible AI deployment.

Helen Toner is the Director of Strategy and Foundational Research Grants at Georgetown’s Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University’s Center for the Governance of AI. From 2021-2023, she served on the board of OpenAI, the creator of ChatGPT. 

Helen Toner’s TED Talk: How to Govern AI, Even if it’s Hard to Predict

Helen Toner on the OpenAI Coup “It was about trust and accountability” (Financial Times)

 

Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.

 


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
Helen, welcome to the Road to Accountable AI.
Thanks so much.
Why should we be so concerned about AI safety?
Lots of new technologies have risks and dangers.
I mean, should we be so concerned?
I think the answer is probably yes.
I don't think it's definitely yes.

(00:23):
So AI is a very broad technology.
It's kind of an everything technology.
Economists would say a general purpose technology.
So in some ways it's hard to even talk about AI safety as one specific thing.
But I think there's different pieces of this we could tackle.
One piece would be, what are we already seeing from AI that's being deployed in the world?

(00:43):
And I think...
There's clearly potential harms, potential concerns that are already being realized.
For example, if governments use an AI system to determine who gets benefits or to determine how they're making criminal justice decisions.
These kinds of things are already happening.
And if those systems are, certainly if they're biased, but also if they just don't work very well, if they're not necessarily particularly reliable, which is often true, that's a

(01:10):
problem that really harms people's rights and is very concerning for multiple reasons.
So I think we already have evidence.
Or you could also look at something like self-driving cars, where is it an AI safety issue when a self-driving car crashes?
You could call it that.
You could not call it that.
I think either way, we can all agree we want to make sure that we're minimizing self-driving car crashes, because those are bad.

(01:37):
I think maybe what you're gesturing at when you ask the question, though, is something a little broader, like should we be worried about
kind of more catastrophic or civilizational scale risks that some are warning about.
And I think we should be worried about them.
I think we should be taking them into consideration and trying to understand how likely they are, what kinds of things could reduce them, if for no other reason than that the

(02:06):
people developing this technology are telling us that they're a real possibility, which is
a pretty unusual state to be in for those developing new technology.
Those researchers or engineers are often the most optimistic about what their technology is gonna do.
So I think if nothing else, the fact that you have some of the pioneers of AI, including of deep learning, the paradigm we're currently in, suggesting that sort of

(02:34):
this technology they've developed could really cause
massive-scale harms of different kinds, different experts have different views on this, of course.
I think that alone should make us sit up and pay attention.
And so then you have that combined with what are we already seeing in terms of harms from systems that are already deployed?
What kind of pace of progress are we seeing in terms of the research, meaning that we might not necessarily have tons of time to kind of wait and see and figure this out as we

(03:03):
go.
To me, that all kind of combines together as an initial high level case for
this at least being worth thinking about, worth taking seriously.
Which is not to say worth totally freaking out about and panicking about and throwing everything else out the window, but I think certainly taking seriously.
Isn't there potentially a tension between those two categories you described, that to the extent that we are spending time and energy addressing what are today hypothetical future

(03:31):
harms, it takes away from the ability to address the issues that are happening right now?
Yeah, I kind of have two answers to this.
I think in principle, these things don't have to be in tension at all.
There's very much this sort of split: the problems we're seeing from AI already, kind of existing harms.
There's sort of a community of researchers focused on that, and a different community of researchers focused on more sort of anticipatory or speculative harms that we might see

(03:57):
for much more advanced systems.
And I think those two communities can make things seem like they have to be in tension much more than I think they need to be.
I think there are actually
plenty of priorities that different groups with different priorities, not just those two, but many others, should be able to agree on.
Even things as basic as the importance of technical capacity in government, us needing to have more expertise, more folks working in federal, state, other governments who

(04:25):
understand the technology, or the need for transparency from the companies developing these systems, whether that's a company selling a sentencing algorithm to
a local government or whether it's a company developing, trying to develop artificial general intelligence, I think there should be kind of transparency requirements for both
of those companies and likewise, accountability requirements in terms of who's responsible, who's liable if things go wrong.

(04:52):
So in principle, I think there really doesn't need to be nearly as much tension as there has been.
And I think there's been a little bit of movement towards sort of more coalition building there.
And I think that's positive.
At the same time, I definitely have to acknowledge that in the sort of last, I guess, year and a half, the kind of post-ChatGPT moment where there's been a bunch more interest, a

(05:16):
lot of kind of policymaker concern and motivation, I have been dismayed at the extent to which it seems difficult to have conversations about both of these types of issues.
It really seems like it's maybe easier than I would have expected to get
policymakers or the public really freaked out about the more cataclysmic possibilities, and then that's the only thing they want to talk about, and that sort of can dominate.

(05:39):
And then in response to that, you can have people who are concerned about the issues we are already seeing here and now pushing back and trying to get that completely out of the
discussion.
Let's say, have those existential or catastrophic issues totally off the table, because they're worried about them potentially dominating if they're sort of let in the door even at
all.
So.
I think in principle, they really don't have to be in tension. In practice,

(06:01):
I understand why people see them as sometimes being in tension.
And personally, I would love to see more conversations, more white papers, more policies.
And I think maybe Biden's executive order from last October is a good example of this, where kind of all of these issues are present.
And the fact that you are considering these really potentially scary civilizational-scale issues doesn't mean that everything else is crowded out.

(06:25):
And likewise, the fact that you want to talk about
present-day issues that we're already seeing doesn't mean you feel the need to kind of banish that more speculative piece.
Because I really do think that we need to both be observing what we're seeing now, thinking about trends, thinking about possible futures, and trying to do that kind of all
as holistically as we can.
What do you think are some of the ways that those communities can come together and find some common ground?

(06:50):
Yeah, I mentioned a couple.
I do think that these ideas around transparency and accountability really apply across the board and figuring out exactly what that means, exactly how you implement that, what kind
of requirements are you imposing on what kind of companies.
Obviously, there's a lot of detail to figure out there.
But I think that high level framing makes a lot of sense.

(07:11):
And yeah, I think another helpful concept has been this idea of rights- or safety-affecting systems, which, I
forget where it originated, but it's certainly now been codified into sort of US government, federal government guidance for how agencies use AI.
Can you look at systems that are either affecting people's rights or affecting their safety and kind of impose a higher bar on those?

(07:36):
I think also sort of things like incident tracking.
When you have, in aviation, for example, if there's a plane crash, there's this whole infrastructure set up around going to figure out what went wrong,
doing investigations, sharing information between different parties.
We have similar things for traffic accidents, for cybersecurity incidents, in a number of other areas; I think in healthcare there's similar things.

(08:00):
And that's another kind of thing where I think both communities interested in what we're seeing today and also communities interested in sort of trends and future possibilities
should be pretty motivated to try and understand what are the failures we're seeing and how can we think about preventing them going into the future.
Another area would be what sometimes gets called AI evaluations, but I think more broadly measurement science for AI, which to me is trying to get the whole field on more of a firm

(08:30):
scientific footing.
A really big problem both for systems that are causing problems today and also for concerns about more advanced systems in the future is that right now we just really don't
understand the systems that we're building.
The best experts in the world can't really tell you
why these AI models do what they do and why they succeed at some things and fail at other things and kind of get a little wacky in other situations.

(08:59):
And I think one underlying reason for this is that, I mean, machine learning researchers themselves will describe the field as more like alchemy than like chemistry.
So meaning in chemistry, my husband is a chemist, we have really, really strong foundational
understanding of the physical laws that underlie what's going on.

(09:21):
And so for anything that we're observing in the lab, we can really try and pull apart in detail what exactly is going on there, versus alchemy was sort of more just like throw
things in a pot, try what happens if you do it at this temperature, what happens if you do it at that temperature, what happens if you do it under the light of the full moon, sort of
experiment in a lot of different ways, and much more of an empirical, try things out, see what happens kind of field. And I think

(09:46):
machine learning researchers think of their field as well as much more of an empirical, try things out, see what happens,
sometimes it's really bizarre and no one can really tell you why, sort of field.
So to come back to the question, I think improving that situation, giving us more of a solid understanding of what is going on inside these algorithms, what kinds of bounds can we put
on where they will perform well and poorly, is the kind of thing that helps both with present issues and with the more future-facing issues.

(10:15):
Yeah, it's a really interesting point because on the one hand, it's scary in a way that the people that are developing these technologies don't quite understand themselves, how
they work.
And as you pointed out before, they are some of the ones who are raising the alarm about some of the risks.
On the other hand, there's a value, isn't there, in things that work.
We don't necessarily know how, but it was not really anticipated that ChatGPT would be this great leap forward.

(10:40):
And people are now actually using it in business without having to work out all of the mechanistic details behind the scenes.
So is it really important to get to some point where AI is more like chemistry?
Yeah, I think it's definitely a balance.
And I don't think the answer is to say, we have to freeze everything, not use this technology at all until we understand it perfectly.

(11:02):
That clearly wouldn't be the right way to trade off the different benefits and risks that we're seeing.
That's part of why I like the rights- and safety-affecting framing, because I think it helps distinguish between, for example, is my email client suggesting a way that I can
send an email response?

(11:24):
Like, you know, there's sort of limited ways that that can go wrong as long as I'm always reading and checking it.
Or is my Spotify suggesting music for me to listen to?
Like, I really think that experimentation in those kind of fields, like, why not?
But I also think that it makes sense for us to look at fields or applications where that isn't actually appropriate.
Or at a minimum, you know, if we are gonna use systems that we don't understand very well, to have alternatives, to have the ability to, you know, if you're applying for...

(11:51):
a loan and you get denied by the algorithmic system, to either ask for an explanation or ask for human review, have some kind of due process, some kind of recourse available to you.
So I think it is definitely a balancing act of figuring out where is it OK to just go right ahead as long as it seems to work basically OK, and where is that more of a concern.

(12:14):
And in plenty of cases, I think we can
also rely on existing structures that are in place to make sure that things work well enough.
If you think about some kind of chemical plant or an energy plant putting in new software, I believe that there are existing standards for what kinds of tests you're gonna do, what

(12:34):
kinds of confidence you're gonna have about what you're putting in place there.
And so, or aviation would be another example where, by and large, airlines are not,
airplane designers are not just going in and throwing things in because they seem to work.
They have really high safety standards for what actually does work.

(12:54):
So I think in many sectors, we already have mechanisms to manage this trade-off.
And it's just, there are some sectors where that's not the case.
For example, government use of algorithms, I think, has not been super well regulated in the past.
Or also some of these more general-purpose systems we're seeing now, the ChatGPTs of the world,
where it's not really relegated to a particular sector or use case.

(13:16):
And we're also kind of lacking these existing institutional mechanisms to determine sort of how good, how reliable, what kind of guardrails do you need.
So I think, yeah, in some cases we're pretty well set up, but in other cases we maybe need new mechanisms or new guardrails.
You've been working in this area for a number of years now.
How would you say your own view on these questions has evolved since, say, you know, 2021?

(13:42):
Yeah, I think it's changed in a few different ways.
One thing that's been absolutely fascinating to see has been the scale of response to ChatGPT as a product.
I think something that was really, that was surprising to a lot of people in the field was there had been a series of advances, a series of relatively gradual advances starting

(14:05):
maybe in 2012, sort of depends on how you count.
And ChatGPT was a moment where there wasn't some big technical breakthrough, but it turned out
having a system that worked well enough, that was not super straightforward to get it to be really toxic and racist and sexist the way that some other chatbots had been, and that
was available for free in a very kind of user-friendly interface, that really obviously kicked off a wildfire of attention among the public, among policymakers.

(14:34):
And I think in many ways that has been quite heartening to me and to others in the field to see that, because I started getting
interested in this field in 2013, 2014, started being professionally focused on it around 2016.
And a big part of why I was interested was because it seemed to me like it was going to matter so much, like it was potentially going to be such a huge deal for society.

(14:57):
And so there was kind of a sort of a little bit of a feeling of being sort of lonelier in the wilderness for people in the space of like, we think this really matters, we think
this is going to be a huge deal, but a lot of people aren't really paying attention to what's going on there.
And so I think one thing that has shifted over the last couple of years has just been
in some ways feeling validated, but also feeling relieved or really pleased that there is this much greater interest, lots of really, really smart people with expertise in really

(15:24):
important other fields starting to pay attention to what is happening in AI and how does it relate to their field and what might be concerning, what might be great opportunities.
So that's been a big change in the positive direction, I think.
Maybe in the more concerning direction,
how I feel about this, because it varies from day to day or week to week, is just looking at the continued pace of progress in the technology.

(15:52):
Yeah, we really are seeing, so far it continues to look like if you take this, you know, basically the design that underlies ChatGPT, which was developed in 2017 by Google
researchers, a transformer.
It does seem like if you keep just sort of scaling up the amount of computational power you're using, the amount of training data that you're using to

(16:12):
develop a system based on a transformer, that you do just keep getting more and more sophisticated outputs.
And it's very contested what exactly that looks like, whether the trend there is an exponential going up to infinity, or whether things are plateauing and it's really not
going to be very interesting over the next few years.
Experts really, really differ on that.

(16:33):
But to the extent that we continue to see a pretty rapid clip of progress there, that also
you know, is cause I think for some concern or some caution, given that so many of these questions about, you know, how do we control these systems?
How do we understand them?
What kind of societal guardrails do we need? These are still very open.
So to the extent that we only have, you know, single-digit years to, to kind of figure that out versus, you know, 10, 20, 30, 50 years,

(16:59):
I think that's a little, a little more alarming.
But yes, how I feel about kind of the balance of those different factors changes a lot depending on what side of the bed I got out of, what's been in the news,
how things have been looking recently.
Yeah, absolutely.
So the other end of the time you were at OpenAI was around the unsuccessful effort to remove Sam Altman as the CEO.

(17:24):
And so let me just ask you generally, now that the dust has settled a little bit, what lessons do you take from that whole experience?
I mean, many lessons, personal lessons, lessons about how different people operate under pressure, lessons about how media works, how public narratives get built out of facts on
the ground.

(17:44):
Many, many lessons.
I think maybe one big important one that's especially relevant for this conversation is just that, almost regardless of what you think of the board's actions,
regardless of who you think was in the right,
I think it's very hard to come away from that situation feeling great about these companies' ability to govern themselves, and especially to govern themselves in really high

(18:07):
pressure moments when the rubber is hitting the road and big incentives are at play.
And I think that's in part because of the sort of financial incentives in question, the potential profit incentives.
I think also when you're talking about people at these companies, not just OpenAI, but also companies like Google, Anthropic, Meta that are explicitly trying to build

(18:28):
what they would call artificial general intelligence.
We could have a whole other conversation about that term, but they're trying to build very advanced, very sophisticated AI systems.
People who are kind of involved in those efforts think of those systems as incredibly powerful, or as systems that will be incredibly powerful if they're built.
So there's these sort of financial profit incentives, but there's also all these incentives around power dynamics and who is in control that I think it's very difficult

(18:54):
for sort of voluntary self-governance structures
to handle.
So I think a big lesson for me, sort of reflecting on that period, and also that I hope would be a lesson that others can draw as well, is that it really does look like we're going to
need some kind of outside oversight over specifically that subsection of the industry that is working on building those incredibly capable, incredibly powerful AI systems.

(19:17):
And I think there's been some good progress over the last year or two in trying to develop some of those outside oversight structures, but they're still pretty tentative.
Many of them are still relatively voluntary.
So I think we need more.
Yeah, what do you think those oversight structures should look like?
I mean, I think there's been a great start from, I think there's a lot of reason to praise the companies involved and some of the voluntary measures that they're starting to

(19:40):
undertake.
There's been a move towards putting in place structures that sometimes get called responsible scaling policies.
OpenAI's is called its Preparedness Framework.
Google has a Frontier Safety Framework. These are basically structured around
trying to forecast or trying to describe what types of capabilities they might observe in the systems they're developing that they would not be ready for, or that they

(20:07):
would need to have certain guardrails in place before they could keep developing beyond that point.
And so different companies look at different criteria, but it includes things like cybersecurity capabilities.
Do you have an AI system that can really effectively hack different computer systems?
Or autonomous behavior:
do you have a system that can really effectively go out and sort of pursue goals autonomously, maybe preserve itself, prevent itself from being shut down, commandeer

(20:34):
server space, or whatever it might be. Bioweapons, chemical weapons, nuclear weapons are other capabilities that are often brought up.
And I think it's a really good step that companies are sort of setting this up in a voluntary way to say, here's the kinds of things that we're concerned we might observe.
Here are the kinds of tests we're running to see if we observe them.

(20:55):
And here's what we'll do if we do observe them and we don't have the right guardrails in place.
I think over time it would be great if those kind of structures could become a little more codified, could be a little bit less just up to the companies to kind of interpret and
decide what to do and a little bit more standardized.
Maybe you need sort of external third-party certification or review of those kind of testing processes.

(21:18):
Maybe there's a role for government.
I think that's one really
valuable potential step.
Another category, I'm not a lawyer, so I never want to claim to have too detailed a view on this, but I think the category of liability or accountability, responsibility, would be

(21:39):
a really great area to have more clarity on.
Right now, the most obvious comparison for AI systems is software.
And generally, when it comes to software, again, I'm not a lawyer,
but my understanding is that the companies developing software products that cause harm are generally not held liable.
It's generally pretty hard to kind of make a case that they were at fault, unlike in many other industries where the original developers of things can be held liable.

(22:08):
And I think trying to set up a liability regime that sort of sets the incentives more effectively for these developers could be really valuable.
And just to make explicit something I like a lot about both sort of that type of approach and also the sort of responsible scaling type of approach is that it is not about setting

(22:31):
up a really heavy-handed government regime that comes in and has to approve a lot of things the companies are doing, and really sort of assuming that something very dangerous
is happening and that you need a really, really close government eye.
Instead, I think things like responsible scaling policies or setting up good liability regimes
leave a lot of room for judgment by the developers about what they're doing and what is appropriate, and a lot of room for flexibility depending on how the technology develops,

(22:58):
which I think is really important given how much uncertainty and disagreement there is about what those developments will look like.
Yeah, let me ask you a little more about the responsible scaling policies.
How is it possible to come up with a set of tests around technology that doesn't exist yet, where we don't know the capabilities and also where, as you talked about, the financial

(23:21):
and other incentives on these organizations are against putting into place the kinds of guardrails that we might want to see?
Yeah, so it's definitely to me an example of this dynamic of the rollout and the implementation of these AI systems that seem to work well being a little bit in tension
with, or definitely getting out ahead of, our scientific understanding of those same systems.

(23:44):
And so sometimes you'll see criticism from the industry, from researchers of, how could you kind of put these policies into place or how could you kind of make these plans
without having much better scientific understanding of what's actually going on?
And I think that's a fair criticism, but also,
it's tricky if you're trying to set the bar of, we can only put in place any kind of restrictions, any kind of guardrails once we really understand perfectly what's happening,

(24:09):
but we can go ahead with selling these products, with getting hundreds of millions of users, with implementing them into government services, regardless of whether we have that
scientific understanding, right?
So I think it's about, certainly it would be better if we could do all of this based on many years of experiments, tons of empirical data,
great scientific consensus among experts about what problems to test for and how to testfor them.

(24:34):
But in practice, we have this field that is moving quickly, that is rolling things out to large numbers of people in pretty consequential ways.
And so I think we don't really have a choice other than to try and find ways of putting in place sort of common-sense tests or common-sense limits around what we might see.

(24:58):
And I do think that there's definitely, you know, more of an art to it than a science.
If you look at something like Anthropic has a responsible scaling policy that goes into a reasonable amount of detail about sort of what kinds of tests, for example, they would run
on the cyber capabilities of their models, the hacking capabilities, and they really kind of detail, you know, what they would be testing for, what would count as success, how they

(25:23):
would kind of grade those different evaluation methods.
And I think that is a really valuable way to do it, that they've kind of put that out into the public to say, to allow other experts to go in and say, you know, this makes sense,
this doesn't make sense.
Because in a lot of other cases, the companies are developing those kinds of evaluation methods in-house and determining in-house how to interpret them.

(25:46):
So yeah, ideally, we would have super well-grounded, well-validated empirical science and theoretical science to guide this, given
the technology is charging ahead without that.
I think we just sort of do the best that we can, and the more that that can be happening in the open or in consultation with independent experts, so that assumptions can be checked and

(26:07):
they can be improved as we go, the better.
And with regard to the role of government here, you suggested in your TED Talk that policymakers should focus on adaptability rather than certainty in AI regulation.
What exactly does that mean, and how does that connect up with what we've been discussing?
Yeah, I think it connects to a lot of what we've discussed already.
I think to me, kind of two elements, if you think of sort of progress in AI as driving a car, I think a lot of discussions can get fixated on, you know, should we be hitting the

(26:38):
accelerator to kind of drive progress as fast as we can, or should we be freaking out and slamming on the brakes?
I think that's a really impoverished way of thinking about our options.
And so the kinds of things I think we should be focusing on, I think, are maybe more analogous to
making sure we have a really good view out of the windshield,
that it's not foggy, that our headlights are on, et cetera.
And making sure that we are really good at steering, basically.

(27:00):
And so what that means for AI is a lot of things that we've already talked about.
The clear view out the windshield to me is things like tracking incidents in the real world so that we know how AI is causing harm.
It's things like improving our measurement science for AI so that we know we have a much better sense of, OK, we can run these tests, or we
can pull apart this model, reverse engineer this model in this way, and understand what's going on inside it.

(27:23):
And the steering is, I think, making plans for, if we do observe, if this is something we do observe, then what do we do about it?
So that's sort of in the responsible scaling policy type approach of, OK, well, if we have a model that is this good at hacking, then we need to have these cybersecurity protections
in place around it.
Or if we have a model that would give

(27:47):
if it was stolen by an external actor, would give them the ability to commit massive bio attacks or something like that,
then we need to have these sorts of protections in place around it.
So yeah, I think there's a lot that we can do when I think about adaptability.
To me, that means both being able to determine, being able to tell as early as possible what trajectory we are actually on, and also then being able to respond to that evidence

(28:13):
as it comes in.
And what about the kinds of geopolitical concerns where China in particular is often pointed to as aggressively pushing forward on these technologies and has a lot of
government involvement, and there's already restrictions being put into place in the United States.

(28:34):
Given, though, that so much of the technology is open source, or it's based on architectures that are in public research:
Is the genie fully out of the bottle in terms of how other actors who may be hostile to what companies and the government of the United States think are able to exploit the
technology?

(28:54):
I don't think the genie is fully out of the bottle.
I think maybe the genie's arm is out of the bottle, or maybe like their head and part of their torso or something.
But it really depends on how the technology progresses.
Again, like many of these questions, it seems like right now the best estimates I've seen make it look like kind of the open-source models, if we're talking about kind of general

(29:19):
purpose large language models, multimodal models.
And we're looking at the very most advanced versions of those.
It looks like the open source is something like one and a half to two and a half years behind the best companies, depending on how you count and a bunch of other assumptions.
And is one and a half to two and a half years, is that far or is that really close?

(29:42):
Depends a lot, I think, on what you expect.
So if you talk to people inside companies like OpenAI or Anthropic,
they'll tell you like, we might have human level AI in two years, three years, four years.
In that case, you know, a one-and-a-half to two-and-a-half-year buffer is a lot.
If you talk to someone who expects more that, actually, this paradigm that we're using, the kind of scaling transformers, large language models, just pouring more data and compute,

(30:04):
that's kind of, you know, we've kind of like squeezed most of the juice out of that particular lemon.
Then perhaps one and a half or two and a half years behind is nothing.
So it does depend a little bit on the trajectory.
I tend to think that that's a pretty meaningful gap.
But it also depends on kind of what you're trying to do with it as well.
Because in some cases, for plenty of use cases, you don't necessarily need the most advanced models.

(30:27):
So a lot of my work, I work at a center at Georgetown called the Center for Security and Emerging Technology.
We focus on AI and national security.
And so if you're thinking about AI in a military setting, things like onboard cameras for drones and kind of automated processing that they're doing, that's not a cutting-edge

(30:48):
large language model, that's a totally different type of system.
Or if you're thinking about China's use of AI for surveillance, again, totally different type of system.
I think it really depends on which specific concerns you're wanting to focus on.
And let me just throw out one other concern that people talk about, which is the concentration of power.
We have a small number of these frontier labs that have access to the compute and the data and the researchers and so forth.

(31:16):
Is there any hope of a more decentralized environment for cutting-edge AI?
Yeah, again, I think it will depend on how the technology progresses.
I think to the extent that we're in this, we continue in this current paradigm of bigger is better and you need more compute and you need to be spending, you know, the public
estimates we have are that kind of the current generation of most advanced systems,

(31:39):
you know, GPT-4, Gemini 1.5, Claude, you know, 3, 3.5.
These systems cost sort of in the ballpark of a hundred million dollars to train.
And that's just a hundred million dollars.
Potentially a huge amount of that is just for paying for the computational power, the enormous, enormous data centers with thousands of the most cutting-edge

(32:00):
chips.
That's not very accessible.
That's sort of a naturally centralizing force if you have something so capital-intensive.
And so to the extent that, and those companies, as far as we can tell, in many cases are continuing to invest in scaling that up.
So going from $100 million to train a single system, to potentially a billion dollars, to those conversations about
10 billion or more than that.

(32:23):
So to the extent that trend continues, that is a very naturally centralizing force.
At the same time, you are also seeing over time, there's sort of a strange dynamic at the frontier of these general-purpose AI systems where, over the last sort of
five or 10 years, it is both the case that the most advanced systems at any given point of time are the largest and the most expensive to develop.

(32:49):
And also over time it gets cheaper and more accessible to build those advanced systems.
So there's sort of a natural centralizing at the very frontier of the field and also a natural decentralizing over time if you're looking at a certain level of capability.
So if you're looking for a system, for example, that is like, you know, what's a benchmark?
That is like, you know, as good as a software engineering intern, then the first company to develop that is gonna be very expensive,

(33:17):
probably going to be in a small number of hands.
But over time, the number of people that have access to AI that is as good as a software development intern is going to go up a lot.
So again, I think it remains to be seen, depending on the trajectory of the technology, whether those centralizing dynamics sort of, I think at the moment they tend to prevail because the
most advanced systems do tend to be more useful and significantly more sophisticated than the less advanced systems.

(33:41):
But if that slows down a little bit, or if something else changes,
then it could be that those decentralizing dynamics of, over time, the costs fall and the accessibility rises, that might end up mattering more.
Should governments try to put their thumb on the scale to promote more decentralization?
I think to some extent, I think there is interest.

(34:04):
I think it's not necessarily the right task for governments to be trying to be competitive at that very most expensive level.
But I think to the extent that governments can do things like in the US, you have the National AI Research Resource, which currently is just in pilot mode, but there's a lot of
interest in kind of scaling it or establishing it more firmly.

(34:25):
And I think there are similar initiatives underway elsewhere, where the idea is to
provide some computing power to researchers or to civil society or to universities or to others, potentially to provide data sets in a way that makes them more accessible.
I think there's people thinking hard about different models for what gets called public AI.

(34:45):
What else might you need?
Might you need expertise?
Might you need training?
What are the different ingredients that you can make available?
I think there is definitely a role for governments to play in making that sort of broader suite of access to AI opportunities more widely available.
But yeah, I would distinguish that from trying to compete with these mega tech companies, which I think is probably not the right use of resources at this point in time, especially

(35:10):
also given that it's not just a matter of resources, but also talent.
And it's going to be very difficult to compete with industry to get the concentration oftalent you need and so on.
So I think not yet on that particular question.
Okay, and final thing is, let's say I'm at a Fortune 500 company that's investing in deploying AI, but doesn't have the resources or the capabilities to develop our own

(35:34):
foundation model.
So we're using an OpenAI or Gemini or something like that.
How should I be thinking about these kinds of questions about regulation and AI safety and the kinds of concerns you've been discussing?
Yeah, I think it depends on the industry and the use case.
And generally, for any given Fortune 500 company, they will tend to know their own industry and their own use case much better than I can.

(35:58):
But I think thinking through what kinds of documentation do you need to be keeping, what kinds of potential regulatory scrutiny could you come under?
Maybe right now, it's not clear that your AI systems are subject to that.
But if it is a rights- or safety-affecting use case, maybe it's worth
anticipating that it could be in the future and starting to document sort of what you're doing and why.

(36:20):
More broadly, I think if you're using a model developed by someone else, it's worth digging as much as you can to get as much information about that model, how it works, how
it was tested, what data it was trained on.
Different companies will give you different levels of access into that information.
And I think it is very relevant and therefore worth looking into.
And then certainly, I mean, you know, a Fortune 500 company will have a sophisticated legal department, so I'm sure they'll be onto this:

(36:44):
looking very, very closely at the terms that are being offered, is there sort of some kind of indemnification or release of claims around, you know, if something goes wrong, who is
at fault?
I've definitely heard stories of some of the providers of these models trying to contractually, you know, push all of the liability onto the B2B customer as opposed to

(37:07):
having it
sit with the model developer.
So just keeping an eye on those kinds of things and being willing to push back or to potentially consider a different provider if you're not satisfied with what you're
getting.
And maybe one last piece would be just to take seriously the possibility that we might see this continued progress over the next five years or so of continuing to have more

(37:32):
sophisticated models.
And so thinking about in your product development or in your usage of the models,
What might it look like?
What might it mean for your use case if the underlying model is getting more sophisticated?
Is that something you can just easily roll up and make use of and squeeze more juice out of?
Or is there a world where that would actually undercut your product, for example?
I think there's been a really common dynamic in AI over the last five or 10 years, that at any given point in time, you can maybe create sort of a specialized model to do a

(38:03):
specialized thing, but then maybe a year or two later,
it could be pretty straightforward to develop a more general model that just is better than your specialized model without trying very hard.
So just being aware of those kinds of dynamics in the field, hopefully clear at this point in the conversation, I think there's a ton of uncertainty about whether that will
continue, what it will look like.

(38:24):
But being aware that that's a possibility and that a lot of experts do expect that sort of continued wave of progress, and thinking about how could you benefit from that as opposed
to being vulnerable to it.
Good, well, that's a perfect way to end: being on the right side of uncertainty.
It's been a fascinating conversation.
We've covered a lot of ground.
Helen, thank you so much for joining us.
Thanks so much for having me.