Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Adam, welcome.
Thanks so much for joining me.
Thanks for having me.
You, like me, have been in tech policy and involved in tech policy debates for a long time.
To what extent is AI policy an extension of the kinds of controversies we've been having for decades on tech policy, or is it something new?
(00:25):
It's both something old and something new, Kevin.
I think we have seen many of the same debates in the field of ICT, information and communications technologies, for many decades about the nature of our speech platforms.
And we're seeing an extension of traditional concerns that have pervaded many types of communications fights in the past.
(00:47):
That being said, AI is something a little bit more important.
Thomas Edison was once asked a question about electricity. A journalist said, you know, Mr.
Edison, tell us about the field of electricity.
And he corrected the journalist, or whoever asked the question, and said, it's not just a field, it's a field of fields.
He said it's many different fields and subcomponents.
(01:09):
Whatever one thinks about Edison's assessment of electricity markets all those years ago, that's absolutely true of artificial intelligence, right?
There's just so many more layers to this.
This is not just ICT now.
Hal Varian had a piece in 2010 where he talked about living in a world now of combinatorial innovation, where many different revolutions are coming
(01:32):
together.
And Carlotta Perez had a paper about this many years ago, and there's a new book by Jamie Metzl called Superconvergence about how many different revolutions and
technological capabilities are coming together, all mostly driven by data science, machine learning, artificial intelligence.
That makes this a bigger debate than the one that you and I have been covering for many years now.
(01:57):
There are a lot more players, a lot more issues.
And it's not really right to just say, tell me what you think about AI policy.
You have to almost say AI and fill in the blank: AI and elections, AI and education, AI and health, AI and driverless cars, drones, military, law enforcement.
Then it begins to be a more sensible debate.
(02:20):
But the problem is people are still talking about it in a more aggregate, macro sense, and that's a very hard governance conversation.
So are there areas of AI and fill in the blank that do call for government action at this point?
Absolutely.
And there actually is a lot more than people realize.
I always try to make efforts to explain to people, we have 439 federal departments in our federal government, 2.2 million workers at them, covering a whole wide swath of ground in
(02:53):
many different fields and sectors.
And AI is already regulated, again, in this field of fields, in its separate fields or sectors, in very different ways.
So I've spent many years focusing on artificial intelligence in the field of driving and flying and driverless cars and drones, and the way that NHTSA looks at driverless cars or
(03:16):
DMVs at the state level.
And then at the federal level, how the FAA governs drones.
These are autonomous systems.
They're driven by machine learning, by computational technologies.
They're very different, and they're very different from something that happens in other fields, like another regulated one, healthcare.
I just released a
paper last year with Arizona State University Law School looking at how the FDA is regulating AI/ML-enabled medical devices.
(03:41):
The FDA, many people don't realize this, had their first major study on computerized medicine in 1981.
So they've been looking at it a long time.
And of course, we could go into the fields of FinTech and many others, and of course, law enforcement and military, and find examples of how governments and government agencies
have been looking at or even regulating
(04:03):
very specific concrete examples of AI.
But that's very, very different.
And it's an important distinction between that and the idea of sort of regulating AI at a general-purpose level, as a general-purpose technology.
That's a very different discussion.
Now, we've not had that in America, just as we've not had that for computation in general or the internet in general.
(04:24):
We've really been more sectoral about it with computing and the internet as well.
And this is, I think, the grand debate of our time.
Like how do you go about AI governance?
Do you go about it as AI in a macro, broad-based way?
Or do you take a more targeted, risk-based, sectorally focused, more sort of bottom-up approach to it?
(04:47):
And a lot of people don't like doing the latter because it's messier, it's more technical.
But I think, A, we're already doing it, and B, it's probably more pragmatic to come at AI governance in that way.
Isn't there a tension, though, between that sectoral approach and the first thing you said, which is that there's this explosion of combinatorial innovation and AI is driving the
(05:09):
field of fields across everything?
Absolutely, that's a brilliant point, because the question there is: does technological convergence drive regulatory convergence?
I often make this point in my writing about driverless car technology, which is that on one hand, a driverless car is something very new and different.
It's a computer on wheels and therefore it should be regulated, some people say.
Other people say, no, no, no, it's still got wheels and it goes down a road.
(05:31):
It's a car and cars are regulated at many different layers, in fact, federal, state,municipal.
And so I think this is going to be the challenge for us when we talk about governance going forward, which is, you know, when you have old sectors and new colliding,
what gives?
I've used the phrase in my past work of technologies or sectors that are, quote unquote, born free versus born into regulatory captivity.
(05:57):
And, you know, if you're into 3D printing or virtual reality or something like that, you're kind of born free and you don't have an overarching law or regulatory edifice
around you.
But if you're a medical device maker or a car maker, you know, a drone maker, you are probably born into regulatory captivity.
You're going to be pigeonholed into yesterday's analog-era governance system, whether you like it or not.
(06:20):
I'd like to think that those old systems can change and adapt to the new technologicalrealities.
But the so-called pacing problem haunts us at every juncture.
The idea that technology moves very fast, sometimes even exponentially.
And yet policy tends to evolve incrementally, sometimes at best, especially in our legislative arena, which, you know, Kevin, has been very, very dysfunctional when it comes
(06:45):
to technology policy.
I look at how many years we've been debating a national baseline privacy bill.
Right.
And look at how long it took to get... I mean, when was the last time, you know, in our lifetime, that we could name a major piece of technology policy legislation? Would
we go back to the Telecom Act, or...
You and I were around for the Cable Act and the Telecom Act, and gosh, there was the Internet Tax Freedom Act.
(07:06):
But there's not been a lot of comprehensive technology policy since then.
There's been incremental things, right?
So this is the fundamental challenge: how governments confront the pacing problem.
Okay, so if comprehensive regulation is likely off the table or really challenging in the US, and you find it problematic to do it across the board, how should policymakers think
(07:31):
about this?
What are the mechanisms to get even those sectoral regulators to do a better job of addressing the new technology?
Yeah, so here's where we have to divide our governance conversation into layers, jurisdictional layers.
We could talk about federal, state, even local, and then we could talk about international as well.
And yeah, absolutely.
So at the federal level, I think for better or for worse, Congress has largely checked out of the situation for the reasons I just identified.
(07:58):
It's become almost a non-actor in the field of technology policy.
I mean, there's a lot of huffing and puffing.
There's a lot of jawboning too and things like that.
In reality, we don't get a lot of traction when we try to legislate at the federal level.
Instead, it falls to the executive branch and different executive branch agencies to probably fill in a lot of these gaps.
(08:20):
And for AI, a huge amount of this is being done in the form of executive orders and major statements from the White House, and then filling in of the gaps by individual regulatory
agencies or
technology-oriented oversight bodies, most specifically two within the Department of Commerce in the form of NTIA and NIST.
And the centrality of NTIA and NIST, especially NIST, to the AI governance story cannot be overstated.
(08:45):
Every day, its importance grows in terms of being a governance coordinator, especially now that the Biden executive order passed along to NIST a new AI Safety Institute to stand up
and to move forward with, even though, I should just point out,
Congress didn't authorize that.
There was no AI act passed that said we should have an AI Safety Institute. And NIST has no regulatory authority.
(09:08):
So isn't that interesting from the perspective of a political scientist who covers governance and worries about, who's a stickler for, the black letter of the law and APA
processes and the FOIA-ability of things.
We have a very different, emergent governance coming about through NIST that's far more informal.
I've written articles about so-called soft law governance
(09:28):
of emerging technology becoming the dominant form of technological governance, especially for things like AI and machine learning.
So that's where the high level governance of AI is coming in.
NIST already has things like its AI RMF, the AI Risk Management Framework; it also has one for privacy and one for cybersecurity.
And then NTIA for many years has convened so-called multi-stakeholder processes and worked to effectuate
(09:54):
various types of best practices or industry guidances or frameworks.
Those things continue to trickle out and they accelerated under the Obama administration.
But then Trump continued them as well.
Even though they were kind of reluctant to, they just basically picked up and said, we'll do more of that kind of thing.
For example, on driverless cars under Trump, NHTSA continued to come out with driverless car frameworks that were versioned like software that the Obama administration had
(10:21):
started.
The Obama administration had
driverless car guidance 1.0.
And then Trump had guidance 2.0, 3.0, 4.0.
It's really interesting how they're even versioning it like software, right?
But these things were not formal government enactments.
In one case, it was just a PowerPoint presentation.
They weren't even formal publications.
That's a really interesting governance thing that we're monitoring now for AI.
(10:45):
And that's the framework we're using.
But at the same time, in sectors that have been traditionally, historically heavily regulated,
they are moving forward with very specific, targeted rules for drones, driverless cars, fintech, medical devices, and so on and so forth, and stretching their authority as far as
they can.
And we'll see how far that goes.
(11:05):
And especially in an age of post-Chevron deference and major questions, there are open-ended questions about how far that governance regime could be stretched in the executive branch
in the absence of congressional action.
And all the wishful thinking in the world can't make Congress all of a sudden just act and do something.
We've got to get around those politics that prevent that.
So that's the federal level picture.
(11:27):
And if I could briefly jump to state, I mean, go ahead if you want to ask it.
OK, sure.
We'll get to the states, absolutely. But first, you talked about the soft law and some of those initiatives, like what NIST is doing, the Commerce Department.
What are the pros and cons of the US at the federal level addressing many AI issues predominantly through that modality?
Yeah, absolutely.
(11:48):
That's a great question.
So it's funny because the positive is also a negative in a sense.
The positive is that this is in some ways exactly the more agile, iterative, flexible kind of governance framework that is in line with modern technological realities.
It's not static, one-size-fits-all, one-time governance; it's not set-it-and-forget-it, as I like to call it.
(12:09):
That's not what's happening.
I mean, NIST and NTIA have been much more creative than that in trying to adapt their documents, again, going back to the idea of versioning these things over time like software
themselves.
And the FDA has been doing this in a form of iterative governance for AI and medical devices and so on.
In one sense, that's great.
On the other hand, it's really a little bit messy, to the point of asking, is this really in line with the rule of law and accountability and, like,
(12:38):
procedures that we depend upon in a constitutional republic, of saying, you should go by the book.
You should have an NPRM that's in the Federal Register and that clearly cites statutory authorization.
So I got to be honest with you, Kevin.
I'm a bit conflicted, because I don't want set-it-and-forget-it, one-time governance if it means that it holds back important innovation or doesn't address real-time risks.
(13:04):
At the same time,
There is a little bit of a make-it-up-as-you-go-along element to AI governance today at the federal level, for better or for worse.
And I guess part of the question is how much faith you have in certain agencies or officials to sort of get that right.
And I think that remains to be seen.
One important add-on here: even though Congress isn't acting, they are proposing laws.
(13:25):
There are about 115 bills pending right now.
And several of them in the US Senate would actually delegate more authority to NIST
and NTIA, but specifically NIST, to try to exercise more of this governance role in this more flexible capacity.
And a couple of those bills even start to give NIST some enforcement teeth and say, you can move beyond just the realm of best practices into maybe best practices plus, plus
(13:53):
a little bit of, like, fining authority if companies don't go along with these guidelines that they helped formulate.
Again, that has not passed.
But that's an interesting development, because it's once again Congress saying, well, we're not going to figure it out.
We're going to kick it over to the agencies, including agencies that have not traditionally had a regulatory role over the digital economy, but now might because of what they've done
(14:16):
with their multi-stakeholder processes, specifically in Commerce.
So it's to be determined where that goes from here.
And of course, the next election could affect this as well.
Absolutely.
No, it's fascinating, because we are seeing this kind of exploration and experimentation with different regulatory models here, which at some level we should want, because I agree
(14:38):
with you, this is a new technology, and certainly we know there are lots of problems with the existing regulatory structures we have.
You and I might not 100% agree on what the problems are, but I think we all agree that we need to try these new models.
Yeah, Phil Weiser, the attorney general of Colorado, had a piece when he was still the law school dean in Colorado, called Entrepreneurial Administration.
(15:00):
And I love that phrase, and I think that's exactly right.
We want entrepreneurialism in the marketplace.
We want entrepreneurialism in government.
We want to see creativity, experimentation, trial and error, right?
At the same time, we want accountability, and we want transparency.
And sometimes, our goals come into conflict.
You know, we have different priorities and that's really, really hard.
(15:20):
Something's going to have to give, and I think this remains to be seen, and it's going to play out in a very messy way over the next couple of years.
Okay, so speaking of messy ways, let's talk about the states, and then we'll go and talk about the European Union.
What do you see happening at the state level?
Right.
So in the absence of, in the sort of vacuum that's been left by Congress or congressional inaction, a lot of states have been moving forward on digital technology policy over the
(15:44):
past decade in a major way.
Of course, they've done this with things like privacy and cybersecurity and other issues, but in AI, they're really moving fast.
The last count I've seen from a tracking organization called multistate.ai is that as of today, there are 678
AI-related bills pending in the United States.
That's a big number.
(16:07):
That's an almost unprecedented level of legislative activity for any technology in my lifetime.
And I've been doing this for 33 years.
Now, not all of them are passing, but a lot are.
Just two weeks ago, Illinois passed four different AI measures alone in one week.
And just this week, just yesterday, before we taped this podcast, the California legislature passed a major bill
(16:30):
on AI safety that's headed to the governor's desk.
So the states are moving.
And so the question now is what happens when you have sort of a patchwork of patchworks develop, which is really unprecedented?
Let's just jump back to our youthful days, Kevin. When we were doing these issues in the past for the internet, we saw Congress actually, and the Clinton and Gore administration
(16:52):
in particular, take a more preemptive role in calling for a national framework for the internet and e-commerce.
And you don't see that kind of appetite anymore at the federal level.
There could be a number of reasons for that.
Maybe they're just tired of dealing with these issues.
Maybe they just all hate big tech so much.
Who knows?
But of all those bills that are pending at the federal level, as far as I can tell, absolutely none of them have any preemptive language whatsoever.
(17:16):
And meanwhile, state leaders, like the sponsor of that California bill I mentioned, are specifically citing congressional inaction as their rationale for moving and saying, if
Congress won't do this, we will.
Well, then you have to add in the layers problem that I identified earlier.
You've got the macro-level, big-picture regulation of AI, qua AI, like California is trying to do with its frontier model regulation.
(17:42):
But then you have all the sector rules as well, and not just at the state level.
I mentioned the 678 bills at the state level.
That does not count municipal enactments.
Potentially the most important AI bill passed so far in the United States, I would argue, was New York City's AI bill on algorithmic hiring
applications.
I mean, that's New York City.
(18:03):
I mean, there's a lot of companies that do business in New York City.
Well, whatever one thinks about New York City's enactment, and there have been many other cities that have followed suit, and even some counties, Miami-Dade County, moved a bill
on AI policy.
I mean, one has to ask at some level, I don't even know what the number is anymore.
I've heard estimates that there are, like, 90,000 different types of governmental jurisdictions in the United States.
I'll tell you, who knows what it is.
(18:23):
The bottom line is, I think we could all agree, if you had thousands of AI policies at some level,
just the confusing compliance cost of that creates some real challenges for the future of that marketplace.
So that's where things stand at the state level.
Why, though, does this not play out the way we saw with privacy or network neutrality, where Congress didn't act, California did, California is bigger than most countries, and
(18:50):
essentially became a kind of de facto standard?
It doesn't necessarily have to be California, but in practice it converges.
And there's lots of different state privacy laws that have been passed, but there's a lot of overlap between them.
So it's not that companies have to do 50 different state privacy policies.
Yeah, I think that's exactly right.
I mean, this is how it's likely to play out in the short term for better or for worse.
(19:12):
And it will be California that probably drives most of it, although it was Colorado that had the first major broad-based AI law, which Governor Polis signed, and which, by the way,
included a very interesting signing statement that read almost more like a veto statement, where he was almost begging Congress: you need to do something, folks.
I used to be one of you.
I used to be a congressman.
You need to act here.
And yet all of his pleading is not going to necessarily get anywhere.
(19:36):
And so I think we're going to continue to see more and more state laws pass.
And I don't know how they'll be harmonized if at all.
That patchwork could be very bad.
I think a lot of people in industry and people who care about digital technology innovation like myself, we're trying to do our best to come up with frameworks that are
better for the states to move, as we've done in the world of privacy, while we wait to see if Congress can enact a baseline privacy bill, which would probably be preferable.
(20:01):
But if you don't get it, you don't get it.
And you're going to have to go statehouse to statehouse and figure this out
and make the best of it.
And I think that's the world we're going to live in next year again.
And I think we'll see just as many rules.
And it raises the question: will AI innovators be sort of squeezed between the California effect and the Brussels effect, as we now turn to the international side?
(20:21):
I think you wanted to ask about.
I think that's a real possibility.
Having to contend with, like, what California or other leading states are doing, while also contending, if you're a larger player, with what Europe wants.
And it's very interesting that California has modeled some of its privacy legislation after some things in the European Union's GDPR.
And now the EU has an office in San Francisco to consult with California legislators and innovators about how to comply with EU policy.
(20:50):
Scott Wiener, who is the sponsor of the California bill that moved this week, was at that welcoming ceremony for the EU office in San Francisco.
So that suggests there's almost a new axis of these two forces, between international and state,
with Congress just being asleep at the wheel in the middle.
For better or for worse, I think that's the state of AI governance in the United States today, and probably for the foreseeable future.
(21:14):
So what is your take on the European AI Act?
You know, it's a big topic, but how do you see it?
Well, on the big plus side, the EU AI Act is a risk-based kind of framework that attempts to say, let's take a hard look at different categories of risk.
That's smart.
I think that's the way we need to go about AI policy, generally speaking: risk-based, targeted, focused, and again, preferably more sectoral.
(21:39):
I think on the downside, the Europeans have a tendency to say, well, everything is pretty risky.
You know, here's a high -risk bucket.
Let's put everything in it.
And that can't be the way it is, because then there's nothing left for you to innovate at all.
There needs to be a better, more targeted type of segmentation of high risk, medium risk, low risk, whatever the boundaries are.
(22:00):
But the bottom line is that this is going to be something all-encompassing, because the Europeans are now seeking to essentially make regulation their leading export.
I think there's a recognition in the EU that they've lost
a lot of ground in the sort of digital technology world.
It's hard to name a lot of digital technology companies that are headquartered in Europe.
(22:23):
They're not happy about that.
I could make an argument about the relationship between the regulation and that reality, but I won't digress.
The bottom line is it is what it is.
And so the Europeans are very adamant now about like, everybody needs to follow our laws.
But the problem is it's not just the EU.
There's plenty of other countries that are looking to act on this front now and move their own types of frameworks.
(22:43):
And, you know, meanwhile, you've got a whole different governance regime emerging in China, and a juggernaut in China in terms of how they could be a leading competitor to the United
States and Europe in the world of algorithmic and computational technology.
So I'm very concerned about this.
I think this is a problem.
I think one of the things that used to be nonpartisan, going back to the Clinton-Gore days and the Obama days, was the idea of digital trade and global free speech
(23:08):
being a unifying theme that brought strange bedfellows together.
And there was an effort to sort of uphold American values and defend American interests.
But now we're stuck in these catfights and culture wars about these things.
Like, you know, just two weeks ago, you had a top EU official coming after Elon Musk with an open letter on Twitter about him hosting Donald Trump on the X platform for a debate.
(23:32):
I'm like, wow, that is really something, when EU officials are coming after American citizens or technology platforms, even one we might not like in the form of Musk's Twitter,
and saying, you know,
you should follow EU law for your election communications.
I'm sorry, that's hugely problematic.
That's hugely problematic.
And in the old days, I think that's where the Obama administration or Clinton would have stood up and said, OK, you've crossed a line here.
(23:55):
That's not your right.
But the idea of, like, harmonized, open, free digital trade and borders in the cyber world, you know, this sounds archaic now.
It almost sounds like, my gosh, you're just preaching the gospel of John Perry Barlow.
But it used to be more of a unifying theme.
There were people who really believed in that vision of a more unified global speech and commerce world.
(24:16):
And it's fading by the day, unfortunately.
I'm surprised you're that negative about it, because again, going back to where we started, if this is this incredibly powerful revolutionary technology around the world, then
clearly companies around the world are going to want to innovate and be successful in this area.
(24:36):
And governments around the world are going to want to have their own markets be successful in this area.
So why is the trajectory not that we have this
set of different approaches that get proposed around the world,
and then essentially we hash it out in the marketplace and in other kinds of forums?
(24:57):
Well, hopefully. You make it sound pretty easy, for that hashing out to take place.
We have a conflict of first principles on a lot of key matters, right?
Let's just start with the First Amendment, right?
And we could say it's American exceptionalism, whatever else.
But clearly, the Europeans have their own priorities with privacy and data security.
And the Chinese have a completely different set of priorities.
(25:22):
We could go into the global south and look at what's happening
today in Brazil, with them trying to ban Starlink down there because of a conflict about speech codes.
These things are getting harder and harder to work out as these technologies become more pervasive in global markets.
And it certainly could be the case that some countries do try to make a different, more positive play and attract more investment and innovators.
(25:46):
We live in a world of global innovation arbitrage, and people are able to
move around more easily than ever.
It's not like you have to pick up a factory and move a smokestack into a foreign country.
You're just moving code and coders.
And this is kind of the story of what happened to Europe and how they lost a lot of their best innovators and investors to the United States in the 90s and 2000s.
I hope that doesn't happen to us, that we bleed talent and investors to China or wherever else.
(26:11):
I think it's more realistic that we just see this conflicting patchwork
of policies. Again, we're already seeing it in the states.
I think we see it more internationally.
Take a look at what's happening with frontier model makers for AI today.
They are signing on to agreements across the globe in the name of AI safety.
And one could say, well, the more the better.
(26:31):
It's great.
Let all these safety agreements and accords and rules pile up.
But at some point,
you're going to have to have a spreadsheet to keep track of what we agreed to when we were at the summit in Japan versus the UK one versus the US one in California.
That's not exactly the model of what we thought we were getting into with regards to a global internet, a freer internet world.
(26:56):
It's one of a patchwork of many, many different conflicting policies, for better or for worse.
Let me just play devil's advocate and push you on that a little bit.
I mean, we have spreadsheets, and we've also got much newer technology like large language models and things that can do complex analysis.
What you're saying, you know, without dismissing it, sounds sort of like, I remember in the nineties, when the Internet Tax Freedom Act came about, Amazon would come in and say,
(27:23):
look, there are a thousand taxing jurisdictions in the United States, and it's going to be impossible for us, when you make an order on Amazon anywhere in the U.S.,
to figure out what tax to collect.
And nowadays, like, every time I make an order on Amazon, they're collecting sales tax or not, or whatever else.
It's just a database.
With today's technology, that's not a fundamentally impossible problem, and it didn't kill Amazon.
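To make the "it's just a database" point concrete, here is a minimal sketch, assuming a toy jurisdiction-to-rate lookup table; the rates, names, and helper function are illustrative placeholders, not a real tax engine:

```python
# Illustrative sketch of a jurisdiction -> combined sales tax rate table.
# Real tax engines track thousands of overlapping jurisdictions, exemptions,
# and product categories; the rates below are placeholders, not real data.
SALES_TAX_RATES = {
    ("CA", "San Francisco"): 0.08625,
    ("NY", "New York City"): 0.08875,
    ("OR", "Portland"): 0.0,  # assuming no sales tax in this jurisdiction
}

def tax_due(state: str, city: str, subtotal: float) -> float:
    """Look up the jurisdiction's combined rate and apply it to the order."""
    rate = SALES_TAX_RATES.get((state, city), 0.0)
    return round(subtotal * rate, 2)

print(tax_due("NY", "New York City", 100.00))  # tax on a $100 order at the NYC rate
```

The design point in the conversation is that once the rules reduce to numeric values keyed by jurisdiction, compliance becomes a lookup; the harder question raised next is what happens when the rules are not numeric at all.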
(27:44):
So why is just a little bit more burden, especially if we're talking about the big frontier model developers, who are pretty substantial companies, why is that such a
tragedy?
Well, you know, I'm not going to shed too many tears for Amazon.
They're a big company.
And the question is, obviously, like, what about everybody else?
You know, I mean, compliance costs are a real thing, and they absolutely do drive competitive advantage in some continents and countries.
(28:08):
And, you know, America has been better than most about trying to keep those in check and make sure that we have a vibrant marketplace where companies and innovators of all different
sizes can thrive and start the next great thing.
I think there is a real danger, you know, even with the best
spreadsheets and software in the world, that you think you can
figure it all out, whatever the standards are for every jurisdiction in the world.
(28:30):
But I don't think it's that easy.
I mean, first of all, your example involves Amazon and sales taxes, which are a numeric value, right?
If you translate this into algorithmic fairness and safety mandates, these are far more amorphous and fuzzy values.
You and I came about in an age where we cut our teeth on FCC quote unquote
(28:51):
public interest regulation, right?
And then there was the famous line about obscenity, we know it when we see it, and things like that.
That's speech, but even outside of speech, when you get to things like the alignment debate in AI, we all want AI to be aligned with human values, but you need to define
exactly what that means for purposes of concrete deliverables to a regulatory body.
(29:13):
And I think if you have dozens or hundreds of different standards for AI alignment, AI safety, whatever else, okay, sure, maybe Microsoft,
Anthropic, OpenAI, Google, and Amazon, among others, can thrive in that environment.
But maybe, as they have with GDPR, they do at the expense of everybody else.
(29:35):
At the expense of competition, innovation, new startups.
Some of the small European AI makers had started pushing back during the time of the EU AI Act formulation, especially the open-source providers, pointing this out, saying, look, we
just
probably can't comply with this, because our compliance shop is basically the founder's wife, you know, who happens to know how to run some accounting software or something.
(29:58):
We don't have an entire team.
Amazon has an entire division devoted to legal and tax compliance, right?
So these burdens do matter.
And this isn't just some rabid free market libertarian thing.
This is like something that everybody, I think, acknowledges, whether it's for speech or commerce.
We have to make sure that we provide an environment that's conducive for entrepreneurialism of all sorts to flourish.
(30:21):
And it doesn't mean we have a complete absence of rules and regulations, but it does mean we need a little bit more standardization and harmonization.
Again, the Clinton-Gore framework was called the Framework for Global Electronic Commerce.
We didn't try to regulate at the global level.
We pointed out, this is a global medium, and we at least ought to have some greater consistency there.
And there was an
effort by that administration, and then the Bush and then the Obama administrations, to, like, follow through on that vision.
(30:46):
I don't think that's present anymore.
I think there's more of a willingness not only just to regulate more, but to sort of, like, let the chips fall where they may and not worry as much about these consequences,
unfortunately, because, I think, of an exhaustion from just, like, so much going on globally on this front.
Yeah, I was on the working group that created that framework for global commerce.
(31:09):
So I'm definitely familiar with that set of issues and that set of developments.
But let me ask you, you make a really good point that some of these questions don't have precise answers.
So what is the right approach to alignment of frontier models?
What are the right things to do about fairness?
Even if you can quantify, there are different ways to assess fairness and so forth.
(31:33):
How then do those issues get resolved?
Because one alternative is to say, just let the companies do their best and we should just trust them.
That runs into the problem that, of course, these companies, whatever they might say about, we care about safety, their incentives are not
always aligned.
That's why we have regulation.
So what would be the best path forward?
(31:53):
Well, the best path forward is to circle back to where we started, which is a discussion about focus.
I think you can get at something akin to AI alignment in the narrow, but you really almost can't in the broad.
And what I mean by that is, if we talk about AI alignment with regards to, like, does this AI/ML-enabled medical device harm you or not,
We have a concrete deliverable there.
(32:14):
The Food and Drug Administration exists to enforce laws on that front.
And that's where we can
get very concrete about what we mean by AI safety.
My algorithmically enabled device should not harm me or my health.
The same way we can do that for finance, the same way we can do that for a driverless car, for whatever.
When you get back to the broader picture, which is what I think a lot of people who write about AI alignment in the abstract want, they want that high level, I believe, of, like,
(32:42):
well, we're moving towards a world of AGI, and so the underlying computational systems and models have to be
fair, non-discriminatory, unbiased, and safe.
And like, who could disagree with that?
But concretely, what that means in practice is a totally different ballgame.
And they often just fall way short of specific deliverables about this.
(33:04):
We can take broad steps to get us there.
I mean, we all preach the same gospel of having humans in the loop and trying to make sure we're, you know, baking in, quote unquote, ethics by design.
And some of those ethics and best practices are easy.
But a lot of the others are hard.
And we see this in the debate about free speech, right?
I mean, look at how we fight like cats and dogs about what we even mean by misinformation or disinformation that's algorithmically empowered.
(33:26):
We've gone to the Supreme Court about this now multiple times.
And Congress has entire standing committees now devoted to, like, you know, weaponization of the federal government and, like, disinformation and blah, blah.
We're not getting anywhere there.
AI has entered the age of the culture wars now.
And like we're fighting over those broad abstract things.
The only way we get anywhere in the world of AI governance and regulation
(33:49):
is to get far more concrete, to drill down into very specific types of harms in specific sectors, in a risk-based way, with cost-benefit analysis.
It's regulating on the output side, like what comes out of a model and how it concretely affects us in a specific human context, not on the input side, the theoretical, like,
(34:10):
I hear, I understand what critics say when they say, what about the black box?
What's going on there?
I'm like, you know what, I don't know.
I actually don't know.
I don't know everything about what happens in the black box.
I don't know if we ever can.
People who make the black boxes don't understand everything that's happening inside of them.
But what we do understand is what happens on the output side of it, when it actually gets to interact with us in the real world.
And if it causes harm, then we pull it off the market, or we fine people, or we jail them, or whatever.
(34:34):
We've got fines and penalties, and maybe we need more.
We can talk about that.
But that's the only way, concretely, we're going to get anywhere in the world of AI governance.
Okay, last question.
You painted a picture of this increasing messiness and confusion.
If we do ever pull out of that, if we go to a different trajectory that's more like what you're describing, what's the most likely scenario?
(34:58):
Not necessarily that you think it's going to happen.
What's the most likely scenario that gets the US or the world onto a different path?
Well, if we had had this conversation a year ago, Kevin, I would have said there's still a possibility that the US could adopt a new broad-based algorithmic regulatory body
and licensing scheme, or maybe a new liability scheme.
I think the vibes have shifted over the last year.
(35:21):
And those proposals that were being put out there by people like Senators Blumenthal and Hawley and many academics, they're not getting as much traction anymore.
I think there's a recognition that that's probably not the right place to start.
It doesn't mean it's going to go away entirely, but this idea of a
broad-based new algorithmic AI agency, I think, for now is dead.
I think we could end up getting legislation in Congress next session that basically does what I mentioned before of signing off on executive branch authority to go out and do more
(35:50):
to try to guarantee safety by design through existing processes, both in the narrow through individual agencies and then in the broad through the NIST and NTIA structure that's been
created,
potentially,
with maybe some Federal Trade Commission enforcement on the back end of that.
That is a discussion for another day, and it's a very heated one.
But the role of the FTC and other agencies, enforcement agencies, consumer protection agencies, as well as civil rights agencies, to enter this is, I think, the wild card question
(36:20):
for 2025 and beyond.
How do those bodies at the federal and state level enter the picture and try to enforce
general laws governing consumer protection or civil rights in the context of algorithmic fairness, algorithmic transparency, whatever you want to call it?
And this animated a lot of the Biden administration's AI Bill of Rights.
(36:42):
And there was a suggestion in some of these hearings recently that, like, there should be more of that.
It could be we do get legislation that does that.
I wouldn't rule that out, even if I probably would rule out now that we're going to have a federal AI commission or a new licensing regime for all things.
All right, we'll see how things develop.
Adam, thank you so much for a fascinating, very informative discussion.
(37:05):
These are incredibly important issues and I appreciate your time.
Thanks, Kevin, I appreciate it.