
April 17, 2025 40 mins

Kevin Werbach interviews Lauren Wagner, a builder and advocate for market-driven approaches to AI governance. Lauren shares insights from her experiences at Google and Meta, emphasizing the critical intersection of technology, policy, and trust-building. She describes the private AI governance model and the private-sector incentives and transparency measures, such as enhanced model cards, that can guide responsible AI development without heavy-handed regulation. Lauren also explores ongoing challenges around liability, insurance, and government involvement, highlighting the potential of public procurement policies to set influential standards. Reflecting on California's SB 1047 AI bill, she discusses its drawbacks and praises the inclusive debate it sparked. Lauren concludes by promoting productive collaborations between private enterprises and governments, stressing the importance of transparent, accountable, and pragmatic AI governance approaches.

Lauren Wagner is a researcher, operator and investor creating new markets for trustworthy technology. She is currently a Term Member at the Council on Foreign Relations, a Technical & AI Policy Advisor to the Data & Trust Alliance, and an angel investor in startups with a trust & safety edge, particularly AI-driven solutions for regulated markets. She has been a Senior Advisor to Responsible Innovation Labs, an early-stage investor at Link Ventures, and held senior product and marketing roles at Meta and Google. 


AI Governance Through Markets (February 2025)

How Tech Created the Online Fact-Checking Industry (March 2025)

Responsible Innovation Labs

Data & Trust Alliance

 


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Lauren, welcome.
Great to have you on the podcast.
Thank you for having me.
Tell us a little bit about your journey.
You've done a number of different things in your career.
How did you make your way into AI governance?
Sure, so I'm a computational social scientist by training. Back in 2010,
I was looking at online social networks, so at the time mostly Facebook and Twitter, and figuring out if we could learn anything from the network structure about how it influenced

(00:30):
individuals.
So since then, 2010 and before that, I've always been thinking about how you can use technology for good and to make people feel more supported, more productive,
things like that.
So after this research, I ended up moving into the private sector.
So digital health, Google working on the Google Assistant in the early days, and then Meta, building products to combat misinformation, also some projects on privacy-protected data

(00:59):
sharing.
So the through line between all of those was that there was a governance element to all of these different projects, starting with what I was doing in the academy.
And then that really put a finer point on it when I was working at Meta, combating misinformation, online harms, everything was related.

(01:21):
There was an intersection between what we were building on the tech side, the policy side, how this impacted users.
So some of it really was a play in, I don't love this word, but self-governance.
And how do we do the right thing or how do we take steps to build trust with users outside

(01:41):
of regulation and the law explicitly mandating it.
So after leaving Meta, I saw the power of these levers.
I mean, I was in this big company and I thought, is there any way to do this kind of work where you're compelling companies to build more trustworthy systems outside of regulation
and outside of the big platforms?

(02:01):
So over the past two or so years, I've been working on various AI governance projects along these same lines with startups, VCs, Fortune 100 companies,
and also some direct policy advocacy as well.
What was your experience like at Meta and Google in terms of the commitment to addressing these risks and harms of AI and how much of a challenge it was to actually make progress

(02:25):
inside those organizations?
Yeah.
So at Google, I was very early on the Google Assistant.
So this was 2017.
You had the Assistant versus Alexa wars, and this was pre-transformer.
So we weren't really thinking about these existential risks in the confines of the product, because they didn't really exist.
We were just, I was helping to build the developer ecosystem.

(02:48):
So we had some guard rails there, but it was, it was a zero to one product.
So we were just trying to get to
the first phase, but obviously I was thinking a bit more about human-computer interaction and what this would look like moving forward for users, what I called ambient AI.
So as AI was kind of infused in everything and every interaction and every experience, what would be the knock-on effects there?

(03:15):
So it was kind of a fun thought experiment at the time in terms of the practicality of the products that we were working with.
At Meta, it was a bit different.
AI was already infused in all of the products I was working on, whether it was news and ranking, and then combating online harms.

(03:37):
So these fact-checking products, we used AI in all of those.
So that kind of governance element was part of the day-to-day work.
What are the guardrails for these products?
How are we developing them?
How are we putting them in production?
And then monitoring what happened when things went live.

(04:02):
So we worked with a lot of different external organizations, whether it's civil society, academia, policymakers, think tanks, et cetera, to help us with all of these different
activities.
And then you mentioned some of the things that you've done subsequently, but part of your work has been working with investors and VCs.

(04:24):
So what is their perspective or perception, do you think, around...
Well, it obviously is investor specific.
The first project I worked on was with Responsible Innovation Labs, which is a nonprofit that was started by the founder of General Catalyst.

(04:47):
And so it's funny that these days, responsible is a weighted term; we could replace responsible with trustworthy, whatever it is.
Having worked on AI in regulated or self-regulated industries,
I saw the importance of thinking about governance very early on and how developers, engineers, founders, they're making decisions every single day about what the product is

(05:12):
and what it isn't.
And so you could say it's just product design and development and like day-to-day engineering.
But if you really look at it, it's essentially you're making governance decisions about the product.
And so the idea there was, can we...
take some of the resources that have been put forward.
I mean, this was back in 2023, which now seems like a decade ago, even though it wasn't.

(05:38):
And so if you're building in a regulated space or a space where you really have to build trust with customers, users, you know, it's AI, maybe it's higher risk, whatever it is, is
there any way that we can help guide these companies to be more thoughtful, maybe have some more structured thinking around it,
have some more documentation?

(05:59):
And so I think that that yields more sustainable, durable, successful companies, for specific kinds of companies.
And I've seen that firsthand.
And then also, when speaking with later stage investors, what some of them are concerned about is there are a lot of unknowns with AI right now.
And so a growth stage investor, an investor that would take a company public later on, they wanna make sure that governance processes

(06:25):
are in place early so that you don't have to retrofit as they come on board and they're looking at the company and looking to invest and looking to grow it and help
accelerate.
They're like, we want to make sure that you're kind of set up to succeed.
So we're not having to change all of these things from a governance perspective as we move in.
And because AI at this point is really a paradigm shift, there is a need to pull more resources and people and kind of like thought behind solving that.

(06:56):
So the main reason I was interested in speaking with you is you've been promoting the idea of market-driven or private AI governance.
And it's somewhat implicit in some of the things you've already talked about, but can you just describe what does that mean to you?
What is that concept?
Sure.
So for me, what market-driven AI governance is, outside of laws being passed or a government or some standards body saying you have to do this or you have to do that, is

(07:25):
there any way to align incentives in the private sector to compel companies to build more trustworthy AI systems?
And so I did this, as I mentioned, from various different perspectives.
It was kind of a hypothesis that I had because I saw it within the big companies.
I saw it with some of the early stage startups that I invested in, what led to companies that really were successful, and kind of the decisions they made early on.

(07:49):
So I ended up working with, as I mentioned, investors, so venture capitalists, startups themselves, and then Fortune 100 companies.
And so they, as the customers of these products, are saying, how do we evaluate what's coming through the door, whether it's a big company or a startup.
And so I thought, okay, if I can play in that space, that's a really big lever.

(08:14):
So if we develop criteria that makes sense for both the enterprise and the startups, you can really have kind of an ecosystem-wide push to move companies in, I don't want to say a different direction, but one where governance is considered earlier on.

(08:35):
What are the incentive mismatches?
Presumably you're talking about something more than just that for-profit companies don't want to be held back in deploying new products by regulation or by limits.
Yeah, I think that's a very nice, like it's a neat narrative,
to say they're moving too fast.

(08:55):
They don't want to do this.
But the fact of the matter is these teams are so small.
I've worked at early stage startups.
I work with startups now.
They don't necessarily think about it.
It's like, you don't know what you don't know.
And so what I was trying to do here is really understand what the pain points were for these enterprise customers.
And so it turns out that when they're evaluating third party AI systems, a lot of the checklists that they have are very risk oriented.

(09:24):
And so it does come out of this compliance space. Even though there are some laws on the books for AI, they're really adapting from other checklists they have.
And it doesn't help them understand, is this going to generate value?
And is this going to present risks to our organization that we maybe didn't anticipate or could mitigate,
if only we knew and could kind of put the pieces together and also bake this into a contract?

(09:49):
So the liability, I know you want to talk about that, but that's a big question.
So these companies are looking at startups, they're looking at big AI companies, there are a lot of unknowns, and they're saying, okay, if this goes wrong, what company is less
risky for us to work with?
And it ends up being a big company.
They think, all right, even if it's not worked out who's at fault for this.

(10:12):
At least we can sue someone, or at least there's a company with a big pile of money and we can kind of figure it out or sort it out.
And so I think like one of my personal ethos is that to mitigate the risk of technology, we need more people with eyes on the problem who can build and potentially build new

(10:36):
technological solutions.
So when I see startups that are potentially stuck,
that have a great technology, but potentially they're stuck in procurement and they can't get those design partnerships.
And they're wondering why an enterprise isn't moving forward.
And I know it could be just because of this liability issue.
They don't quite trust it and they don't know how to evaluate it, but they don't want to say these things.
And they're just going to default to going with a bigger company.

(10:59):
I see that presenting negative effects for the wider ecosystem, outside of the fact that I invest in early stage startups.
I want them to be huge and I want them to be successful.
So there are a lot of different equities at play, and I see this as benefiting the whole ecosystem and users.
What exactly is missing that the big companies need to get there?

(11:25):
I mean, at a very high level, as I mentioned, the checklists today are very risk oriented.
There's very little that really touches on value and also risk mitigation.
So, some of this is going to sound very simplistic, and it is an oversimplification, because I've spoken to, you know, dozens of companies,

(11:48):
but at a high level, like, let's get back to basics.
How is your product built?
What's the tech stack?
Where is AI actually used and where is it not used?
So a lot of companies, startups especially, you know, they're selling: it's AI.
It's shiny.
We're doing this.
We're doing this, but that black box element to it, which people would say is the secret sauce, could actually hurt a company.

(12:11):
And so when I talk to early stage startups, it is, walk them through the end-to-end user journey and show them exactly where AI is being used.
Because at times, all that an enterprise sees is, no, I don't understand this.
Like this seems risky.
Maybe we should build something ourselves, which like, are they capable of that?
Who knows?

(12:31):
It depends on the company.
And so something as simplistic as, what does the tech stack look like?
What is the user journey?
Where is AI used, and where is it not used?
Understanding what we would call fourth-party integrations.
So, what models are you building on top of?
Are you fine-tuning, things like that?
Copyright certainly comes into play and data ownership.

(12:54):
It's not even just, are you legally allowed to train on certain types of information?
Are we going to get in trouble as a company if we use this and it turns out it's not yours?
And when our employees or whomever use the product, who owns that data?
Or is it going back into the product?
Do you own the data?
And so working out these major questions and then coming to an agreement that's

(13:16):
codified in a contract, and there's actually accountability there, is kind of where I'd like to see the ecosystem move, in addition to more proactive transparency from the lab and AI
company side, which I'm happy to talk more about.
Yeah, why don't you?
Where does transparency come into play, and what exactly do you mean by transparency in this kind of context?

(13:39):
Yeah, so I think there is a lot to learn.
Some people might say from the last era of social media, though it's not even the last era.
Like now things are running in parallel and AI is part of these companies.
But I think you can learn a lot.
I mean, I worked on Facebook Open Research and Transparency.
So we did a lot of work on transparency, whether it was reporting or sharing privacy-protected data with academics and policymakers so they can understand what was happening

(14:06):
with the platform.
And so looking at the transparency measures that have been adopted as standard practice among the AI labs, you have papers that have been published, which, great, the more people
understand the technology, the better.
So I think that that's been working pretty well.
I'd like to see more things around peer review when you're publishing outside of the academy, but it seems that that's coming.

(14:30):
And then model cards have become standard.
And so those were developed years and years ago; for the team that did it, you know, it was an uphill battle, and it's amazing to see how it's really diffused throughout the ecosystem.
But the genesis of Model Cards was for technical audiences.
I mean, at the time you didn't have people in the White House or, you know, business people within these big companies really having serious conversations about AI.

(14:55):
And so the information in Model Cards is being used to make business decisions and policy decisions.
So is it time for us to have maybe a complement to a model card, whether it's an AI vendor card that has other types of information that can support that decision-making?
And so what I see there is, it's kind of like, I don't want to say a consulting product, but it's a project of understanding the needs: what information do people need?

(15:23):
What information would an AI lab or a startup
feel comfortable sharing, that it's also in their benefit to share and be proactively transparent around?
And from speaking with a lot of different kinds of companies, it just seems that people don't quite know what to share and they don't want to set themselves back.
They don't want to be transparent and end up behind if they're in competition to win a deal or something and they were more transparent.

(15:49):
But if we have a project where we can align, from the enterprise side
or the policy side, that this is actually what we need, and it's in your benefit to share it, and then kind of have it be an extension of a model card.
So it's both a process of understanding needs, aligning incentives, but then also, what's a seamless go-to-market here?
And to me, that's an extension of a model card.

(16:11):
You built a muscle to have this kind of transparency, like let's just extend it.
So it's not a net new lift for companies.
Does it make sense to standardize those kinds of elements across organizations?
Obviously, there's a collective action problem.
If each company needs to come up with its own set of requirements and each startup is responding to that.

(16:35):
So there's a benefit to having a lot of commonality; on the other hand, then it's a heavier lift to get everyone on board and there are differences among them.
Yeah, I think companies should be able to share what they want or what they don't want.
If there are incentives to share certain kinds of information, great.
Hopefully more companies orient around that.
But we see it in the last, I want to call it era.

(16:57):
I mean, these timelines are so short, can we call it an era?
But if we looked at companies working in trust and safety, when they were making rules around content policies and it was about specific pieces of content and things like this,
I also did research when I left Meta where I spoke to dozens of people across different companies.
And what ended up happening was Facebook ended up being the gold standard because they had to put the most, well, a lot of resources behind both doing this work and also making this

(17:27):
work public and communicating it effectively.
And so when a startup had trust and safety considerations or had to manage online harms,
they would go, you know, that person would go online and say, like, what's this company doing?
What's this company doing?
Meta seems to have really robust policies.
We're just going to take what Meta is doing and we're going to adopt that into our own work.

(17:47):
So we've seen, I've seen that diffusion happen on the ground.
And I do think that if certain big labs are bought into this and start doing this, other companies will look and kind of adopt this as a framework.
And there are civil society organizations, academics, et cetera, and folks who can also test this and publish papers and say,
this is comprehensive, this is not comprehensive.

(18:09):
So by directing attention to it and giving enough guidance of, this is what we think it should look like and this is what we think is effective and here's why, you get more eyes
on the problem and people iterating and refining.
The obvious question with any private governance activities is whether there's a strong enough incentive without a government mandate, some enforcement possibility for

(18:34):
non-compliance.
So what makes you confident that there is enough if this is coming privately from organizations?
So there is incentive on the enterprise side to get this done.
I know this from working with these companies; everyone has been on their earnings calls and said, AI this, AI that, this is the future.

(18:57):
So they may not wanna say publicly, like, we have a strategy but we're not sure it's the right strategy.
We're open to speaking to a lot of different startups.
We're open to shifting, we need thought partners.
And so the consultancies are coming in, like
a McKinsey, and then you also have like an Accenture, and they're both helping with that strategy work and also partnering with startups and other kinds of organizations to

(19:20):
make them aware of the products that are available.
But there is a question of, like, who is on those teams, and I think there's an opportunity to just uplift all of these different entities.
And so
It's already happening on the enterprise side.
On the startup side, these companies want to be successful and sell.

(19:43):
The key here is codifying it in a contract.
And so you could say all you want, like, yes, we should know.
And there are like technical issues with some of this too.
Like, can you actually know all of the data a model is trained on?
But if you have a formal attestation where the company is saying XYZ, then lawyers are involved.

(20:04):
And so
you need a way to monitor what's happening on an ongoing basis.
So this isn't just a one-time thing, you sign the deal, you're in, people are using your product; what's happening on an ongoing basis is something else that comes up a lot in
these procurement conversations, which is actually astonishing, because these products change as new, you know, research techniques come into play.

(20:28):
So the products are changing, the data is changing, and you have a contract that's for one point in time.
So that doesn't work.
So how do you create formal attestations so that you can then hold companies accountable?
So this is all, it's happening, but it's still, as I mentioned, like things are moving so fast.
I don't wanna say it's an experiment because it's not quite at experiment level, but I think we should see how this plays out before some of this is made

(21:00):
into a law; I know tons of states are proposing
new laws, but this is my perspective and what I'm most excited about.
And you alluded to this before, but what role does the prospect of private legal liability play in all of this?
Um, quite a challenging area, as you know.

(21:23):
Um, so as of right now, there are still a lot of question marks around liability, and laws have tried to push this in many different directions.
And so we haven't seen... I like to look at, like,
cyber attacks, cybersecurity, cyber insurance, things like that.
And if there are any lessons we can draw from that world into what we're seeing now with AI assurance.

(21:49):
And so I think there are just a lot of questions.
Like we haven't had a major incident.
Cybersecurity, sure, and there are like some AI elements associated with that, but we haven't had, not necessarily even the sci-fi situation, but something that was hugely
damaging.
And it raises all of these new questions around

(22:09):
liability.
So where we are is not quite that people are walking on eggshells.
They're like, okay, we'll try this.
We'll do this.
The contract says this.
And we'll kind of just see what happens.
So I'm waiting for that.
We'll see if a major incident arises and kind of what happens there.

(22:29):
But people are buying insurance to account for these AI risks.
If you look at the different policies, it's really not that comprehensive.
Some have uncapped coverage, but that's basically just like a Hail Mary.
Like we hope nothing happens.
So companies really aren't covered.

(22:50):
So I'm hoping that by developing these new kinds of contracts and having these considerations, we're starting to put some stakes in the ground around what that could
look like moving forward.
Right, contract is obviously one way to deal with liability, to the extent that you're able to contract around things and courts are willing to honor that.
Insurance markets in theory are the efficient solution, because then you've got a private actor that has its own financial incentives to work on the customers and so forth.

(23:19):
What do you think it will take to help those insurance markets mature in this area?
Yeah, I'm curious what your thoughts too are on any parallels to cyber insurance, because I do remember, I mean, I was most engaged with this space years ago at this point, but
pricing those plans was incredibly challenging.
You had entities, lot of like,

(23:41):
public entities, whether it's hospitals or utilities and things like that, were experiencing cyber attacks, and these municipalities were trying to buy insurance.
And it was very challenging for them to have plans developed.
It's not just like the big Fortune 100 companies of the world.
It's like municipalities who are being affected.
And so there's a pricing issue and like what is included in these plans.

(24:04):
I think right now I've seen plans that really cover kind of the copyright training piece, like
content provenance.
And so it seems that insurers can kind of get their heads around this, like we can guarantee through insurance and we can work out liability and what the payment would look
like around content provenance.
But beyond that, I haven't seen much.

(24:28):
Yeah, I haven't either, and that's why I'm curious, because presumably that would be a good outcome, but the question is what can be done to hasten it.
Well, the other piece here is the measurement piece and benchmarks.
And so I work with ARC Prize, which is an AI benchmarking organization.
And in order to develop an insurance policy, you need to actually know what's going on with the technology and set thresholds.

(24:51):
And I think that's something that the EU AI Act is trying to kind of make a reality through developing the code of practice.
And I speak with those folks and there are a lot of challenges around that,
given that the benchmarks are in flux, the evals are in flux, who does the evaluations, who does the auditing, and things like that.

(25:12):
And so we need to have a stronger measurement ecosystem so that you have the precision to be able to build these kinds of policies in this insurance market.
What kinds of things can governments do more effectively to promote these kinds of market-based approaches?
You mentioned the EU AI Act; that's obviously a more heavyweight government set of mandates that's then pushing that code of practice.

(25:38):
But are there ways to get to these kinds of results without that kind of prescriptive regulation?
I think that having a safe space for knowledge sharing is key.
So I did this work through the Data and Trust Alliance, which is a Fortune 100 industry consortium made up of a lot of different executives of various companies.

(25:59):
And so if there's any way for government to support more of these, not just industry groups, but really public-private partnerships.
So when I think about
the measurement side of things and talking about these risks, whether it's something that's an egregious risk, like a catastrophic risk, or something that's a little bit more

(26:21):
kind of like less egregious, I guess I would say.
Like what role can the government play in supporting that ecosystem and bringing some rigor to it?
And when I think about USG and the unique kind of differentiated knowledge base, I don't work in the government, I've always worked in the private sector, but when we think
about

(26:41):
bioweapon risks, chemical risks, things like this, the government knows the most about that of anyone.
I mean, they have the labs.
And so is there a way, in terms of prioritization, for the government to say, like, look at what they do best, where they have differentiated knowledge, and then partner with these
outside experts to kind of up-level some of that and also help grow the ecosystem and all of these key considerations, whether it's measurement, evals and benchmarking, or,

(27:10):
let's say, what's happening in the insurance market that's a little bit further afield, but what can the government do today to both codify, standardize, bring more
rigor to this world and then also help accelerate it.
Do we need governments to adopt AI-specific rules?

(27:31):
So there are a lot of laws on the books that already deal with AI.
What I think is really interesting is public sector procurement.
And so I was looking at this a lot during the Biden administration as I started this work, this procurement work with the Data and Trust Alliance.
So what I believe ends up happening is that the government sets standards for AI procurement and...

(27:57):
the private sector will look at these and say, all right, we have this checklist, we're not sure this is comprehensive and this is working.
Let's see what the government's doing.
Like they buy a lot of stuff.
And so they look at this and then adopt some of those principles into their own procurement work.
And then on the lab side, what happens is, if a company, think of, this is just an example, but like a Microsoft, sells essentially to everyone.

(28:21):
And so they're selling to the government.
And they use these procurement criteria to build out their go-to-market strategy, build out their sales strategy.
And if it ends up working, they might say, let's take some of that and pull it into our private sector practice.
So I see public sector procurement, whatever those guidelines are that are developed.

(28:42):
And now we have the action plan out of the Trump administration.
So we'll see what comes out of that.
And I know folks that are working on it.
I see what Biden had put forward.
It was like fairly comprehensive, but there was still a lot that needed to be worked out.
But I'm really looking to that as something that will likely set standards and set industry practice outside of just the government.

(29:08):
How can companies deal with the variety of different rules that are out there, especially globally? If you've got a Fortune 100 company, they're going to be operating in Europe,
they're going to be operating in the US, in Asia, where right now there doesn't seem to be a lot of prospect of complete alignment in terms of AI policy.
It is very complicated.

(29:30):
I don't... it's why, I mean, you know, I worked on this AI bill in California over the summer, SB 1047, and something that we spoke about a lot was,
what would happen?
I mean, in this instance, California would have led, and that may have been adopted by a lot of different states, and it would have had outsized impact because it wasn't just
companies based in California, it was companies operating in California.

(29:53):
But when you have a patchwork of state legislation that you have to comply with, what does that look like?
When I was at Meta, we had CCPA, where it was just California.
So we would have different rules for California and some other states, but
it wasn't that we had to go state by state and change our strategy for every state.
And then you look at it globally and what's happening with the EU, we had GDPR, CCPA.

(30:17):
Like those were the kind of the big ones that we had to adhere to at the time.
This was years ago.
But if you have completely different rules, it is quite challenging.
And then also with the EU and the code of practice that they're building there, I mean, this really is for
frontier models, so the biggest models, but you have to evaluate them a certain way.

(30:43):
You have to report out certain information.
You need to have auditors.
So it's not just you as a company having to report out information.
You have to engage like all of these different parties to do this.
And so it's worth, I mean, I'm sure someone somewhere is building like a map of how you actually operationalize this.
But from what I've seen, coming from

(31:04):
being someone who builds products and ships products,
I've worked with a lot of people at think tanks who put forward these ideas of, regulation should look like this and look like this.
And sometimes they don't quite think through, like, how do you actually operationalize this?
And that to me is always like the key challenge and what I go to first.
So short answer, I don't know, it's really hard.

(31:27):
And just, you talked about SB 1047.
Say a little bit, you were really on the opposition side to that bill, which eventually was vetoed by the governor.
What was so problematic from your perspective?
So let me just tell you how I got involved in this bill.
So I had been working on these AI governance projects.
And yes, governance was a big part of my work when I worked at tech companies.

(31:51):
But I thought, okay, if I'm saying I do AI governance, I should know how AI bills are being created and passed on the ground.
And so I started tracking SB 1047 just as a bill that sounded interesting back in January, February.
And then that turned out to be the bill that got a lot of traction,
like a lot of attention, which I think rightfully so, but I guess that's how politics goes.

(32:17):
You kind of tie yourself to one thing and like sometimes it blows up and sometimes it just fizzles away.
And so I started writing about this bill.
A lot of chatter was happening online, whether it was on X or blogs or podcasts.
And I just thought, how can anyone even make sense of this? I was obsessed.
I thought, okay, I'm going to pull all this together and start writing

(32:40):
some articles just to help educate people about what's coming and what the potential implications are here.
And so that bill changed a lot from when it was created to when it was ultimately vetoed, and it was over a short period of time,
where it ended up in a very different place from where it started. Where it started was to have a new entity or a subsection of an entity within the government, the state of

(33:07):
California; they would
have models submit information, and they would be able to say this is a good model and a bad model.
And there would also be an eval component of it, where you would have auditors that would be evaluating the different AI labs.
And so it's not helpful to dissect the details of what the bill started out as versus where it ended.

(33:34):
But for me, creating these kinds of net new organizations that would have
oversight over companies and codifying it in law.
There were also issues with downstream liability.
So the developers would be liable if the model was used for something that was against what the law had laid out.

(33:58):
So these kinds of issues kept bubbling up as something that would set precedent and that would change the ecosystem.
There was talk that
it would hinder open source development, would make developers liable.
So these issues, and this centralization element, this centralized oversight and empowering of this new group, for all of these reasons, I just thought it was premature.

(34:24):
Yeah, one positive, at least, from that experience is, as you said, there was a huge amount of debate from both sides and back and forth.
And, you know, as with anything, some of it was not very well thought out.
But I think in a lot of cases there was really deep engagement, you know, both at a technical level and a policy level on that bill, even though ultimately it was vetoed.

(34:45):
So the last thing I want to ask you is just, you know, going forward, are you optimistic that we will be able to have
productive conversations on both how to define this market-based AI governance and how to find the right intersection between what the government does and what companies do?
Yeah, so I think after the bill, after SB 1047 was vetoed, Newsom pulled together this group of individuals who would be looking at what AI governance should be for the future

(35:17):
of the state of California.
And that report just came out.
And I was very heartened by it because, you could tell, he assembled the group quite thoughtfully, people who had a diverse range of views but were incredibly
technically sophisticated and could have a productive
conversation.
And so the recommendations that came out of that, I support many of them.

(35:39):
I will write something with some, you know, more ideas.
But I think that's a great example of taking someone, and who knows if he delegated this, I doubt that he knew all these people personally at the time, but
whoever was in charge of kind of bringing this group together did it exactly right.
And it's not that one voice kind of overshadows or one point of view overshadows

(36:04):
everyone else, but how do we work together to figure out what the future looks like?
And so there are a few things. On this private AI governance work, I'm hopeful seeing what organizations like the Data and Trust Alliance are doing.
I think it requires being able to work with a lot of different kinds of stakeholders and then also really thinking about, and this is like the tech person in me, but like, what is the

(36:27):
go-to-market here?
Like we don't need more frameworks.
How do we actually fuel
adoption and make this palatable and useful for all of these different kinds of groups?
So that's one piece.
I'm also really interested, on the policy side, like seeing what happened with SB 1047 and the chatter that was happening online.

(36:48):
It was challenging for me, as someone who was just personally obsessed and engaged, to kind of understand all of these moving parts and all of the commentary that was happening and
then the bill would change.
And so is there any way to support the end-to-end policymaking process by actually leveraging AI tools?
So we don't get into a situation where a bill is made public.

(37:12):
And this happened kind of with SB 1047, where tech people are like doing their jobs in SF or in California and elsewhere.
I mean, it got international attention.
They're like, wait a second, what was proposed?
Like, what is getting traction?
And so...
Is there any way to actually use AI to make this clearer, both to legislators, policymakers, staffers, of what the interests of the community are and how they

(37:38):
actually feel about things, so that it's less of a massive surprise and we can kind of bring people along.
So that's what I'm really excited about.
I'm working with RAND, Tony Blair Institute and Stimson Center on that.
Great, Lauren, thank you so much for your time.
Thank you.