
November 21, 2024 27 mins

In this episode, Kevin speaks with the influential tech thinker Tim O’Reilly, founder and CEO of O’Reilly Media and popularizer of terms such as open source and Web 2.0. O'Reilly, who co-leads the AI Disclosures Project at the Social Science Research Council, offers an insightful and historically informed take on AI governance. Tim and Kevin first explore the evolution of AI, tracing its roots from early computing innovations like ENIAC to its current transformative role. Tim notes the centralization of AI development, the critical role of data access, and the costs of creating advanced models. The conversation then delves into AI ethics and safety, covering issues like fairness, transparency, bias, and the need for robust regulatory frameworks. They also examine the potential for distributed AI systems, cooperative models, and industry-specific applications that leverage specialized datasets. Finally, Tim and Kevin highlight the opportunities and risks inherent in AI's rapid growth, urging collaboration, accountability, and innovative thinking to shape a sustainable and equitable future for the technology.

Tim O’Reilly is the founder, CEO, and Chairman of O’Reilly Media, which delivers online learning, publishes books, and runs conferences about cutting-edge technology, and has a history of convening conversations that reshape the computer industry. Tim is also a partner at the early-stage venture firm O’Reilly AlphaTech Ventures (OATV) and serves on the boards of Code for America, PeerJ, Civis Analytics, and PopVox. He is the author of many technical books published by O’Reilly Media, most recently WTF? What’s the Future and Why It’s Up to Us (Harper Business, 2017).

SSRC, AI Disclosures Project

Asimov's Addendum Substack

The First Step to Proper AI Regulation Is to Make Companies Fully Disclose the Risks


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Tim, welcome.
Always great to talk to you.
Great to talk with you too, Kevin.
What can I do for you?
Well, you've been watching emerging technology for a long time.
Let me just first ask you, how do you think about AI?
What kind of innovation is it?
How significant do you think it is?
Well, I would start by putting all the great waves of technology innovation, computer technology innovation, into one frame.

(00:27):
And that is the increase in the ease with which humans are able to communicate with computers and get them to do our bidding.
It's as simple as that.
You think back to ENIAC where they programmed it by making physical circuits.
And then we got to stored-program computers, but you put the program in one bit at a time.

(00:50):
And they had the switches on the front, and then, you know, punch cards and, you know, mag tape for storing programs.
And there was this sort of priesthood who could do these very difficult things.
And then you have this big breakthrough.
Well, first of all, you had a breakthrough with assembly language and then higher-level languages that output machine instructions without people needing to write them directly.

(01:14):
And then you got interpreted languages that made it really easy.
And then you had things like the PC, which made it possible for anyone to have a computer.
And suddenly there were millions of people doing these things, writing programs.
And that exploded.
And then you got something really new, I think, with the web, which was where you could actually create documents that called programs.

(01:39):
And so that was like, you could basically make an interface out of a document, the kind of thing that, you know, humans exchange with each other.
And people don't recognize what a big step forward in interfaces that was. In some ways it was even bigger than the GUI, the graphical user interface,
which of course made things simpler.
And so now we're at a place where we can just talk to these things in plain language and they can start to do what we want.

(02:05):
And if you look at the history, every time we've had one of these advances,
more people could use computers and they could do more things with them.
And so it's profound.
It really is profound.
It's basically going to grow the market.
And I don't just mean in the number of people who can use it, but in the number of things that they can do.

(02:25):
And again, you can look at other technologies.
You read some of the early history of the automobile, and it's like, if you wanted to be an automobile owner, you had to be a mechanic.
Mm-hmm.
And now you're at a point where you can't even be a mechanic because they've got... that's going too far, maybe.
You know, there are certainly some interesting parallels.

(02:50):
Yeah.
Well, so is there anything we can learn, at least in computer technology, from those prior shifts that can help us in engaging with the issues that are coming up now
with AI?
Absolutely.
First off, you know, people underestimate them and then they overestimate them.

(03:10):
And in some ways, you know, this is a sort of thermostatic kind of process, just like in politics, I guess, in which, you know, people try to have this grand narrative,
and the world sort of stumbles forward in

(03:31):
unexpected ways.
I think if you look at, for example... again, my early career was very much shaped around the World Wide Web.
I mean, it was there before that, in open source software. People got a lot of things wrong.
My whole battle with the... I convened the meeting where the term open source software was adopted, but I

(03:55):
was kind of an outlier and everybody was focused on licenses.
And I said, I don't think licenses are the issue.
It's actually, we have network-enabled collaboration.
We have what I called the architecture of participation, the design of systems that people could make small pieces of that work together, sort of open architectures, and kind of

(04:19):
look at the difference.
There's this great line I remember seeing on the internet early on.
The difference between theory and practice is always greater in practice than it is in theory.
And I don't know who said it, but it's brilliant.
Because you always have people who have this sort of theoretical construct. You know, the same thing with hypertext.
You know, it's like when the World Wide Web came out, all the hypertext pundits said this won't work because it only has one-way links.

(04:45):
It doesn't have two-way links.
It has to have two-way links.
And of course the one-way link thing was part of what made it grow so explosively, that it didn't matter if a link broke, you could just get a 404.
And everything wasn't so tightly bound.
So I think the thing we learn is that, you know, we stumble forward, and there'll be some innovation, and then somebody else will build on that innovation.

(05:17):
And I guess this goes back... I think it's interesting because Ethan Mollick, who's one of my favorite observers of AI, has become a fan, like I have, of James Bessen's work on the
Industrial Revolution.
You know, where he basically said, why does it take so long for new innovations to spread?
He says, because people have to learn how to use them, and you need to build communities of practice, and you have to have people pushing on and innovating.

(05:41):
And that's certainly true.
You know, with... again, I think back to the World Wide Web. You know, the original web was static documents.
Then some guy had the bright idea of, hey, we can actually call a database from one of these things.
And that, you know, led to dynamic websites.
And then you had all these people who were putting APIs right into the server, and Apache said, no, no, we're not going to do it that way.

(06:03):
We're going to have an architecture that builds on top.
And, you know, it was this evolutionary process of people
learning and practicing.
Brian Pinkerton builds the first web crawler, and then Google figures out how to do it better.
And Overture figures out pay-per-click, but they do it with just a crude auction, and Google figures out how to make a really good auction system.
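To make that contrast concrete, here is a minimal sketch, not from the episode, of the difference between a simple highest-bid auction and a generalized second-price auction weighted by ad quality, the style of refinement Google is generally credited with. The names and numbers are illustrative assumptions.

# Illustrative sketch only: a crude highest-bid auction versus a
# quality-weighted generalized second-price auction. All identifiers
# and numbers here are made up for the example.

def simple_auction(bids):
    # Crude version: the highest bid wins and pays its own bid.
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

def gsp_auction(bids, quality):
    # Refined version: rank ads by bid * estimated quality (e.g. expected
    # click-through rate); the winner pays the minimum needed to hold its
    # rank, i.e. second-price logic adjusted for quality.
    ranked = sorted(bids, key=lambda ad: bids[ad] * quality[ad], reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    price = bids[runner_up] * quality[runner_up] / quality[winner]
    return winner, round(price, 2)

bids = {"ad_a": 2.00, "ad_b": 1.50, "ad_c": 1.00}
quality = {"ad_a": 0.05, "ad_b": 0.09, "ad_c": 0.04}
print(simple_auction(bids))        # ('ad_a', 2.0): highest bidder wins outright
print(gsp_auction(bids, quality))  # ('ad_b', 1.11): better ad wins and pays less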

(06:29):
And bit by bit, the world that we became familiar with evolved.
Steve Jobs with the iPhone, et cetera, et cetera.
So we're in this stage where, you know, we're very early, and, you know, I do think that there's an issue now which we didn't have before.

(06:55):
And I first wrote about this with regard to where I think Silicon Valley went wrong with its blitzscaling.
Reid Hoffman calls it blitzscaling.
This idea
that companies should race to get market share, and the VCs should basically invest in them so they can do that.
And I describe this with Uber and Lyft.

(07:16):
In some sense, the market didn't pick the winners.
The right winning business model didn't evolve.
What happened was the VCs flooded the market with capital.
They picked a couple of winners and a couple of early business models.
They got their money out because these companies went public.
And then companies were like, actually, we have to raise prices.

(07:37):
And now we're starting to have that period of experimentation.
And you have that same problem of the winners have already been chosen by the massive amounts of capital.
So it's kind of like we celebrate this idea that we have this innovation market.
But increasingly, that innovation market has become a kind of central planning by a

(08:05):
small number of very deep-pocketed companies.
And so that's one of the biggest worries that I have, that it will be harder for us to have, you know, some of the kinds of experimentation that we had in the
past that led to the real innovations that we needed.

(08:26):
Right, and given the extremely high cost throughout the entire stack for developing some of these AI models, what can we do now to avoid that kind of very concentrated future for
AI?
Well, I think one of the first things that we need to do is to think about this idea that I talked about back in the days of open source, of the architecture of participation.

(08:51):
You know, I've been giving some talks where I riff on this.
There was this article, or actually it was a podcast, from the New York Times, called AI's Original Sin.
It was basically focused on the copyright issues around AI.

(09:16):
And they quoted this lawyer for Andreessen Horowitz who said, you know, if we don't let these companies... you know, this is the only possible way to build these giant models.
And I thought, yeah, that's a lot like 1992, when the only possible way to get your content online was, you know, AOL.

(09:38):
And then a little later, the Microsoft Network.
And this thing was coming out of left field, the World Wide Web, where anybody could get their content online.
And it won because it built a real market.
And right now we're in the AOL stage of AI, you know, as a way to think about it.
You know, we've got these big centralized players, and what's waiting in the wings, I think, is this alternate world in which you have cooperating AIs that are, you know,

(10:09):
potentially trained on specialized data.
And people are starting to say this.
There was an article I saw recently that said, you know, Sam Altman's real rival is Jamie Dimon, you know, because JPMorgan is sitting on this huge... they're working a lot on AI,
they're sitting on this huge class of specialized data that's not available to any of these guys.

(10:31):
And so they can now...
They've got the capital.
They've got a $10 billion IT budget.
So, yeah.
You know, there's sort of an interesting question there.
Does that, you know, play out?
But in general, it goes to the sort of questions that I think about, because, of course, we started building things at O'Reilly.
We're much smaller than JPMorgan Chase.

(10:57):
But we have a business where we have a body of intellectual property that in theory has not been accessible to
the models for training. I think in practice it probably has been, you know, they probably got it by hook or by crook, but we don't know that for sure.

(11:17):
Which is of course why I do think that companies should be required to disclose their training data, at least, you know, if only in the form of do-you-have-my-content queries, just like we have with privacy, you know.
But more than that, though, I just think that we need to think about an AI architecture that isn't, okay, we have a winner-takes-all player and then we just have to basically be

(11:51):
satellites to them.
I think if that's the world we're building for, those guys ought to be regulated utilities.
And then you go, okay, you're a foundation model.
You can be a foundation model.
This is how you get paid, but you can't compete with everybody else.
Yeah, that would be one way to go.

(12:12):
It's kind of what we ended up with, with telecommunications, in a certain way.
Yep.
But I mean, your point before, which was a really good one, was, you know, so many things seem inevitable in hindsight.
And, you know, I'm sure you've talked to lots of people who weren't there in the early days of the web, or even the days of Web 2.0, who just assume it couldn't have been
otherwise.
Now we're in this period where it seems like there's some possibility.

(12:33):
So are there things that regulators should be doing, or that companies should be pushed to do, that might make it more likely to have that more distributed future?
Well, one of the things that I think we could and should do is have more disclosure.
Now, when I say that, a lot of people go, disclosures don't work.

(12:55):
And what they're thinking about are things like, you get your prescription, there's this long piece of paper that you peel off and you throw away that has a bunch of legal
gobbledygook and...
you know, or whatever, you know, or even food labeling, you know, where there's a little bit of value.
OK, how many calories is this and so on?

(13:17):
What are the ingredients?
But there's a different kind of disclosure that I think people don't really think about, that really is a lot closer to a kind of communication standard.
You know, like TCP/IP is a kind of disclosure.
You know, it says, this is

(13:40):
the format for a particular kind of thing and you can build to this.
And in a similar way, I've been thinking a lot about the analogy to accounting standards, which is really how do you manage money?
And we standardized that back at the beginning of the 20th century.

(14:02):
But the standards were based on what people actually did.
Everybody kind of
goes, okay, how much did I take in?
How much did I spend?
And double-entry accounting had been around, you know, since the 13th century, and had been refined quite a bit.
But it wasn't, you know, there were a lot of people who were a little dodgy about it.

(14:23):
And so in the early days of, you know, public companies and securities, they went, wait a minute, we've got to actually make sure that people follow the same rules in
describing the finances of their business.
In the last few years, I worked on this project around trying to think about what I was calling algorithmic rents, and how big tech companies use their algorithms to extract rents

(14:49):
from their marketplace and their users.
The idea of control over attention being used to then extract money from various other parties, and sometimes unfairly.
But in the course of that, one of the things that we thought about, that we kind of came across, was this idea
from accounting of segment reporting, which is this idea that was introduced in the 1960s, in the age of the first industrial conglomerates, where they were saying, okay, if

(15:20):
you have some segment of your business that's more than 10% of your profit,
you need to report it separately.
And so when we started thinking about internet companies and how opaque they are,
You know, you go, well, you know, how interesting would it be if, you know, we said to Google, okay, you need to tell us certain things about your nine properties that have

(15:48):
more than a billion users.
And we don't care if you tell us they don't have revenue associated with them, you know, because the old measure was revenue, you know?
And it's pretty clear to us now that market power
in these internet conglomerates is not associated with some of the old inputs and outputs, right?

(16:10):
Google Maps is a great case in point.
Its revenue is actually probably a bit larger than... it's about $4 billion by most estimates.
So it's bigger than Garmin, it's bigger than Esri, it's bigger than Autodesk's mapping stuff.

(16:31):
You know, so it's big in its market, but it's a rounding error for Google.
But regardless, it's got 2 billion users worldwide.
It's an enormous source of market power.
And I thought about that recently when I put on the, you know, the Meta Ray-Ban glasses and I wanted to ask it how to get somewhere.
I go, they don't have Google Maps.

(16:53):
You know, and you think about the cost of reproducing something like that.
And anyway, the whole point is,
there's this lesson from accounting, which we could bring over to AI.
And this goes back to some of my thinking about AI, I mean, back in the web days and big data, where we're building a kind of operating system in which the subsystems are data,

(17:22):
like location, identity.
It's like Facebook
took control of a certain kind of identity.
And of course, then Google and others and Apple, they all had to build their identity subsystem.
And so you start thinking about that in the context of AI and you kind of go, OK, well, there's an interesting piece of leverage.

(17:43):
How would you think about what are the standards for interoperability in these things?
And I think... so anyway, that notion that we have to
change our thinking about what is useful to disclose... I guess I hadn't followed through with the accounting analogy as fully as I should have.

(18:08):
If you think about what's the purpose of those financial disclosures, what do they do?
They enable what I think... I love this term from Jack Clark of Anthropic and Gillian Hadfield: regulatory markets.
They wrote a paper on this.
And I probably take their idea a little further.
Maybe I'm going beyond the way they thought about it.

(18:30):
But if you think about it, part of what enables the regulatory market of finance, generally accepted accounting principles and the European equivalent, is the fact that there's now a
market of accountants and auditors who all do the same thing.
And that's a lot like, as I said, when you build

(18:52):
networking equipment, it all does the same thing.
And that was how the internet grew.
I remember back when my friend Dan Lynch started Interop, which was literally a conference where people came and all the computers had to talk to each other.
And the IETF was like, you propose a standard, you've got to show us three independent interoperable implementations.

(19:17):
I kind of feel like that notion of interoperability, the notion of how standards
inform interoperability is really, I think...
I guess that's where I would go, because I think that part of what...

(19:44):
You know, I always have in the back of my mind this great quote from the famous computer scientist Donald Knuth, who once said, premature optimization is the root of all evil.
And there's a way that, you know, a lot of the regulations that are being proposed are premature optimization and over-specification.

(20:06):
And again, I go back to my early background in networking, and you look at the ISO standards,
where they had the seven-layer model and everything was specified.
And TCP/IP was the classic version of... what's his name?
The guy who wrote... the name of the book is Systemantics, John Gall.

(20:33):
No working complex system was ever designed that way.
It evolved from a simple system that works.
And TCP/IP was a simple system that works, and it far outperformed this system where people had tried to come up with the entire stack in committee.
Let me stop you there.
I want to get back and ask you about the disclosure piece.

(20:55):
This is fascinating and important.
But would you say that something like the European AI Act is a premature regulatory optimization for AI?
Only in the sense that... let me put it this way.
If regulation were as easy to refactor and change and update as you learn new things, as you see what happens, it would probably be okay.

(21:26):
The reason it isn't is that you sort of incur a kind of societal technical debt, you know,
when you have a set of rules that don't actually work and are hard to change.
And that's one of the advantages that we have with technical standards, because they're fundamentally focused on outcomes.

(21:49):
And if they don't work, they lose in the marketplace or they're adapted.
It's one of the things that I've always loved in a certain way about... I mean, not to say there isn't a lot of politicking,
you know, as the industry has gotten, you know, more front and center.
But I still think back to the early days of not just the internet, but open standards of all kinds.

(22:17):
I remember a meeting of this thing called the X Consortium, of which I was an affiliate member.
X was an open source windowing system for Unix and Linux, and it's still there.
It was really this consortium where all the voting members were the big companies that were developing the technology.

(22:37):
And I still remember a meeting where these guys would be just, you know, I'm Steve, the developer, and then suddenly they would kind of sit up a little
differently and they would say, Apollo Computer believes that.
And this one time, this guy, I don't remember if it was Apollo or DEC, Digital Equipment Corporation, he started that.

(23:02):
He started, well, Digital believes that...
and then he says, I don't care what Digital believes.
It's not the right answer.
I was like, yes.
I wish it had been preserved.
That moment, if this was in the YouTube era, it would have been.

(23:22):
But it was just such a moment of the engineer going, no, this is not the right answer.
I don't care what my corporate masters are telling me.
Right, right.
But back to what we were talking about before: given how much money is at stake and already involved in this AI space, is there any hope to get back to that, where the technologists are

(23:45):
just talking about what the right answer is?
Well, first off, there are people who are.
I mean, I think it's interesting, you know, despite what I said earlier about the, you know, the big money players making it a less competitive market than it might otherwise
be.
You know, we have some really interesting, you know, cross currents.

(24:06):
You know, I think Anthropic is, you know... the idea that a bunch of people said,
OpenAI has lost its way with regard to AI safety.
We're going to do better.
But then they're also doing a lot of other things better, too.
So you've got some really interesting competition there.
I think that Google has been... like, if I rank the big players here, OpenAI is definitely in the move-fast-and-break-things camp.

(24:36):
We're going to be dominant.
We're going to do it however we can.
is a really interesting alternative, and Llama is a really interesting alternative.
So that whole sort of... and then of course, there's all the smaller open source models and other models being developed.

(25:03):
I do, so I do think we are having some competition there.
But it's still not there at the moment.
Again, I think back to my computer history.
And you have the PC.
And there were a couple of different battles playing out.

(25:26):
And a lot of people were focused, because the first generation of computing dominance with IBM had been through hardware.
Everybody was focused on the hardware battles.
And they were only paying attention to that.
You know, here's AT&T, they're going to have a PC, and, you know, and then, you know, along comes, you know, Michael... you know, and the big companies all had their personal computers

(25:48):
and there was a lot of innovation, but like, some of them were just... like, Michael Dell comes along, he's like a kid in a Texas, you know, college dorm room.
And he's like, I'm going to start assembling these things and selling them by mail order, and he becomes a real wild card.
And I think there's some, there is at least the possibility, as
you think about possible futures, you know... one of them is the one that's being advertised, which is we have to keep training on more and more and more and more data, and

(26:15):
it's going to be more and more and more expensive.
And then there's the fact that the smaller models are catching up.
And that maybe, you know, you're seeing the benefits of the current, you know, spending level off, and this stuff will be commoditized.
And then we'll start to see the real innovations happen, because it's not going to be constrained by a few companies.

(26:38):
You know, who are like, okay, we have to dominate and then we have to monetize.
And I guess that goes to something else that regulators could do.
And again, I think about this not in terms of regulation that specifies what you can and can't do, but regulation that specifies what you have to tell us.
You know, so for example, if you were caring about, you know, one axis of AI safety, which is, you know, addictiveness for kids.

(27:07):
You wouldn't necessarily say you can't.
Well, you could.
I mean, China has literally limited social media use for kids.
You could take that approach, but you could also say, you've got to report to us, you know, what kind of engagement patterns you have and what you're doing to maximize it.
You know, so that it was pretty clear that, like, OK, you know, like that case with the kid who committed suicide, it's like, could they have known, you know, the fact that

(27:35):
here's this kid who's clearly addicted?
And should they have guardrails against stuff like that?
And how would we know if they were egging him on, going, great, this is a really great engaged customer?
What are their optimizations for engagement, for example?
And so I think there's a lot we can learn from...

(27:59):
You know, I guess this goes back to my notion of why disclosures... this is a little different than the kind of sense of disclosures in the form of, you know, the metrics
that, you know, everybody uses, and that becomes a standard for interoperability.
But it's still related. If you look at social media, you know, once it started to optimize for engagement,

(28:27):
you know, certain bad things happen, you know, and you look at, you know, the work that we did on algorithmic rents, it was sort of like, okay, we have this knowledge because of a
lot of consultants who studied search engine optimization.
And we have a lot of sense that companies like Google and Amazon got really good at search, i.e.
giving people the answer they really wanted.

(28:49):
And so we were able to do a study that said, okay, Amazon advertising no longer gives you their best result.
You know, it gives you the... you know, somebody paid for this result, and that result is 17% more expensive and 33%, you know, lower in ranking than what their actual best result is.
And so whatever, you know, you want to do about that, I don't know, but knowing it is pretty useful, you know. And knowing that, you know, companies are, you know,

(29:21):
like if...
you know, like if companies have to sort of show, yeah, we were really putting the pedal to the metal on this risk factor, as opposed to we're
actually moderating it and managing it.
I think there would be some... you know, you don't have to kind of have the law come down and say you are going to be punished, because the market will punish people

(29:43):
because somebody is going to sue them.
And you'll have the information to say, yeah, you know, you were doing this bad thing and we can see it in your numbers.
And it caused this harm.
And so I kind of feel like, right now we have this sort of crazy-ass approach, which is sort of fed by all the existential risk people, that's akin to saying, okay, let's regulate

(30:10):
cars that can go over 150 miles an hour, but no other cars.
And let's do a bunch of crash test dummy kind of testing, but have no other regulations.
You know, when you think about auto safety, yeah, NHTSA does crash test dummies and how long does it take for a car to brake at a particular speed and what's its crumple zone, all this

(30:33):
kind of really useful stuff, which is a kind of model safety.
But we also think about driver education and licensing, speed limits, you know, and the speed limits aren't
You know, one size fits all.
They're different on different kinds of roads.
And you could say, okay, well, this is, you know, what we think is safe in this market or that application.

(30:58):
you could be thinking about, you know, again, I say this idea of licensing. Gillian Hadfield, I think, has really been big on this notion of, hey, AI models ought to be
registered, just like, you know, cars are registered.
They have a license plate, you know, and,
you know, guns have a serial number, and, you know, models? No, not so much.

(31:20):
You know, again, there's a lot of lessons we could take from what is the overall infrastructure, you know, that allows, first of all, allows you to inquire when
something goes wrong.
Again, think about black boxes and, you know, airplane crashes.
You know, there's a lot of interesting infrastructure.

(31:46):
that makes a system regulatable as we start to see problems emerge.
And I think that's an interesting conversation I've been having with Vint Cerf.
It's like, one of the really interesting questions that we ought to be asking is, is this system regulatable at all?
And how would we make it regulatable if it isn't?

(32:07):
Because it's certainly possible that
you know, you can build technologies that are fundamentally not regulatable, and then we have to ask ourselves, do we want to do that?
Yep.
Yep.
Yeah.
I think Larry Lessig called it regulable back in the early internet days.
And, you know, we have these same conversations in blockchain and so forth.
And it's an important point.
Let me ask you one more thing, which is where does open source fit into this?

(32:31):
Because as you know, there's this big debate about open-weight models and the risks that they present.
And in some ways that seems like it's more disclosing, but on the other hand, it's out of the control, potentially, of the developer, what people are doing.
Yeah, I guess I'm not sure I have a clear answer on that.

(32:54):
In general, I am a fan of the way that open source makes a market more competitive.
I do... you know, obviously there are people who have a lot of concerns about national security, for example.
You know, I guess the point

(33:16):
that Meta has made is, hey, look, this stuff is... we have a lot of industrial espionage.
And as in every version of cybersecurity and everything else, the notion that you're just going to build a wall and keep people out is not generally that successful in the end.

(33:38):
So you're better off building a robust system that can handle the fact that people know things.
Okay, well, so the last thing, since we don't have too much more time: are you optimistic that we will come up with an approach that makes these technologies appropriately
regulable?
And if so, what gives you that optimism?

(34:00):
Well, first off, I don't think they're as dangerous as the people who make the big existential risk arguments think.
I don't think we're on a path to AGI right now.
Now, we might be.
I could be wrong about that.
But I think that

(34:24):
You know, a lot of the risks are, you know, classic...
well, I mean, there's a bunch of different risks.
Yeah.
Some of them are, you know, if you look at the whole CBRN area, you know, there's a whole community that's on top of that, you know. Like, does it,

(34:47):
to what extent does this amplify existing risks?
you know, in cybersecurity, you know, bio, bioterror, et cetera, et cetera.
That's all, you know, really important work.
And, you know, but it's not specific to AI.
It's just, it's like, OK, we have these frameworks for thinking about this.
And even there, you know, like, you know, what Narayanan and Kapoor, you know, in AI Snake Oil talk about, you know, the notion that, hey, well, if you're worried about bio, you

(35:19):
know, bioweapons, you know, physical biosecurity and access to the kind of equipment that you need is actually probably more important than the fact that, yeah, people can get information about how
to do it.
You know, so again, it's where we've got the wrong focus there.
But I think there's also a class of risks that

(35:41):
And this is really the focus of my work in this area, which is this notion of commercialization risk.
And there's two parts to that.
And one is, do companies have the incentive to do the wrong thing?
You know, so the move-fast-and-break-things risks, you know, so, you know, here is, you know, this race for monopoly, and, you know, like, companies...

(36:09):
You know, like they're basically going, yeah, we're all about AI safety.
And then they fire their AI safety team, you know, because really they want to win.
And I think that is a real risk, but there's another risk.
And the example... I try to use examples from the past, and two great ones come from the social media era, and one of them is Facebook and the Myanmar massacre.

(36:36):
And it was not...
The thing, the big takeaway I have there, is that Facebook had guardrails against hate speech.
They just didn't work in Myanmar because they didn't understand the language.
Their systems weren't tuned for the language.
So I call that deployment risk.
You know, so and again, it goes back to this.

(36:58):
The difference between theory and practice is always greater in practice than it is in theory.
In theory, we have guardrails.
Do we have them in practice?
You know?
Yep.
And that's the question that regulators should be asking.
You know, it's like we, and this goes back to this analogy.
Okay.
We tested the... you know, we did the crash test dummy stuff, but now we're not looking at the data from the real world.

(37:20):
You know, and we're not even thinking about the data from the real world.
And so like to me, and again, that goes back to disclosures.
You'd kind of go, Hey, you know, you say you do AI safety.
What does it look like?
What are you actually doing?
You know, and that's, again, a lot of what our
project is trying to focus on, which is what should regulators be asking to see and to know about what a company is actually doing?

(37:48):
And that requires understanding, okay, what does safety engineering for AI look like?
And how much is a company spending on it?
Are they doing it in all the markets they operate in?
You know, or just some of them? You know, like, they go, yeah, well, we're offering our services around the world, but we're only doing
all the safety engineering in English, you know, in the US, you know, or whatever, you know, because that's our biggest market.

(38:11):
Yeah, that's the kind of thing that I would be thinking about.
You know, a similar one, and this is a huge, you know, set of questions, is around what do your third-party developers do?
And this came out to me...
And again, I haven't dug into what, you know, companies are doing across the board, but it's a super interesting area to me, which is

(38:33):
There was this report from Proof News, Julia Angwin's outfit, about election misinformation.
And the main result was sort of obvious.
These things were pretty shitty.
They basically did red teaming with election workers who knew what questions people typically asked.

(38:55):
They said, well, you're giving misinformation about half the time.
But the most interesting result in the paper to me was that
When pressed, the model developers said... you know, there was a... they tested a bunch of models in parallel using an AI API harness, right?
Where they would submit all the same questions, and they were told, your results don't show our real guardrails, because you were using the API, and with the API, it's the

(39:23):
responsibility of the developer.
And I go,
That's like, yeah, we have these privacy protections, but guess what?
Cambridge Analytica, you know. It's like, so if you're just saying it's the responsibility of the developer, is that really... that's just like you're wide open, you

(39:44):
know?
So, like either you have guardrails or you don't.
And so again, this question... I think whoever is doing the regulating should be really thinking about deployment.
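As a rough illustration of the kind of harness Proof News describes, here is a minimal sketch, not their actual code, of submitting the same questions to several models in parallel through an API and collecting the answers for review. The model names, the ask() stub, and the questions are placeholder assumptions.

from concurrent.futures import ThreadPoolExecutor

# Placeholder model identifiers and questions; a real harness would call
# each vendor's API and log the raw responses for expert review.
MODELS = ["model_a", "model_b", "model_c"]
QUESTIONS = [
    "Can I register to vote on election day in my state?",
    "What ID do I need to bring to the polling place?",
]

def ask(model, question):
    # Stand-in for a real API call to the given model.
    return f"[{model}] answer to: {question}"

def run_harness():
    results = {}
    with ThreadPoolExecutor() as pool:
        for q in QUESTIONS:
            # Fan the same question out to every model so answers are comparable.
            futures = {m: pool.submit(ask, m, q) for m in MODELS}
            results[q] = {m: f.result() for m, f in futures.items()}
    return results

for question, answers in run_harness().items():
    print(question)
    for model, answer in answers.items():
        print(" ", model, "->", answer)

The point in the episode is that answers gathered this way went through the raw API, which the developers said bypasses some of the guardrails in their consumer-facing products.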
Yep.
Not just the thing in theory.
Yep.
All right.
So much more we could talk about, but I think that's a good place to land.

(40:04):
Always fascinating to speak with you, Tim.
Thanks so much.
All right, thanks a lot.
All right, great to talk with you too.
Bye bye.