
July 4, 2024 35 mins

Join Kevin and Suresh as they discuss the latest tools and frameworks that companies can use to effectively combat algorithmic bias, all while navigating the complexities of integrating AI into organizational strategies. Suresh describes his experiences at the White House Office of Science and Technology Policy and the creation of the Blueprint for an AI Bill of Rights, including its five fundamental principles—safety and effectiveness, non-discrimination, data minimization, transparency, and accountability. Suresh and Kevin dig into the economic and logistical challenges that academics face in government roles and highlight the importance of collaborative efforts alongside clear rules to follow in fostering ethical AI. The discussion highlights the importance of education, cultural shifts, and the role of the European Union's AI Act in shaping global regulatory frameworks. Suresh discusses his creation of Brown University's Center on Technological Responsibility, Reimagination, and Redesign, and why trust and accountability are paramount, especially with the rise of Large Language Models.

 

Suresh Venkatasubramanian is a Professor of Data Science and Computer Science at Brown University. Suresh's background is in algorithms and computational geometry, as well as data mining and machine learning. His current research interests lie in algorithmic fairness, and more generally the impact of automated decision-making systems in society. Prior to Brown University, Suresh was at the University of Utah, where he received a CAREER award from the NSF for his work in the geometry of probability. He has received a test-of-time award at ICDE 2017 for his work in privacy. His research on algorithmic fairness has received press coverage across North America and Europe, including NPR’s Science Friday, NBC, and CNN, as well as in other media outlets. For the 2021–2022 academic year, he served as Assistant Director for Science and Justice in the White House Office of Science and Technology Policy. 

 

Blueprint for an AI Bill of Rights

Brown University's Center on Technological Responsibility, Reimagination, and Redesign

Brown professor Suresh Venkatasubramanian tackles societal impact of computer science at White House

 

Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI’s power while addressing its risks.

 


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Suresh, welcome to the Road to Accountable AI.
Thanks for having me, Kevin.
You've worked for a number of years as a computer scientist, among other things, on issues of algorithmic fairness. So how far along are we in terms of technical capabilities? If a company uses best practices today in terms of devising training data and models

(00:26):
and outputs, should they be reasonably confident that problems about bias can be addressed?
I think most companies today could be reasonably confident that, with the right people in place and the right teams, there's a whole suite of tools, ideas, concepts they can rely on to do a pretty good job in putting out products that they

(00:47):
can be proud of, that are indeed mitigating various forms of bias. I think we're not, I would say, 100% of the way there yet.
Where are the biggest gaps?
I think the gaps are really in...
in going from what I'll call more sort of bespoke, boutique, specific things

(01:13):
that have been known to work in different cases to a general set of prescriptions that is both general enough to cover all the different issues that might come up, but specific enough that it can be tailored to any organization's own needs. And I think that's where it sits. I just came back from a workshop where we were

(01:33):
talking about, you know, this idea of bridging predictions and interventions. And this issue of where the rubber meets the road, and how you actually operationalize these ideas that are out there, is, I think, where the last-mile sort of problem is, I guess.
What do you think it will take to get there?

(01:55):
Is it a research problem, or is it more about translating what computer scientists are doing to actually be useful for organizations?
I don't think it's a translation. I think of translation as: I've said the correct thing and now I need to translate it into your language. I think it's more of a conversation. I think we are having those conversations in various places, but the conversation

(02:17):
has to be something like: okay, here's my organization with my specific concerns, or rather, here's my organization and I've heard about these issues, I'm worried about them. And then someone in their organization can say, all right, let's break it down. What are your concerns? Where are the things you want to prioritize? Okay, now here's a bunch of tools we can bring in. Let's deploy them, let's build a governance structure on them.

(02:39):
Let's see what results we're getting, let's adapt. So that conversation, we have the pieces for each part of it, but I find myself constantly saying, okay, this is just what I told you right now. This is what you need to do. You need to build a team, you need to think about values, you need to think about implementations, you need to think about monitoring, and so on. So there is research that is going to be teed up from those

(03:00):
conversations. And I...
As an academic, I want to hear more about this conversation so that we know what questions we should be asking and trying to answer. But those conversations are key. And having those conversations in a way that some of it can be surfaced in the public domain is also key.
How much has the rise of generative AI changed the way that you would answer those questions?

(03:22):
At one level, it has not at all. At another level, it has confused everything. And I think it has confused everything at, say, the C-suite level, where suddenly executives are very panicked about generative AI and want to make sure they're ahead of the curve or they're not being left behind. So there's a lot of push within organizations: we need to, you know, we need to gen-AI our whole company, so put it in everywhere, right? And I think the problem is that no one really knows what that means.

(03:44):
And I think I've been hearing from people that once they actually sort of get down to it, roll up their sleeves and say, okay, what does it mean to bring generative AI into our organization? They realize a whole bunch of risks, a whole bunch of concerns, a whole bunch of vulnerabilities that they can get exposed to if they do that. And so then they have to say, well, wait a second. I have to think about this. In the meantime, though, all the other places where organizations have been using

(04:04):
AI, or what I'll call predictive AI as opposed to generative AI, those issues still remain and we still need to be concerned with them. But because all of these get lumped together under the rubric of AI, the confusion and the panic about generative AI is causing...
unreasonable confusion and panic about predictive AI, where we know a ton more. And I think that's something where I find myself often having to say: there's this

(04:27):
kind of AI tool and that kind; you're using a lot of that kind, and we can do a lot there. You're getting panicked about this kind; that's a whole different discussion we should have.
Yeah, so I mentioned in that question that you said we know a lot more about the predictive models. So if I am deploying a supervised machine learning system, I've got data scientists

(04:50):
who can use various algorithms and tools to have some sort of fairness function be put on the outputs. What's different if I'm using a large language model or a foundation model like a GPT?
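Kevin's phrase about putting "some sort of fairness function" on a model's outputs refers to the kind of group-level audit that is now routine for predictive systems. A minimal illustrative sketch of one such check, demographic parity on a classifier's predictions, is below; the variable names and toy data are hypothetical, not any specific toolkit's API, and a real audit would use an organization's own data, groups, and chosen metrics.

```python
# Illustrative sketch: a group-level fairness check on the outputs of a
# supervised classifier. The names (y_pred, group) and the toy data are
# hypothetical; real audits use the organization's own data and metrics.
from collections import defaultdict

def selection_rates(y_pred, group):
    """Fraction of positive predictions for each demographic group."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, g in zip(y_pred, group):
        counts[g] += 1
        positives[g] += int(pred == 1)
    return {g: positives[g] / counts[g] for g in counts}

def demographic_parity_gap(y_pred, group):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(y_pred, group)
    return max(rates.values()) - min(rates.values())

# Toy example: predictions for applicants from two groups.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(y_pred, group))        # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(y_pred, group)) # 0.5 -> large gap, flag for review
```

Demographic parity is only one possible criterion (equalized odds, calibration, and others trade off differently), which is part of why, as discussed above, the general toolkit still has to be tailored to each organization's needs.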
Fundamentally, if you're using a large language model or some other foundation model, unless you've built the model yourself, which is highly unlikely, you're

(05:13):
using a model that's come from somewhere else. Either it's an open-source model that you brought into your system, or it's API-based access to, you know, ChatGPT or GPT-4 or 4o or something like that. You no longer have full...
If you haven't built it in-house, which for the vast majority of people is true, you no longer have control over the system that you're actually working with, which

(05:33):
in...
In contrast to, say, a supervised learning system that you build in-house: you may not understand all the details of why it's producing outputs, but you have full control over the data, over the training, over the model. So that's one problem. Secondly, we don't have really good tools to do any kind of mitigation or intervention in generative AI systems, which means if I try to evaluate the

(05:59):
behavior of such a system and I want to check for various kinds of biases, it's a lot harder to do that, because the space of possibilities where it can perform a certain way is much larger, because it's really about natural language prompts. Unless I have a very controlled set of prompts, and I only allow access through those, and I test that rigorously, it's really hard to understand what the system is doing.

(06:19):
And if I discover some problems, it is really hard to figure out what to do about it at that point. So most of the framework and the tools we've built up that work well with predictive systems, whether it's fairness mitigations, whether it's explanations, whether it's bridging this gap between predictions and interventions, those don't work anywhere near as well, or

(06:40):
at all in some cases, with foundation models. And we're in a position now where we have to rebuild our understanding of those systems.
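Suresh's point about evaluating a foundation model through "a very controlled set of prompts," tested rigorously, suggests one pragmatic audit pattern: hold a prompt template fixed, vary only a demographic attribute, and compare the responses. The sketch below is a rough illustration of that idea under stated assumptions; `query_model`, the templates, and the name lists are hypothetical placeholders rather than a prescribed methodology or any particular vendor's API.

```python
# Illustrative sketch of a controlled-prompt bias probe for an LLM.
# `query_model` is a hypothetical stand-in for whatever model API an
# organization actually uses; templates and names are example inputs only.
from itertools import product

TEMPLATES = [
    "Write a one-sentence performance review for {name}, a software engineer.",
    "Should {name} be approved for a small business loan? Answer yes or no.",
]
NAMES_BY_GROUP = {"group_1": ["Emily", "Greg"], "group_2": ["Lakisha", "Jamal"]}

def query_model(prompt: str) -> str:
    """Placeholder: call the chosen LLM here and return its text response."""
    raise NotImplementedError

def run_probe():
    """Collect responses where only the name (and hence group) varies."""
    results = []
    for template, (grp, names) in product(TEMPLATES, NAMES_BY_GROUP.items()):
        for name in names:
            prompt = template.format(name=name)
            results.append({"group": grp, "prompt": prompt,
                            "response": query_model(prompt)})
    # Downstream, responses would be scored (sentiment, approval rate, etc.)
    # and compared across groups; large gaps get flagged for human review.
    return results
```

Because the prompt space is open-ended, a probe like this only covers the slice of behavior it was designed to test, which is exactly the limitation described above.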
So you spent two years, I believe, at the White House in the Office of Science and Technology Policy working on AI issues. Let me ask you first, generally, what was your experience as an academic coming into

(07:01):
that environment?
Well, you say two years, it was actually one and a quarter, but it felt like 10. So who knows what that means. It was quite a different experience. I mean, I'd done a little bit of, I guess, dipping my toe into the lake of policy work at the state and local level. So I had a teeny bit of exposure, but it was nothing like what actually

(07:23):
happened when I showed up in DC at the White House. There are many ways in which policy work is like academic work, and there are many ways in which it's totally different. And I had to figure out which of my skills I could transfer over and which I had to rebuild, and just come up and learn brand-new things. So in one way, it's great: you're a professor for a while, you think, OK, every day is going to look the same as the next day.

(07:46):
And then suddenly, you get thrown into an environment where you feel like a first-year. I kept telling people I felt like a first-year grad student again. And that was kind of fun and intense. But yeah, you learn a lot very quickly. If you want to survive, you have to.
Yeah, I mean, one of the things that I find curious, and that I'd like your take on, is something people don't necessarily understand about the US and the White House: obviously, the White

(08:07):
House is this incredibly powerful institution, and there are many people who are working for different parts of the White House or the executive branch. But relative to the incredible breadth of issues that come before it, it's actually really tiny. The number of people that are engaged in any particular area

(08:28):
is pretty small. So I'm curious what it felt like being part of OSTP, engaging on these AI issues, and how much the people that you were with were able to actually move the needle.
Yeah, I mean, it's very true. The White House is very small. I think OSTP, in its role as a convener of science and tech policy work across the

(08:50):
federal government, had a much broader view of who was doing what where. So in that sense, I was, you know, dealing very quickly with the full complexity of the entire US government, or at least those people thinking about AI, and in every agency there were so many people who were thinking about this and trying to do a good job of it. So I got a much bigger view. But you're right that within the White House itself, in terms of change-making and

(09:12):
advocating for change, it's a small group of people you have to work with and understand the network structure within. So that takes some getting used to, and just understanding all the relationships that we built up, all the key players. And often titles tell you a little bit, but don't tell you a lot, about who's an important actor in the space and whose ear you need to bend, you know, for some

(09:34):
issues. So that takes some learning to do.
But yeah, the breadth of issues being covered is vast, which means that... again, and this is different from being an academic, where you specialize in some area and become very, very good in that special area. When you're working in government, and you're working at that level, you have to be a generalist. So you have to be able to suppress that constant feeling of, I don't quite know

(09:57):
exactly what I'm doing here. I don't know, I'm not the expert on this and someone else must be. And do the best you can. Reach out to expertise when it's appropriate. Sometimes you can reach out to outside experts. Sometimes you can't, based on the rules around what you're doing. But you have to do the best you can. I think a lot of policy work is engineering that way. You do the best you can with what you have at the moment. You can't wait for the perfect solution to appear.

(10:20):
That's not a luxury you have in the policy world.
What was the function or vision for the Blueprint for an AI Bill of Rights, which was really one of the central things that you were involved with developing?
The premise, I think, was that coming into the Biden administration, it was becoming

(10:42):
very clear that questions on AI, especially questions on AI in society, were going to be central to the administration's thinking about race and equity within the government, and about how to think about tech policy more broadly as these questions were surfacing. But there wasn't a very clear articulation of

(11:04):
what it is we should be caring about. There were different ways to think about it. I mean, the EU had started their process by then already, and they were well into their process for the AI Act. And different groups were putting out their own arguments about what to be thinking about. A lot of that was very centered around the technology itself. A lot of us said, OK, here's a piece of tech, here's what you should do. I think the vision inside OSTP that folks were thinking about was: what does it mean

(11:29):
to center people's rights and impacts on people first, and think about the technology second? And that, I think, was in many ways a shift from how the discourse was happening around tech, but a shift that needed to happen. And so the overall effort to generate the blueprint was to say, if government is in the job of understanding what people need, understanding and protecting people, even

(11:52):
from themselves, what does that look like in an AI-powered society? What rights should people have as we barrel towards this algorithmic future? And that was the genesis of the work that led to the blueprint.

(12:12):
Could you summarize what the blueprint accomplished?
Well, the blueprint had sort of five things in it, easy-to-state things: anything, any AI system or any technology that affects our lives in any material way, our rights, our opportunities, our, you know, our access to vital services, should be safe and effective.

(12:35):
It should work, and it should work for the purpose it was intended. They shouldn't discriminate. They should use our data very sparingly, if at all. They should be transparent and accountable, and we should know how they work and why. And we should have, you know, what I like to call dial zero for operator, right? You should have a backup option, have a way to sort of talk to an actual person.

(12:55):
So you're not constantly relying on going back and forth between different pieces of technology or chatbots or whatever. That's it. The rest of the document really is about how you go about doing this and why we need these in the first place. So I think there are two key parts to what makes this document, you know, not three pages but 75 pages, which is really making the case for why these principles are important. A lot of people are like, well, that all sounds great, but who cares?

(13:17):
And so one of the things the document does is document very clearly what happens, or what has happened in the past, with evidence, when one of these principles, these rights, gets abrogated. And so that was a part of the document. Another part, and this was very important to us since we were, you know, computer scientists sitting in OSTP saying: none of this is a pie-in-the-sky ideal. None of this is, you know, crazy stuff that no one can do.

(13:39):
This is real. This is practical. This could be done right now, and here's how. And so listing out a bunch of different things that you could do and questions you need to focus on, some of which would need new research, but some of which could be done today. And so that was what the document had.
In terms of impact, it's been almost two years now. And I think I would say that the major impact the document has had, and that's

(14:00):
been personally, pleasantly surprising to me, is how it has changed the discussions around how we talk about AI in society, right? It has done exactly what it set out to do, which is put a marker for how the government views the role of the government, of regulation, of protections for people in the age of AI.

(14:21):
And you can see the DNA of the blueprint in the executive order and the OMB guidance and a bunch of state legislation, and in the number of states that have called for their own AI Bill of Rights. And that's sort of a pleasing thing to see.
Is it still relevant and useful after the Biden administration executive order, the NIST

(14:42):
risk management framework, potential legislation, and so forth? Is there still value in the blueprint?
Definitely. Because I think one thing that has become very clear, and that was clear when the document was being written, was that it was a first step, not a last step. Right? So it laid out these principles.

(15:03):
And if you notice, when I said technology, I wasn't very specific about AI. It applies to all kinds of systems, which means that in some degree, not in every degree, but in some degree, it's future-proof. It says, you know, when you have new systems that are going to make decisions about our lives or assist in decisions about our lives, these are the principles you need to have. I don't care if it's generative AI. I don't care if it's predictive. I don't care what it is.

(15:23):
I think to that extent, as new technology is coming out, as new things are coming out, it's very easy to go back to the blueprint: OK, how should we care about this? It's still there, and it's still valuable. So I think it is continuing to be a reference point for organizations, for governments, and for students, and for people thinking about AI and society broadly.

(15:44):
And that's very, very powerful.
What's your view about the executive order that was subsequently issued by the Biden administration?
I think it's quite good. I think it covered a lot of different areas. It was a very long executive order, I believe almost the longest in history. That is what it is, but it also shows that you needed to have something elaborate,

(16:07):
because this area is so complicated and touches so many different parts of government. So I think the administration, and I would say I had nothing to do with the putting out of the executive order or the drafting of it, so saying that upfront, I think it did a good job in covering all the different areas that people had been concerned about, and trying to make progress on them. And we've seen since then, you know, the 90-day deadlines, the 180-day deadlines,

(16:30):
all the things that have been happening. We've seen the OMB guidance come out; it's been quite solid in its own right. I think there are obviously, again, there are places where it could have done better. And there are places where we would hope that more would come. I think there are big gaps, for example, in thinking around criminal justice and thinking about law enforcement more broadly and its intersection with national security. There's obviously the national security memo that will eventually, in some form,

(16:53):
you know...
Some of it will be classified, some of it might appear. So there are things that can be done. But I think that's the point, right? It's a constant process of refinement. And the blueprint said, OK, here's what we need to do. The executive order is like, here's how we need to concretize this. The OMB guidance says, here's how we need to concretize it even more specifically. And I expect this to keep happening. I think if we just sit back and say we're done, that'll be the mistake.

(17:14):
We're not done.
Yeah, and it frustrates me somewhat, and I would guess probably you too, when people dismiss what's happened in the US and say, well, there's no enforceable comprehensive AI law and therefore all of this is just principles. But it sounds like what you're saying is that at least there has been a good response to the blueprint, and an understanding that there's a lot more to

(17:35):
it.
Yeah, and I think, you know, I mean, Congress is working on what Congress will work on. States are working on things. Colorado just passed, I think, what would be regarded as the first comprehensive AI accountability legislation, which is not bad at all. Other states are doing this. California is working on one of their own. And, you know, so it's happening, I think. And that momentum is very important.

(17:56):
I think people don't realize that we tend to want the shiny thing, the one thing that will solve all our problems, you know. Government is complicated, society is complicated, you shouldn't expect that. In fact, the only way for these things to work is to bring everyone along, which means bring governments along, bring the private sector along, bring companies along who put out their own completely voluntary guidelines that they will

(18:18):
follow, where people could say, well, no one's forcing you to do it, why are you going to do it? But they're going to do it anyway, because companies want to make sure that they are not doing things where they could get into trouble later, or where it could affect their brand or affect their image, as you know so well. I mean...
People are going to do this, and the trick is to bring everyone on board and bring them along for the ride, rather than saying that we need the one, you know, shiny

(18:40):
thing that everyone will follow. And that's complicated. It looks multi-dimensional. It looks tricky. It looks like NIST. It looks like, you know, responsible AI practices within, you know, a company. It looks like what the IAPP is putting out. It looks like all these things together; each one of them is not going to solve all the problems, but together they can.

(19:00):
I mean, what's most important to create that alignment of incentives? Because you're right, just the fact that companies don't have legally enforceable obligations doesn't mean that many of them don't actually want to take serious action. But there are cases where there are gaps when there isn't enforcement. So do you have any thoughts about what, from a design standpoint or from what

(19:22):
comes out of places like the White House, can align those incentives properly?
I think...
It's hard to align incentives only with carrots. You do need some sticks. There's no way around that. I know there's a lot of the tech sector advocating for purely voluntary guidelines.

(19:42):
They will not work. Not because companies are, you know, evil and are going to do bad things. It's just that, again, ultimately, if you're in the business of running, if you're in the business of running a business, your goal is to generate profits for your shareholders. And that is going to take priority. And if you don't have
a forced incentive to do the right thing, you're going to try to find ways to do as

(20:05):
little as possible. And that's not, it doesn't make you bad, it just makes you a natural creature of your own incentives. So you do need some sticks. So I think the role of government is critical. The role of the regulatory agencies is critical. But I think that itself won't be enough, right? If the only thing stopping us from robbing our neighbors was the police, that would not work. There aren't enough police to stop us from robbing our neighbors. It's also just that we don't want to, I hope.

(20:27):
All of us don't want to do that. We just don't think it's the right thing to do. And I think, as an academic teaching students who are going out and working at these companies, part of my theory of change is we just need to play the long game as well, and just change how we think about deployment of AI systems, so that in a future very close by, not putting out a system that has safeguards and bias

(20:52):
mitigation will seem like the dumb thing to do. And anyone putting out a system like that will get a, what, are you crazy? You didn't put in bias mitigation? Which is what we do now with other kinds of testing. If you put out a system that doesn't do some basic unit testing, your software people, your project management, are like, are you nuts? I want to have the same are-you-nuts reaction for guardrails and responsible AI practices as well. So that just becomes a matter of course.

(21:12):
And I think we can get there.
Obviously, the European Union has a somewhat different approach, or at least in terms of the order: they have the AI Act that has been adopted. It's a very detailed, comprehensive, prescriptive law. So what do you think they got right and what do you think they got wrong?
I think there's a lot they got right.

(21:33):
I think they were, you know, you've got to give them credit. They were the first ones to pass a law, and everyone outside the US, and maybe even within the US, is going to sort of adjust what they're doing now based on the AI Act. A lot of countries are looking at what the AI Act looks like and they're trying to shape their own laws. They're going to say, okay, we're going to work with the EU. We need to adjust accordingly. So there's a lot that's good in there. I think they had some struggles with, you know, scoping the definition of

(21:56):
AI. I think there's a lot more to be seen about how their own...
government office that they're going to set up to do some of this evaluation and auditing plays out. So I think it's early to say how this will work. I think there are some good signs and we just have to wait and see.
What does it take, going forward when there is legislation, to get academics and

(22:20):
others from the private sector to engage in working through some of those issues?
I mean, it's hard, right? Because for one thing, you would like academics, for example, to work in government, like for people to work at NIST, at the AI Safety Institute, right? Congress has put up this ad for people to help with AI safety work. And if you're getting a salary of roughly $3 gajillion to go work in Silicon Valley,

(22:43):
it's hard to go work in the government for much less. So that's one problem, I think, at least in the US. The economic incentives for undergrads... I mean, I talked to a lot of students; they would love to go work in AI for society, but the kind of salaries they see just working in Silicon Valley are very high. That's one thing. I think there's also just timing: if you're working as an academic in an institution, if you're

(23:05):
pre-tenure, for example, it's very hard to do this kind of work. If you're post-tenure, maybe you can. But then the funding sources, figuring out all these logistical details, can be tricky to do. I think it's also hard because, as I said, you need to be very specific for an organization as to what they need, which means almost that you need to be embedded within an organization or have a very close understanding of that organization, which means either that you're working for that

(23:29):
company, or you're consulting with them in some way; but then you also have to work with government or talk to people who work in government. And so some of these communication pathways are hard to navigate. You can't always speak freely about what you want to sort of understand. So there are a lot of, I think, challenges to creating the...
the open lines of communication that are needed to share knowledge about what has

(23:51):
worked. I'd love to sort of say, I'm working at company A doing this. Hey, you at company B, what have you been doing? Can we share knowledge about this? And there was a little bit of that happening. Even about a couple of years ago, there were folks in Silicon Valley, all the responsible AI teams, that were talking to each other about these things. But then again, you know, various things happened. The tech companies decided to retrench and those teams got dispersed.

(24:13):
And right now, I obviously don't know how well those teams are connected. But you need that kind of knowledge sharing as well.
So since returning to academia, you started a center at Brown on some of these issues. And I am curious. It's the Center on Technological Responsibility, Reimagination, and Redesign.

(24:34):
So responsibility, I think I understand. What's the reimagination and redesign?
I'm very glad you asked that, because that's very important to me, the other parts of this. So I think we all understand what tech responsibility looks like, and putting in the right guardrails, the right protections. But as a computer scientist, I think that...

(24:56):
we have an opportunity to not just design protections around systems that are already out there, but reimagine and redesign systems that are sensitive to and address the needs of all people from the get-go.

(25:19):
For decades, I think, computer science has viewed itself as a discipline that is sort of people-free. We build the tools, we throw them out there, we don't worry about what happens once we throw them out there. We don't even worry about why those tools are being asked of us. And I think we can no longer do that. It's very clear that, you know, even a simple... let me give you a simple example.

(25:40):
It's nothing to do with technology. So ramps: ADA-mandated ramps for people, you know, in wheelchairs, and whoever has mobility issues. You could think of it as a solution that only helps people with disabilities. And it's true, it does help people with disabilities. But every time you're lugging a large suitcase up the stairs and you can take

(26:03):
the elevator instead, because it has to be there in a New York subway since it's ADA-mandated, everyone is benefiting from this, not just the group that ostensibly this was designed for. If we could have designed those kinds of systems thinking, okay,
we are building systems that are going to affect a lot of people.

(26:23):
People have all kinds of different needs. How do we reimagine and redesign our technology from the get-go? So we are thinking about access for all, and not just access for the most, or access for some. That opens up a whole new set of innovations in computer science, a whole new way of designing tools that is a bit more sensitive to what people need.

(26:46):
I mean, we've had...
fields like human-computer interaction, we've had people-centered computing, we've had these concepts for a while. We need to, I think, go beyond human-centered and individual-centered to people-centered and society-centered computing, to build systems that actually help us do what we want to do, as opposed to building systems that are making profits for some

(27:11):
companies that we sort of shoehorn ourselves into. So that reimagining and redesign, I think, is a way for computer science to be useful, as opposed to fighting and playing on the defensive. And I think that's really important. I think, you know, it's been important to identify all the places where we need to fix errors in technology, but we have to think positively about the future as

(27:32):
well. We can't just be playing defense. And that's, I think, the core of what, you know, I would like the center to represent: not just issues of responsibility, which are very important, but also a better future.
What might that actually look like? Is it essentially talking to lots of stakeholders in designing systems?

(27:55):
It must be something more than that. So what, in practical terms, are you thinking about?
So that's the thing, right? I mean, it's hard to give one, because every example looks so different and so distinct, right? There are only so many things I can talk about right now, because we're still doing work in progress. We haven't got papers out to talk about yet, so I don't want to sort of

(28:16):
indicate vaporware where it doesn't exist. But I think something as simple as, I mean, you mentioned stakeholders, but it is an important point, right? We did a workshop last year where we were trying to hear from community partners about what they want from technology. And there was a

(28:36):
constant refrain of, you know, we want more joy. We want to be able to use technology to help us; help us build that. So sometimes it involves just not being the lead, but sort of standing back and helping others craft their vision of what technology can do for them and helping them build it. And yes, so it does involve a lot of stakeholder engagement, but it's not stakeholder engagement to say, okay, we have this product, we want to know what you think of it. It's more like: you said you wanted to do this thing. How can we help you?

(28:59):
How can we help you build what you want? And how can we help that, you know, scale out? I mean, there's, you know, I think of, for example, the work that folks are doing building machine translation and NLP tools for processing the Maori language, which was an effort that was done very locally within New Zealand, you know, with a local group that wasn't hooked up to OpenAI or hooked up with

(29:20):
these larger companies, but that was done with stakeholder engagement and has produced a tool that's very, very helpful for people who need it, in the way that they want it. And I think we can do more of that, and that will actually help us do better computer science as well. So I'm still figuring out exactly what this looks like. I think we're working on individual projects with different groups.

(29:40):
Stay tuned.
Yeah, well, it would seem like potentially generative AI offers some of that opportunity. And now it's not just a matter of, I want to build this product or this system, let me get a bunch of data scientists to go and work on it. It's: I talk to the machine and tell it what I want.

(30:02):
Yeah, except: who built the machine? I think you're right. I think there is a new sort of platform being created around the use of generative AI. I'm not sure OpenAI should get the benefit of all of this, though. I'm not sure Google should get the benefit of all of this. Even for all its openness, I'm not sure Meta should get the benefit of all this.

(30:25):
So I'm a little worried about that idea. I think, on the one hand, I see the energy and the excitement, and it's really cool to play with, you know, chatbots, and play with GPT-4 and play with GPT itself, and sort of see what it can do for you. It's kind of fun. And that fun is important. I just want to make sure we're not contributing to the capture of yet another environment by a few companies who are working very hard to make sure they

(30:48):
have it locked in and no one else can play. I think if they're coming up with, you know, smaller language models, things that anyone can access; ways, you know, like, for example, the National AI Research Resource having access to compute, to build small models that anyone can play with, that would be good. And we should focus on that.

(31:10):
There's also this institutional trust question, right? Right now, people are skeptical of big companies, but they're skeptical of traditional intermediaries and governments and other kinds of sources of trust. So how can people get the kind of confidence they need in these systems in

(31:32):
this kind of environment?
Yeah, that's difficult. And I think I don't want to drown in a sea of negativity either, because that also doesn't get you anywhere. And I think a constant suspicion of everything is not a way to live either. We do have to have some level of, if not trust, then at least a suspension of disbelief for a while, while we play with some tools.

(31:56):
I think we need more experimenting. I think we need less hype. We need to be able to... I think if we could filter out 99% of the hype around these tools, we'd be in a place where we could actually credibly evaluate them and think about what they could do for us and what we want. The hype just makes it so difficult that we either jump into the hype and lean into it very heavily, which is, I think, a mistake, or we try to run away from

(32:19):
anything to do with these tools. That is also a mistake. And so I think the first part of rebuilding trust is to recognize that the hype is just that. It is to say: if we ignored the hype, what can we do with an LLM? What can it do individually for us? And then learn from that. I mean, I'm being very wishy-washy here.

(32:40):
I don't have a good answer for you, to be honest. But the first step is to separate the hype from the tools. And that's really important.
No, absolutely. And we've got to take the first steps in order to get to the next steps. That's a really important point. I also struggle with the issue of trust, because I find myself reflexively distrusting some of the claims made about LLMs, when perhaps even I shouldn't be

(33:04):
doing that. I should at least take them at face value to a degree. But we don't really have good methodologies. We don't have rigorous frameworks. We don't have what I would call strong scientific evidence for so many of the claims made about LLMs, so that there's a constant eye roll, like, my God, here's another paper making a claim that tomorrow will be disproved. And you know, it's just... You just have to kind of ride those waves a little bit.

(33:27):
No, absolutely.
Incredibly challenging, but also incredibly important.
Suresh, thank you so much.
It's been a really wonderful conversation.
Thank you very much.