Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Thank you so much for joining me on the podcast.
Thanks so much for having me.
You have somewhat of an interesting background, with degrees in both law and statistics.
How does that combination contribute to the work that you've done on AI ethics?
So I think AI ethics is a particularly great space for folks that have diverse backgrounds, in that it is such a socio-technical space.
(00:29):
On the one hand, there's a strong technical component, of course, in terms of how AI technologies work, how various mitigation strategies work, in terms of how you can address
issues at the data layer, at the model layer.
But at the same time, especially nowadays, there is a significant legal and regulatory component
to AI ethics.
(00:49):
It's not just a matter of what any individual developer thinks would be better from an ethical perspective, but we increasingly do have different rules to comply with as well.
And so figuring out how to bridge those two elements is a lot of the challenge in the AI ethics space, but one that really attracted me to it, since it's one of the rare
(01:14):
subfields where you can actually leverage both of these areas fully in your day job.
So that's something I really appreciate about it.
Yeah, and similarly, you spent most of your career as a research scientist, but then moved over to Sony in this role that you're in now.
What made you decide to make that switch?
(01:35):
Yeah, so a few things.
So for one, it was just a very unique role, both in terms of leading a research team, but also leading the AI governance function in terms of more of the legal and compliance side
as well.
And I thought that was really exciting, because one without the other can often feel a bit siloed.
(01:57):
So if you're just doing research, but you don't have any connection to the governance side
of the organization, then your research goes out into the world, but there's not a clear funnel in terms of how it makes an impact within the company.
How do you actually implement the research you're doing?
And also, how do you actually learn about what the real world challenges are that folksare experiencing?
(02:18):
And then on the flip side, just legal and compliance without any connection to the research side can often be sort of miles behind in terms of what the latest and greatest
research is around what
you can actually do from an AI ethics perspective.
The field is moving so fast that, you know, every single month we have new publications coming out on different techniques.
(02:42):
And so I think it's really great to have both of these hats, so we can move more at the cutting edge when it comes to AI ethics.
How much do you think that that's a cultural issue or a company issue?
There's some companies that have a very technical culture.
There are others where it's more management driven.
(03:03):
Do you think that this kind of approach is the right one for everyone, or is it something particular about Sony and companies like it?
That's a good question.
I think it varies a bit.
Like, not every company can afford to have a research team that's doing more kind of basic research, in a way, in terms of trying to push the frontiers of what is possible
(03:24):
in AI ethics.
It is much more common to have folks that are more focused on just implementation when it comes to kind of the compliance side of things.
That said, I think there's been an interesting shift in recent years in terms of how AI ethics in general has been considered.
It used to only be the major tech companies that would have anybody whose full-time job was on AI ethics, but especially with recent laws coming into place now, increasingly,
(03:54):
especially in terms of AI lawyers, you're seeing more positions like that at companies, because...
It is quite difficult now, if you are an AI developer, to ensure that you're in compliance with all of these laws, and having someone with that specialized expertise is quite
important.
So from that perspective, I have seen a lot of growth in that area, while the research is maybe still concentrated in a few major companies plus academia.
(04:23):
And you've been in this role at Sony for, I think, close to four years now.
How has the AI governance structure and activity developed in that time?
Yeah, so we've made tremendous strides during that time
in terms of how our structure looks overall. So Sony is a huge company with very diverse businesses, spanning electronics, entertainment, and financial services.
(04:48):
And so, given that diversity, we have in each business unit a CEO-appointed AI ethics executive to work with my team, which is the AI ethics office within headquarters.
And so we work with each of them to develop AI ethics structures within their business units that can assess AI cases, flag high-risk cases, and also work with them on
(05:16):
compliance with relevant laws plus internal policies on how AI is developed or used.
And so, you know, with some of our business units, we have a more hands-on approach,
where we actually are very involved with many of the day-to-day assessments.
For example, a lot of our early work was with our electronics business units, where there was already a lot of kind of structure around how they evaluated their technologies
(05:44):
because in the hardware space, of course, there is a lot of consideration and quality assurance that's necessary before you ship out any product.
And so we were able to layer on AI ethics as part of that
and be very involved with them in developing that structure.
And now we're working more in the entertainment space, looking at how we can structure ethics assessments there.
(06:11):
But I do think we've done an impressive job in terms of just, like, the scope and depth of the type of AI governance that we have in the organization, especially just given how
diverse the different AI applications are.
It's an interesting challenge, because presumably you want to address the needs of the unique businesses, but you want to have some consistency in the approach.
(06:36):
So how do you make that balance?
Yeah, it's always a constant kind of back and forth.
So a key part of my team's role is harmonization.
So for example, when it comes to internal policy, sometimes we'll put out a first draft, or at least, whenever different businesses come out with their versions, we'll ensure that it
(06:57):
is harmonized with ours across businesses as well.
So there's the allowance, of course, for businesses to have specific
approaches that are necessary for their own business idiosyncrasies.
But at the same time, we want to make sure there's a sufficiently unified Sony approach, such that it's not very confusing for employees.
(07:21):
It doesn't create unmitigated categories of risk, all of that.
So, yeah, it definitely keeps my team busy in terms of trying to be that central node.
And you have something called the AI ethics flagship project.
What does that mean?
Yeah, so that's on the research side.
So Sony research is structured around these major research initiatives, which we call flagships.
(07:47):
And on the ethics side, a lot of our focus today has been on topics of ethical data collection and algorithmic bias, since these are some major areas where,
even though there's been a lot of awareness and discussion of these issues in recent years, there's still a lot of challenges in terms of how folks actually do this in
(08:10):
practice.
So ethical data collection is a great example of that.
When we first started working in this space, it was a bit more of a niche issue.
Nowadays, it's one of the biggest topics, because especially in the age of generative AI, folks are very concerned with the huge amounts of data that are being
ingested by models, typically uncurated data.
(08:31):
And so you have all sorts of challenges in terms of issues of consent, intellectual property, biases within the data, transparency around the collection process, compensation
of individuals involved in the data creation process.
So there's so many of these issues, and it's easy to kind of list out the issues, but it's very hard to say how would you actually try to tackle all of those
(08:54):
and address them, even at a small scale, let alone at a big scale.
So there's a lot of research questions within that that we've been working on as a result.
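To make the data-collection concerns above concrete, here is a minimal, hypothetical sketch of how they could be tracked as a per-source provenance record. The class name, field names, and review rules are illustrative assumptions, not a description of any actual Sony process.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DataSourceRecord:
    """Illustrative per-source record covering consent, IP, transparency, and compensation."""
    source_name: str                         # where the data came from
    consent_basis: str                       # e.g. "opt-in", "licensed", "public-domain", "unknown"
    license_terms: Optional[str] = None      # documented IP / usage terms, if any
    collection_method: str = ""              # transparency about how the data was gathered
    contributors_compensated: bool = False   # whether data creators were paid
    known_bias_notes: List[str] = field(default_factory=list)  # documented skews in the data

def flag_for_review(record: DataSourceRecord) -> List[str]:
    """Return human-readable flags for gaps that would need follow-up before use."""
    flags = []
    if record.consent_basis not in {"opt-in", "licensed", "public-domain"}:
        flags.append("unclear consent basis")
    if record.license_terms is None:
        flags.append("no documented license or IP terms")
    if not record.contributors_compensated:
        flags.append("contributor compensation not confirmed")
    if not record.known_bias_notes:
        flags.append("no bias documentation for this source")
    return flags

# Hypothetical usage with a made-up source:
crawl = DataSourceRecord(source_name="web-crawl-2023", consent_basis="unknown",
                         collection_method="broad web crawl")
print(flag_for_review(crawl))
```

The point of a sketch like this is not the specific fields, but that each concern in the list becomes something a team has to explicitly answer or flag rather than leave implicit.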
It is interesting that you're still engaged in publishing research, and you were the chair of the FAccT conference, which is one of the major conferences in the field.
(09:15):
It's hard enough for people who are full-time researchers to keep on top of what's going on, even now in some subset of AI ethics and governance.
So, how are you able to pull in insights from the research process in ways that are helpful for wearing your AI governance hat?
Yeah, so it's kind of a mixture of things.
(09:37):
I will say there's a little bit of code switching.
On the one hand, we're trying to push the cutting edge in terms of research.
And then on the other hand, there is the more practical implementation side.
Now, there is a little bit of overlap in this Venn diagram, in that a lot of what's tricky from an implementation side is that last mile, which is where research tends to start
(10:00):
getting a bit more nebulous in terms of
what to do, but if you kind of dig a few layers deeper, that can also be interesting fodder for research questions.
Because again, it's sort of very easy to kind of list out the desiderata, but going to, like, how would you actually do this in practice?
(10:20):
There are, like, a bunch of challenges within that that can also be interesting research questions in and of themselves.
But that's one thing that we try to balance: on the one hand, with the research team, giving them enough room to work on projects that are a bit more high level in terms of
(10:42):
what should ideally happen, but also trying to tie it to some practical elements too, such that we can learn from that in advising the businesses.
For example, going to the fairness evaluation side of things,
there's a lot of talk about the need to assess systems for bias, but it's still quite difficult in practice.
(11:07):
Like, if you talk to a business unit, there's, like, a lot of very basic questions they have that aren't really quite rigorously answered yet.
So, you know, it's like, okay, what makes for a good fairness evaluation?
What is sufficient?
Like, what actually counts as problematic bias?
And if there are confounding factors, is it still
problematic? What kind of magnitude and direction of bias is problematic?
(11:34):
And there's all of these sorts of questions that are actually really challenging to resolve and can't really be answered once and for all.
And so that can also be kind of interesting to drill down as well from a researchperspective.
How do you answer those questions?
So in the absence of very clear best practices or regulatory standards on this, we tend to take a case-by-case approach with many of these things.
(12:05):
So, for example, when it comes to these questions of, like, how bad is bad, you know, we try to tie it to: what are the actual harms tied with this sort of use case?
So, you know, it's a very different question if...
If you have, for example, and we don't have these applications at Sony, but if you hypothetically had facial recognition being used in a law enforcement context to
(12:32):
identify potential suspects, then any sort of false positives are very harmful.
And so you would really care especially about trying to minimize those, and kind of...
the rigor with which you would want to evaluate bias issues is extremely high.
(12:53):
Whereas if it's a context where the AI system is maybe just, like, roughly counting people in a crowd to say, like, there's too many people in an area, then you care a little bit
less.
I mean, you still care about fairness issues, but it's a little bit less sensitive in terms of if the accuracy is a bit lower for one group versus another.
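As one hedged illustration of how the "how bad is bad depends on the use case" point could be operationalized, the sketch below compares false positive rates across groups against a tolerance that a team would set per application: tight for something like the law-enforcement example, looser for rough crowd counting. The function names and tolerance values are assumptions for illustration only.

```python
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """FPR = FP / (FP + TN) over binary labels; NaN if the group has no negatives."""
    negatives = (y_true == 0)
    if negatives.sum() == 0:
        return float("nan")
    return float(((y_pred == 1) & negatives).sum() / negatives.sum())

def fpr_gaps_by_group(y_true, y_pred, groups, max_gap: float) -> dict:
    """Compare each group's FPR to the best-performing group and flag gaps above
    a use-case-specific tolerance (max_gap)."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {g: false_positive_rate(y_true[groups == g], y_pred[groups == g])
             for g in np.unique(groups)}
    best = min(r for r in rates.values() if not np.isnan(r))
    return {g: {"fpr": round(r, 3),
                "gap": round(r - best, 3),
                "exceeds_tolerance": (r - best) > max_gap}
            for g, r in rates.items()}

# Hypothetical usage: a high-stakes application might set max_gap very low,
# while a low-stakes one might accept a larger disparity.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([0, 1, 1, 0, 1, 1, 0, 1])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(fpr_gaps_by_group(y_true, y_pred, groups, max_gap=0.01))
```

Nothing here answers which tolerance is right; that judgment stays tied to the harms of the specific use case, as described above.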
(13:17):
What would you say are some of the most important unanswered questions in the research world in the fairness and bias space?
Great question.
I think there's a lot of things.
So I guess first off, in terms of fairness and bias, I think there's a significant lack of fairness benchmarks in a lot of areas.
(13:37):
You have individual papers that have assessed particular models for bias concerns, but there's not a clear set of benchmarks that everyone can use and then compare their models
reliably.
And I think that
is very important, to actually make this part of an everyday process.
If every single model developer kind of needs to ad hoc create their own test, essentially, that both doesn't create the best incentives and also makes it really difficult for
(14:09):
third parties to assess how biased is this model versus that model.
So I think that's one major area that's missing.
In addition, there tends to be a lack of
demographic data that would enable folks to do these sorts of checks.
And so typically, you know, in the literature, people assume, yeah, you know who is male or female, or, you know, who is of a different racial background and such.
(14:38):
And that's not really how things work in practice.
And then, layered on top of that, there's a lot of questions around appropriate taxonomies.
So, for example, in fairness, it's very common for people to just have male and female
categories, and then of course not everyone identifies in those categories, and there's a lot of unanswered questions in terms of how to address the limitations of taxonomies in a
(15:08):
practical and scalable way across a wide variety of applications, especially given that that sort of data is very sensitive.
Like really any data that's used for fairness assessments tends to be very sensitive.
So these are some areas that, like, researchers on my team have worked on and that I also encourage other folks to explore as well, because I think what's really interesting about
(15:31):
AI ethics is we're never going to fully solve any of these issues.
The best we can do is put out solutions that make incremental improvements in certain dimensions, but there's no such thing as, like, a perfectly ethical
solution and everything has trade-offs in the ethics space.
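Picking up the earlier point about taxonomy limitations and the sensitivity of demographic data, here is one possible, deliberately simplified way such constraints sometimes get handled in practice: keep an explicit bucket for people who opt out of or don't fit the listed categories, and suppress results for groups too small to report safely. The function name, group key, and thresholds below are hypothetical.

```python
from typing import Callable, Dict, List

def metric_by_group(records: List[dict], metric_fn: Callable[[List[dict]], float],
                    group_key: str = "gender", min_group_size: int = 20) -> Dict[str, dict]:
    """Compute a metric per self-reported group without forcing a fixed taxonomy.
    Missing or opt-out responses get their own bucket; tiny groups are suppressed."""
    by_group: Dict[str, List[dict]] = {}
    for r in records:
        g = r.get(group_key) or "undisclosed"   # don't force everyone into preset categories
        by_group.setdefault(g, []).append(r)
    results = {}
    for g, items in by_group.items():
        if len(items) < min_group_size:
            results[g] = {"n": len(items), "metric": None, "note": "suppressed (group too small)"}
        else:
            results[g] = {"n": len(items), "metric": metric_fn(items)}
    return results

# Hypothetical usage: per-group accuracy where some people declined to answer.
records = [{"gender": "female", "correct": True},
           {"gender": "non-binary", "correct": True},
           {"gender": None, "correct": False}]
accuracy = lambda items: sum(r["correct"] for r in items) / len(items)
print(metric_by_group(records, accuracy, min_group_size=1))
```

This doesn't resolve the taxonomy question; it just makes the chosen categories, the opt-out bucket, and the suppression rule visible and adjustable rather than baked in.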
Yeah, absolutely.
Well, so let me ask you a related question that's more on the practical side.
(15:53):
Let's say someone's listening to this from an organization and they recognize, let's say, take fairness.
It's an important issue.
But as you know, it's an issue that crops up everywhere, from the training data to the training to the outputs and everywhere in between.
So where should an organization start?
Yeah, I think the most important thing is to start somewhere, actually. That one step of just starting is where things usually don't happen,
(16:23):
because, you know, in practice, what I've seen on the ground is typically people get stuck at that first layer. They're like, well, you know, I don't really know where I
would get this data to do this fairness evaluation, or I don't really know what would be the best way to test my model.
And I think what's important is to at least start doing something, getting development teams used to thinking about these issues, and whether they're applying the latest and greatest
(16:52):
methodologies or not is less important than having some sort of baseline of actually checking for these issues.
Overall in the industry, I think the bigger problem is that folks just don't even check for these things.
It's very common for folks to just kind of
throw their hands up and say, well, you know, I gave you all the information about, like, general performance and such.
(17:15):
And that's what, you know, downstream users or clients care about.
But if we don't even start thinking about how to do this in practice, it's hard to make any improvements.
And that's something we really, you know, stress on our end: even if you can't do the best fairness evaluation, just try to at least check
(17:35):
for something, because it can be very eye-opening as well from the developer's perspective, because inevitably there's pretty much always some degree of bias.
And it then leads to those questions of what is problematic and such.
But those are very useful questions for folks to start interrogating as a first step.
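In the spirit of "just start checking something", here is about the smallest first check a team could run: compare the rate of positive model outcomes across whatever group labels it can responsibly obtain. This is a hypothetical baseline sketch, not a complete fairness evaluation, and the column names are placeholders.

```python
import pandas as pd

def outcome_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Positive-outcome rate per group, plus the gap versus the overall rate.
    It won't say which gaps are acceptable, but it surfaces whether a disparity exists."""
    rates = df.groupby(group_col)[outcome_col].agg(["mean", "count"])
    rates = rates.rename(columns={"mean": "positive_rate", "count": "n"})
    rates["gap_vs_overall"] = rates["positive_rate"] - df[outcome_col].mean()
    return rates

# Hypothetical usage with placeholder data:
df = pd.DataFrame({"group": ["a", "a", "b", "b", "b"],
                   "approved": [1, 0, 1, 1, 1]})
print(outcome_rates(df, "group", "approved"))
```

Even a check this small tends to produce the "is this gap problematic?" questions described above, which is exactly the conversation worth starting.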
(17:57):
So then how do you get the conversation to go from there, given, as you say, it's not that we found the bias and we just take it out, because nothing is perfect?
How does an organization use that information constructively?
So this goes into more of the kind of diagnosis process.
So as I was mentioning, how big of a bias and what kind of bias matters depends on the particular application, because it's not always just a technical fix.
(18:28):
Sometimes the fix can be more about the human-computer interaction, for instance.
So if you have an AI model where you are worried about certain shortcomings, of bias or otherwise,
one mitigation strategy is, like, okay, a human is checking the output, for example, or a human that is, like, well trained, or alternatively, the output of the AI model is only one
(18:54):
component, or you provide more transparency about the shortcomings of the model.
So there's these additional levers that can be pulled in addition to the technical ones.
And of course, like with the technical ones, if you know about the...
bias from a quantitative perspective, that's an important first step there.
So I think the first step tends to be the hardest, and then it kind of opens up the door to these other possibilities in terms of how you can address these issues.
(19:23):
But this whole conversation stops if folks just throw up their hands and say, we can't even check for this property.
How do you overcome that resistance or skepticism, whatever it is, whether it's from developers or business people who say, of course, bias is bad.
These are all problems.
But my job is just to make the system work according to certain performance criteria.
(19:46):
It's a very challenging question.
It's definitely, I would say, an area where buy-in is extremely important, and executive buy-in and internal policy-making are quite important here.
Because basically you want this sort of thing to become habit.
And once it becomes habit, it's not so onerous for people, because it's always hard when you first start out and you come in in the middle of someone's development cycle and you
(20:13):
say, hey, I want you
to do, like, a fairness assessment, or I want you to fill out this model card, or I want you to tell me how you interact with your stakeholders and what you told them about the
functionality of this model.
That always is tricky at the beginning, because there's no way around it in terms of there is work involved in checking for these things.
(20:35):
There's no way to just, you know,
figure out all this information without somebody actually putting pen to paper and thinking about it, and often running some analyses.
But once it becomes more of a habit, people factor it into their overall timelines and it becomes just another part of the process.
(20:55):
You know, in terms of my role, I've seen it kind of both ways.
You know, when we first came in, it was a lot more of, like...
the pushback at every single stage and like, do we really need to do this?
Why do we need to do this?
Like, it's going to take me this much time and that sort of negotiation.
But for the businesses where this has become much more of a habit now, like, we're getting much less of that and much more, kind of, constructive,
(21:24):
like how do we do this better?
We want this to be useful.
You know, we want
this to be at a good level and we're not just trying to tick the boxes and move on.
And just in general, how important is AI to Sony?
Whether that's in terms of machine learning generally, or in particular, I'm curious about generative AI.
(21:48):
So, let's see, how should I...?
Yeah, go ahead if you need a minute to think about it.
Yeah, sure.
So AI does show up basically in all of our businesses in various ways.
What's really interesting about working at Sony in this area is we have so many different elements of the AI ecosystem represented.
(22:15):
So on the one hand, we have technology developers in both the hardware and software context, especially in our electronics business units, where really, AI can show up in all
sorts of
different places, whether it be, like, face verification on a phone, or, like, facial detection in terms of camera technologies and things like that.
(22:38):
But we also have a lot of interest in terms of broader conversations around the role of AI in the ecosystem, especially vis-a-vis creators, because we are a company where, you know,
the one thing that unifies a lot of our businesses, both entertainment
and electronics, is the focus on creators, and that's a huge part of our corporate purpose.
(23:02):
And as anyone who's opened the news nowadays knows, it's a very active public conversation in terms of how AI is used in this space and whether it can be used to augment human
creativity versus replace it.
And that's a very important question for us from an AI ethics perspective too, one that informs a lot of our work.
(23:25):
Like, I think it's a nice connection between the goals of AI ethics and the goals of the company, that a lot of what we're doing from an AI ethics perspective is we want
to ensure that AI is in this more enhancing role in terms of making people's experiences better, enabling them to do more creatively, rather than,
(23:49):
you know, disenfranchising them or replacing them in terms of, like, you know, leveraging their works as fodder.
So, that is one nice thing that motivates our work as well.
Yeah, absolutely.
And this is a really huge question that's coming up in lots of contexts in the creative economy, whether it's copyright or job displacement or just changing the nature of
(24:16):
creative work.
How are we going to get to that result, where the AI is additive as opposed to purely just replacing all the human work?
It's a challenging question, and it's something, on the plus side, I think we have a lot of work ahead of us.
So there's not gonna be any easy resolution.
(24:41):
But in terms of my team's work, one thing we do focus a lot on, kind of going back to the ethical data side of things, is thinking very deeply about these questions, especially of
consent, compensation, the various rights involved in data.
And there's obviously an increasing legal component to that, but from an ethics side, we are just kind of taking more of the approach of: if you are trying to, as much as
(25:09):
possible, protect people's rights and make sure everyone is fairly treated in the process of data collection, what would that look like?
And kind of raising awareness of these issues and making sure that folks are checking for these things and there's education about these issues as well.
I think that's a...
very important starting point.
(25:30):
In general, I think there needs to be a lot more focus in the research field on how to bridge this sort of gap, because on the one hand, you have the need for tremendously large
data sets, where it's very hard to envision how do you actually respect all of the different rights involved with every single element of that data set.
(25:54):
And then on the other hand, you have a long list of
desiderata of what you would ideally do.
And I think we kind of need a reimagining of how that would actually work in a scalable, operational way.
And I know some of that doesn't sound sexy because I said the word operational.
And especially, like, from a research standpoint, there can be this attitude of, like, okay, that's not really our job.
(26:19):
But I think if someone can solve that, that would be tremendous because right now,
developers are in a very challenging situation, where you can imagine how you could do this properly for small datasets, but it's very hard to see how you would scale up to trillions
of data points, for example.
Yeah, absolutely.
So one more question about Sony.
(26:40):
It's a very global company, a company whose roots are not in Silicon Valley, but in Japan.
How do those cultural factors play into the work that you do on AI ethics?
So one aspect that I really appreciate in terms of working at Sony is there is a huge emphasis on diversity in our culture.
(27:02):
So that is in part due to our extremely global nature.
And that's made it a lot easier, actually, for us as an AI ethics team, especially one that focuses on issues of fairness, to make the case for why
folks in the company should care about this, because it's something that is already in kind of all of our corporate cultural materials, training materials, that emphasis on
(27:29):
respecting people of different cultures and trying to leverage diverse perspectives as a value-add in our organization.
So that for me has been really nice.
And it's also practically had that impact, so my teams are extremely diverse and global.
So on the research side,
actually, both the research team and the legal and compliance team are split around the world.
(27:52):
We have folks in the US, in the EU, in India and Japan, the UK.
So, you know, we're really kind of all over the place, and that's been very beneficial in informing our work, especially because
the AI ethics field in general tends to be very US- and Europe-centric.
(28:15):
And that of course imposes a certain value system and perspective that is quite limited.
You know, for example, even just kind of going back to the taxonomy question earlier, if we talk about issues of, like, racial bias, for example, what are relevant categories of
race? People
very commonly in the literature use, like, the US census categories, for instance, and that's not really resonant with folks outside of the US.
(28:42):
And so a lot of these insights have been raised in my team in part because we just have such a diverse team, where folks are pointing out these ways in which the perspective can
be limited if we're just going off of what is commonly used.
And one big change that has happened in recent years in this area is the coming of many legal requirements and regulations around AI, these sorts of
(29:12):
things all around the world.
How does that affect what you do on the ground from a governance standpoint?
Yeah, so it's made my job, I guess, easier and harder in different ways. Easier in the sense that, I would say, you know, when I first started my job, oftentimes a lot of what I
(29:33):
had to prove to execs is why should they even think about AI ethics.
Why is this relevant if, like, they are not the technology person in their org or the AI person there?
Or why is this a broader issue that like requires
kind of cross-functional collaboration in order to address.
(29:54):
Now that pitch is way, way easier.
In fact, it's more a lot of the other side: now I tend to be more inundated with folks asking questions like, my God, how do we comply with this?
How do we address these issues?
Like AI is now very top of mind for many folks throughout the organization.
So.
(30:14):
But of course that has also created its own form of difficulties, in terms of now, like, we have to move extremely fast in terms of figuring out how to actually wrap our heads around
all of this.
It's not just like, yeah, we're building AI governance systems because, like, we're ahead of the game and we're trying to, like, do things that are extra.
(30:37):
Instead, it's like, yeah, we actually really need to do this.
So there is more external pressure for that as well.
(30:58):
But overall, the helpful aspect has definitely been kind of moving AI ethics from an area where it's really just, like, a nice-to-have and has a very strong subjective component to
it, to something where there is much more of that demand, both internally and externally, for us to be thinking very critically about these issues.
(31:26):
And finally, what do you see as the biggest challenge that you're facing in your work in the coming years?
The biggest challenge, I would say, is that AI ethics is just such a new space that's moving so fast.
And so, you know, typically there doesn't need to, for example, be such a close connection between, like, research and implementation, because you can have, like, a 10-year lag and
(31:52):
that's completely normal.
But in the AI ethics space, the entire field of AI ethics is an extremely new field or subfield, depending on how you consider it.
And we're going from, okay, we have a lot of high-level research, to suddenly now there's these regulations that are extremely complex, but not necessarily very specific
(32:15):
from a technical perspective of, like, what do you actually do?
And there's also a dearth of industry best practices to lean on, because again, it takes a long time for there to be any coalescing around what everyone is doing in the space and
what folks think is
sufficient.
And so, you know, for my teams in this space, we're having to kind of fill that gap of, like, okay, it's not as easy as just having a
(32:43):
checklist where we can say, okay, did you go through this checklist and do this?
Instead, there's a lot of challenges to define what the right approach is.
And, you know, on the plus side, though, it keeps the job interesting and, you know, again, creates
the reason for why you would have the research side as well, tied into all of this.
(33:07):
But it is a significant challenge at the same time, in terms of having to fill that ambiguity.
Alice, it's been a pleasure speaking with you.
Thank you so much for your time.
Thank you.