Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
All right, Reggie, welcome.
Thanks so much for joining us on the Road to Accountable AI.
Thank you for having me.
Tell me first about responsible AI or trustworthy AI, pick your favorite term.
What does it mean to you?
What does it mean to a company like SAS?
Yeah, so I'm much more a fan of trustworthy than I am of responsible.
(00:23):
I'm splitting hairs though.
I think people get the sentiment.
But I'll give you a quip.
We don't say responsible electricity or responsible cars.
We just say electricity.
We say cars, right?
And we expect electricity to be guarded sufficiently and to...
(00:45):
conduct current at the rate we might need it.
We expect people to drive cars responsibly and we have standards and rules of the road, etcetera.
And I think the same should be true with AI.
Now, I do think the term trustworthy has a place because I think it's incumbent upon developers, providers of all sorts, people who deploy AI,
(01:15):
to be worthy of trust, particularly in a time like now, where there's a fair degree of negative sentiment in some regards, and certainly a large degree of concern around the
potential uses of AI.
And so I think it's particularly incumbent upon us, like us here at SAS, to make sure that we are worthy of folks' trust.
(01:42):
Now, we'll never force anyone to trust us.
You can't do that.
It's not how trust works.
But I do think that we have to prove ourselves worthy.
And one of the ways that I think is necessary with respect to proving one's trust is to demonstrate that you are attempting to provide goods and services, in this particular case,
(02:06):
technologies, in ways that are going to advance
other people, right, to help other people to thrive.
And so that's why we really focus on this notion of human centricity, which we define as the focus on an individual's agency, equity and well-being, right, to ensure that each of
(02:29):
us has the ability to thrive.
There are a few other kind of principles that we revolve around, but that's the centerpiece, human centricity.
Of course, we focus on things like inclusivity, privacy and security, robustness, and others, but that's how I see it.
What does implementing those principles mean in practice?
Because an analytics tool is an analytics tool.
(02:51):
How is it going to be more or less human-centric?
Yeah, so the principles-to-practice conversation is a big one in our industry.
And the way we embody that is we went to a fair degree of granularity to define what we mean by certain principles.
(03:13):
So I gave you a description of human centricity as an example.
We have descriptions for all of our others as well.
And so what's important in that is that our principles might sound like some other principles that you'd find in other organizations, but culture shapes the language all the time, right?
And so what we mean when we say transparency has to be different from what another organization might mean by transparency because of the nature of the kinds of businesses
(03:43):
that we're in, right?
That said, what we attempt to do in our products, in our business processes, and in our people
is then to have an expectation for how those principles show up in each of those areas.
So I'll give you an example.
When we talk about accountability, we're really talking about being able to proactively identify and mitigate against adverse impacts.
(04:12):
That's how we define accountability.
Well, we recently released model cards with the idea of providing information about AI models
that puts an individual in the position to be able to proactively discern first what's going on with the model, but then proactively discern what the best next steps are that
(04:35):
they should take on the basis of the information provided on the model card.
So a model card provides you information about the accuracy of the model, the fairness of the model for the particular impacted individuals that the model might be built for.
A concept called model drift gives you a sense for whether the model is actually adhering to the expectations that were previously established.
(05:01):
So, you know, with that information, a data scientist or a member of the board can say, all right, this thing is performing as expected or not, and therefore, I must take a certain set of actions, right? I can report.
I can do some auditing, whatever.
(05:21):
And so that's just a proactive means.
So that, for us, is bringing that principle of accountability to life.
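To make that concrete, here is a minimal sketch of the kind of record a model card boils down to and the proactive check it enables. The class, the field names, the example model, and the 0.2 drift threshold are illustrative assumptions, not the actual SAS model card format:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Illustrative model card record; fields are assumptions, not the SAS schema."""
    model_name: str
    intended_use: str
    accuracy: float               # e.g., accuracy on a holdout set at validation time
    fairness_gap: float           # e.g., largest metric gap across impacted groups
    drift_score: float            # e.g., a population stability index (PSI)
    drift_threshold: float = 0.2  # a common rule-of-thumb PSI alert level

    def needs_review(self) -> bool:
        # Proactive check: flag the model once it stops adhering to the
        # expectations that were established when it was deployed.
        return self.drift_score > self.drift_threshold

card = ModelCard(
    model_name="loan-default-v3",  # hypothetical model
    intended_use="consumer credit risk scoring",
    accuracy=0.87,
    fairness_gap=0.04,
    drift_score=0.31,
)

if card.needs_review():
    # A data scientist or a board member can now act: report, audit, retrain.
    print(f"{card.model_name}: drift {card.drift_score:.2f} exceeds "
          f"threshold {card.drift_threshold:.2f}")
```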
Model cards are something that a number of organizations are starting to use.
How much more do you think those need to be standardized so that people can compare across and ensure that they have the right information?
(05:43):
Yeah, so we compare model cards to nutrition labels on food, right?
Nutrition labels on food are standardized to a fair degree, right?
There are certain things that in the US the FDA requires, and in other jurisdictions they require, you know, basics: sugar intake, sodium.
(06:04):
Now, you can certainly argue that there are other aspects of nutrition that could or should be on a nutrition label, and the same will be true for a model card.
We've done, I think, a really good job at putting key aspects of a model for the different stakeholders.
(06:24):
I guess the one distinction, and where the analogy starts to fall apart, Kevin, is with a nutrition label, all of us are human, and for the most part, the vitamins and minerals and nutrients of all sorts that we're taking in have a typical impact on humans.
Whereas with AI models, models can be built for and interpreted by a variety of stakeholders for a variety of different reasons.
(06:52):
The model itself is relatively amoral, but if you build a model that has applicability in healthcare, that is fundamentally different than using that same model in, I don't know,
transportation.
Also, within organizations, you have a variety of different stakeholders; a model card that speaks to the needs of a CFO or CEO is different than the needs of, say, a data scientist
(07:24):
or data analyst.
And so what we tried to do is build a card that was sufficiently useful and sufficiently descriptive for a range of stakeholders.
I think we did a pretty good job.
Now, I'm sure there will be some on the far end of the technical capacities who will argue and say, hey, I need X, Y, Z.
(07:48):
Okay, great.
Well, we've got the ability to go there, but maybe not on a model card.
And then there may be others on another end of the spectrum.
And so I think you get my point.
We're trying to lock in on the vast majority of consumers of a model and how they may interpret the information associated with it.
(08:12):
You've talked about a few different elements of a trustworthy or responsible AI or AI governance initiative.
What was for you the biggest challenge in terms of rolling this out and getting buy-in?
Well, the biggest challenge to be quite honest with you is just getting the people.
(08:34):
In our space, talent comes at a premium, and just getting the people necessary to kind of get heads down.
But beyond that, there was the design process to serve the needs of that range of stakeholders that I just described.
(08:56):
In our technology, in our platform, there are a lot of pieces of information that we could have grabbed to display on a model card.
But we had to make some very deliberate choices about what was useful and what wasn't.
What was going to be more difficult to provide versus less.
(09:23):
what would perhaps be considered proprietary for our customers, because you never want to put a customer in a position where they're unintentionally displaying proprietary
information.
And going through that level of analysis was probably the most difficult of it all.
But then coding it all is the next step.
(09:45):
I'm not the one doing the coding, so it's going to be easy for me to say that was the easy part.
But I do know that the team spent a fair amount of time on kind of that design upfront, so that we had a really robust way of reporting out that, again, we thought was meaningful, I should say, and digestible.
(10:11):
SAS is a company that's been around for almost 50 years.
That's an incredibly long time in tech and in this area.
Does that make it easier, you think, to implement these kinds of responsible AI initiatives than for a startup that's just getting off the ground?
I don't know that I would say easy or hard.
I don't know that that's the way I would capture it.
(10:35):
Certainly we have a legacy that we're proud of.
We've learned a lot over the course of 48 years.
You learn over the course of time, assuming you maintain the institutional knowledge, you learn a lot about what works, what doesn't work,
(10:55):
what a meaningful cycle is like, you know, we're in this hype cycle of AI versus, you know, say, the hype cycle of crypto, right?
Like there's certain cycles that you move on and some that you don't.
So I think that kind of wisdom is very useful when it comes to creating things like model cards, because now we're able to apply some of the learnings over the years of, you know,
(11:24):
how a CIO actually responds to bits of information that get provided as an example.
But if we were a younger company, a startup, in that mode, you'd kind of throw things against the wall and see what happens.
And you have the ability to be agile and respond to the needs of the market in a much more rapid way than we might.
(11:54):
We're an organization that is likely not going to be the first to market with a lot of these capabilities, largely because we have a significant customer base, right?
We have people who rely on what we do, many of which are in human-impacting industries that are highly regulated, like financial services, like healthcare, like government.
(12:20):
And so we have to be a lot more
deliberate and intentional when we make major adjustments.
We have to certainly take into account what their regulatory needs are and try to anticipate future needs.
And so that necessitates a level of analysis and design thinking, as I described earlier, that, you know, if we were a startup, we probably wouldn't spend the time doing some of
(12:53):
those things.
What are your conversations like with customers talking about trustworthy AI?
Most of them right now are in the formulation stages of things like governance boards andoversight committees and the like.
I think, at least for those that I speak with, they've all come to some level of recognition that they will be impacted from a regulatory perspective.
(13:22):
Many of these organizations are global, and so they do business in places like the EU.
And so now that we've got the AI Act, they have to respond to that in kind.
Many are anticipating it, as well they should, that there's regulatory action coming out of jurisdictions literally around the world.
But beyond the regulatory, one of the things that I advocate, and for the most part get a warm reception to this idea, is that it's just good business practice if you wanna
(13:52):
demonstrate yourself as a responsible organization.
Something like, I think the number is 69% of consumers, are more willing to not just trust a brand, but then be prepared to buy from that brand at a premium if they perceive that brand to be ethical
(14:16):
with its AI.
And so it's not just about morality.
It's about market relevance.
If we appeal to that need in the marketplace through just good, solid business practice, I think we'll all be the better for it.
(14:37):
And how has the explosion of interest in generative AI impacted your efforts in this area?
Well, again, I think from a market relevance perspective, it's, you know, us being one of the, I hesitate to call us leaders.
I'll let others define that for us.
(15:00):
But, you know, we certainly have been expressing our voice out in the marketplace and have gotten a very warm reception to a lot of the work that we're doing.
And so that's been very, it's been, it's had a buoying effect.
I guess, is the way I would describe it.
It also importantly helps in a couple of other ways that I think tend to get overlooked.
(15:25):
One is employee retention, right?
Because employees also want to work for companies that are perceived as ethical in their approaches to AI.
And so that piece is very important for us, particularly in a talent-scarce environment.
But then the corollary to that is it helps with recruitment.
(15:48):
So, you know, when I put out a job post, I mean, I get hundreds of responses to, you know, different job openings.
And so, you know, when I talk to some of my peers, that may not always be the case.
And so
I'm proud of that, to be quite honest with you.
And I think it's largely a consequence of the work that we've been doing and the way that we've been displaying it.
(16:15):
What do you think it takes for a company to be convincing that they're actually walking the walk on AI ethics?
Because it's very easy to say, but what is it that you think has been effective in that way for you?
So the message matters.
The message has to be clear.
(16:36):
It has to come across as authentic, as does the messenger, right?
And so it's really important that if we say we're going to do something, that we actuallyprove it, right?
And so integrity in the marketplace is huge.
Look, people are smart.
(16:58):
And, you know, if you say you care about how AI gets used and that you want to make sure that it doesn't hurt people,
but then the next week you, you know, announce a product that may be inconsistent with your prior statements.
(17:22):
That goes a long way toward creating distrust.
And so we're very, again, very intentional, very deliberate about what we say and how we say it, who gets to say it, and then proving what we've said.
And so we've had a decent track record at that.
Of course, I'll record this podcast with you and then something weird will happen.
(17:48):
But important in that is, it's not a matter of if there's a misstep.
Corporations are large, there are thousands of people, of course there's gonna be some inconsistencies, but you've gotta be able to earn the right for people to hear you again.
(18:09):
And again, that matters not just with the message, but the messengers.
And so that authenticity that I go back to, people being able to relate and have some believability when they talk in forums like this, and from stages, and in print, and you name it.
(18:33):
We always wanna come across as authentic and approachable and interested again in howothers thrive as a consequence of our work.
Our founder and CEO, Dr. Goodnight, says all the time, we are people here using our resources to help other people.
(18:54):
That's the name of the game for us.
And that permeates our culture.
And again, it's something that I'm very proud of.
Let me ask specifically about inclusivity and bias.
That's an area where a number of companies have had serious missteps.
But as you know, there's fundamentally bias that's baked into the data.
(19:14):
So especially if we're talking about something like generative AI, it's not that you can just flip an off switch and make it go away.
So what have you found is the best way to deal with that challenge?
Yeah, I think the conversation about bias has to broaden.
I think there is a negative connotation that comes when we use the term bias, particularly in our space.
(19:37):
And one of the things that I try to do in this, and I'm happy we're on a podcast so I can expand a bit as opposed to in a newspaper article or something.
So let me have permission to expand a bit.
Human beings use bias for the purpose of cognitive processing, right?
(19:58):
We truncate in order to be more efficient with our brain energy, literally, right?
That's the physiology of it all.
So bias in and of itself is much more of a human function than it is a negative thing or apositive thing.
(20:19):
Now, of course, there are
biases that can have negative impacts.
And that's largely the kinds of conversation or kinds of things that we're talking about in this conversation.
To your point, data is always biased.
(20:42):
It either is biased based on timing, right?
So you might have recency bias
or bias based on age, you might have geographic biases of some sort, right?
Maybe I've gotten data from only a small swath of folks in the Midwest versus the West Coast, what have you.
(21:06):
Your bias may be on the basis of who, the type of people, right?
Did they all look alike?
Did they all sound alike?
Did they all go to the same schools, and all sorts of things?
I like to say that data is really just our recorded existence.
So everywhere we show up, right, we're bringing our bias along with us.
(21:27):
And therefore, when the data is captured about us, the bias shows up in the data, as you said, right?
So then you get into the model development process, be it generative AI or any sort of AI,and you've got to use data to train the model.
(21:49):
And so the issue isn't whether bias exists.
The issue is whether the bias is having a negative effect on the model.
And if the users of the models are making decisions that are, as a consequence, biased negatively, right?
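That distinction, bias existing versus bias doing harm, can be tested for directly. Here is a minimal sketch of one common check, comparing favorable-outcome rates across groups; the column names, the toy data, and the four-fifths threshold are illustrative assumptions, not a SAS procedure:

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
    # Share of favorable decisions (coded 1) within each group.
    return df.groupby(group_col)[decision_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    # Ratio of the lowest group's rate to the highest; 1.0 means parity.
    return float(rates.min() / rates.max())

# Hypothetical scored decisions for two groups.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

rates = selection_rates(decisions, "group", "approved")
ratio = disparate_impact_ratio(rates)
print(rates.to_dict(), f"ratio={ratio:.2f}")

if ratio < 0.8:  # the four-fifths rule of thumb
    print("Potential negative effect on a group; investigate before acting on the model.")
```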
(22:09):
So, you know, the hot button is race.
And so if we use
race as a piece of data, which some could argue that we should, some would argue that we shouldn't, but race is there.
And whether you use race as a variable or not, there's certainly proxies that can be usedto determine race.
(22:32):
So we know we can determine race based on many sets of data about people.
Now, if you want to use
that data to make a decision as to who you should preemptively send law enforcement to go and chat with, that would be a negative implication of the use of that data.
(22:59):
But if you want to use that same data set to decide who you should preemptively provide some assistive therapy to, because that therapy is proving itself to be more efficacious for a particular race of people, then you should do that, right?
That's for the benefit.
(23:20):
And so I think we have to be really nuanced and honest about how we use data, why we use data, and for what purpose that data will be utilized.
So look, these are grown-up conversations, right?
And if we
(23:42):
jump into them with this idea that all data is biased or all bias is bad, that's just the wrong starting point.
We have to understand that what we're doing is human-impacting, and we have to make a decision.
And if we want to, and this is why that definition becomes important, if we want to advance human agency, equity and well-being, right, to help people thrive,
(24:11):
then our orientation, the use of that data, and the use of the models that result from it are entirely different, potentially, than if we were simply using that data to pursue some
profitable means.
No, it's a really interesting way of putting it.
(24:33):
In the world that we live in today, though, with so much sensitivity, is there hope that organizations can have those grown-up conversations?
Always hope.
There's always hope.
You know, I think that literacy is hugely important in this space, and not just technical literacy, but socio-technical literacy.
(25:01):
You know, in our nation, we've been having a really difficult time with history lately.
And, you know, my hope is that
we will figure out a way through that.
You know, catch me on the wrong day and I can be really cynical about it.
(25:24):
But I maintain hope that we'll get to a point at which we can, you know, have smart, productive, and well-intended conversations about helping all people with
(25:45):
whatever it is that we have to provide.
In this case, we're obviously talking about AI.
But it's a tough set of conversations.
I will be very frank with you, Kevin.
But there are conversations that need to be had.
We've certainly had those conversations at SAS.
We continue to have those conversations.
(26:08):
And we've put procedures in place to allow us the freedom to have those discussions in ways that we think are useful and beneficial and consistent with our values, so that we can show up in the
marketplace and compete on those values.
Can't speak for everyone else.
(26:33):
You serve on the National AI Advisory Committee for the White House.
What are some of the things that that organization is working on?
Yeah, so we are at this point in time focused on providing recommendations to the President and the White House on all matters of AI, where it intersects with society and
(26:54):
innovation and economics and the way the government works and so on.
We to date have offered about 18 or so specific recommendations, a collection of findingsand objectives and
what have you.
The executive order from last October actually reflected some of the recommendations that we extended.
(27:22):
And so right now, the focus is on a lot of gathering of information from the public so that we can offer a new set of recommendations.
In fact, this week we'll be having a public session related to workforce.
(27:43):
Over the last few weeks, we've had similar sessions related to law enforcement,international cooperation, and a few others.
And so what we do after those exercises are done is we do a lot of kind of data gathering, you know, fact gathering and consulting of experts.
(28:06):
At that point in time, we'll start to put our heads together as to what the incremental additions to the previous recommendations ought to be.
Do you think that government will have the capacity that it needs?
No, but I think it's pretty well documented that the government itself has said that itdoesn't have full capacity.
(28:32):
Now, we should clarify what that means.
You know, generally it's saying it doesn't have the expertise to appreciate some of the finer points of the technology.
And so that's why it relies on what we might describe as a public-private partnership.
(28:53):
That's why it relies on advisory committees like ours that are comprised of a number of folks from industry, as well as civil society and academics and what have you.
But our government is not unlike any other government around the world, right?
I just came back from Australia talking to government officials, Japan talking to government officials, and they all say the same thing.
(29:20):
And it kind of stands to reason, right?
I mean, these people are in the business of government.
You know, bureaucrats, regulators, and I don't mean to use those terms in any disparaging way, but that's what they do.
They are not, you know, AI developers and scientists.
(29:45):
And so they have to rely on people who do those sorts of things.
They also don't have, and we would probably argue they should lower our taxes if they did, they don't have the money to pay for a lot of those folks en masse.
And so it only stands to reason that they have to go where the expertise exists so that they can adequately understand, so that they can then adequately put the necessary
(30:13):
guardrails in place to make sure that the markets that they're, you know, here to help to serve are operating in ways that are beneficial for, in our case, the American public.
Let me just ask you in closing, what gives you the greatest hope that AI development will proceed in a healthy way?
(30:36):
Well, you know, I'm a bit of a fan of history, and, you know, when we look at the history of technology development, Kevin, by and large, technology has been a net good for humankind.
Fluoride toothpaste, at one point in time, was an advanced technology.
(31:03):
Indoor bathrooms and refrigeration and vehicles and electricity and the internet, and you go on and on.
And in all of those cases, there was a moment in time where people doubted it, where people rejected it and pushed against it.
(31:25):
Of course, there were always the advocates who wanted to go faster, go further.
But there's always a transition time.
I mean, I can remember when the internet was coming on board and this thing called the World Wide Web was taking everybody by storm.
And you might remember this.
(31:48):
Will we ever be able to transact online?
Like, is it secure?
Yeah, who would put their credit card number into a computer?
What kind of idiot, right?
And look at it now, you know, we wouldn't have Uber, you know, we certainly wouldn't have Amazon to the degree that it exists today
(32:12):
if that weren't now true.
And so I think AI, which, by the way, is a relatively old technology, as you know, it's been around for years.
Of course, the generative, the way we use generative AI, is somewhat novel, but the underlying tech is well proven.
(32:33):
Neural networks have been around for a long, long time.
So I think this AI transition that we're in will pass, just like any other technical development has passed.
What's critical in this moment in time, however, is that because of its ubiquitous nature,
(32:55):
I believe we've got an opportunity to reimagine structures.
And so if decisions that we've made in the past are a function of whatever our thinking was in the past, and eventualize in the structures of our past and present, we're at a moment
(33:19):
in time where we can say, all right, we're going to start automating decisions through this thing called AI.
And we're going to eventually create some new structures.
And so do we want those new structures to be greatly beneficial for all, or be greatly beneficial for a few?
(33:41):
And I think the consternation that we feel and the concern that we see and feel out in the marketplace, out in the world really, is really getting at that fundamental question.
Is this being built for me with my needs in mind or not?
(34:03):
And so I think we have an opportunity in this moment in time to say, yes, it is beingbuilt with your needs in mind.
Yes, it will help you stay secure.
Yes, it will help you thrive into the future.
But we've got to be able to say that in an authentic way.
(34:24):
We've got to mean it, and then we've got to prove it.
And that's all of us, those of us who are developing, deploying, governing, using, you name it.
And so we have a unique moment in time, and this moment will pass.
So I just hope that we take the moment and we use it to the greatest good.
(34:48):
Reggie, thanks so much for taking this moment to have this conversation with us.
I do appreciate you.
Thank you.