
July 11, 2024 34 mins

Join Kevin and Nuala as they discuss Walmart's approach to AI governance, emphasizing the application of existing corporate principles to new technologies. Nuala explains the Walmart Responsible AI Pledge, its collaborative creation process, and the importance of continuous monitoring to ensure AI tools align with corporate values. She describes Walmart's commitment to responsible AI and customer centricity, captured in the mantra "Inform, Educate, Entertain," and offers examples like the "Ask Sam" tool that aids associates. Kevin and Nuala address the complexities of AI implementation, including bias, accuracy, and trust, and the challenges of standardizing AI frameworks. They conclude with reflections on the need for humility and agility in the evolving AI landscape, emphasizing the ongoing responsibility of technology providers to ensure positive impacts.

Nuala O’Connor is the SVP and chief counsel, digital citizenship, at Walmart. Nuala leads the company’s Digital Citizenship organization, which advances the ethical use of data and responsible use of technology. Before joining Walmart, Nuala served as president and CEO of the Center for Democracy and Technology. In the private sector, Nuala has served in a variety of privacy leadership and legal counsel roles at Amazon, GE and DoubleClick. In the public sector, Nuala served as the first chief privacy officer at the U.S. Department of Homeland Security. She also served as deputy director of the Office of Policy and Strategic Planning, and later as chief counsel for technology at the U.S. Department of Commerce. Nuala holds a B.A. from Princeton University, an M.Ed. from Harvard University and a J.D. from Georgetown University Law Center. 

 

Nuala O'Connor to Join Walmart in New Digital Citizenship Role

Walmart launches its own voice assistant, ‘Ask Sam,’ initially for employee use

Our Responsible AI Pledge: Setting the Bar for Ethical AI


Want to learn more?
Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI’s power while addressing its risks.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Nuala, welcome.
It's so great to talk to you.
It's so great to be here, Kevin.
Thanks for having me.
You've been involved for a long time at a number of different organizations, public sector, private sector, nonprofit, on issues around internet policy, privacy, and so forth.
How is it different, or is it different in your perspective, what's happening now with

(00:24):
AI?
You know, Kevin, I think there are a lot of conversations right now about whether this is truly different and kind of the really monumental change in how we live

(00:46):
our lives.
I'm a little bit of a skeptic about that, in that, I guess perhaps because you and I have been working in this area for so long, we've seen the training of large language models and automated decision-making and just software that runs really fast in the background.
Why I think this feels really different to people is that, first of all, this has

(01:09):
come into the wild.
I mean, it's come into public use very quickly, and it seems to be alive.
People seem to feel like this is making decisions about them or it's talking back to them.
And although we know cognitively it's not alive, having it converse with you, having an inanimate object suddenly talk your language feels different than previous

(01:34):
iterations of technology.
Why I think it's important that we scrutinize it is, and I've said this before, it's not only the data that we're putting into it, it's the decisions it's making about us in the world.
What about the development of AI governance and policy?
How have you seen the regulatory developments in this area around AI

(01:55):
relative to what we saw a decade ago or two decades ago with the internet and privacy?
I think because it's moved so quickly from not known to known in the public, the policymaking world is trying to catch up as usual to technology and doing it quickly.
What I've also seen is that good companies, I think, are ready to apply

(02:17):
governance principles to new technologies, because this is not the first one.
We have the internet, we have social media, we have lots of policies around other uses of technologies by our own employees, what we call our associates.
And so we really, we in particular, my company, we're ready.
We had already thought about and started to build and started to benchmark with

(02:41):
other companies: what does good governance look like for these tools?
And of course, you know, to just say AI, it's almost like saying sidewalks, right?
It encompasses so much, right?
But to be general, kind of looking at emerging technologies, biometrics, AI, and a whole bunch of other de novo uses of technology in the real world, proximate to

(03:03):
human beings, we did set about looking at what the tech companies were offering, looking at what other companies were recommending, looking at what trade associations and think tanks were saying about governance and the possibilities of governance.
I also sometimes like to call it trying to govern the ungovernable, right?
Because it's so opaque in so many ways, but it's also, we shouldn't engage in that

(03:26):
kind of tech exceptionalism that we are so special and so new and so different that we shouldn't play by some reasonable rules, or that it's just too hard to figure out.
If it's too hard to figure out, we shouldn't be using it, right?
So what we did was look at ordinary processes that companies put into place, what the company's principles are, and then how we, as people, apply those.

(03:50):
And it's usually our engineering, our data scientists, our tech folks who want to deploy new tools that will affect human beings.
We pull those out of the work streams and say, if this tool is going to be making consequential decisions about a human being, their rights, their knowledge, their abilities to move forward, most particularly

(04:12):
their employment and advancement, that tool is going to require additional close scrutiny by technologists and by lawyers and policy people before it goes live.
And then at waypoints, you know, six months, a year, and then perhaps annually after that, because as you know, these tools drift, they change, that, you know,

(04:33):
it is a constant governance model.
And we're still building towards automation in our governance and monitoring of tools of all kinds.
But again, I don't think just because it's new and funky and cool, it should escape some kind of scrutiny.
In fact, maybe more so because of that.
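To make the waypoint idea concrete, here is a minimal sketch of that kind of drift check, assuming a hypothetical frozen probe set and a score_tool callable; it illustrates the pattern described, not Walmart's actual tooling.

```python
# A minimal sketch, all names hypothetical: re-score a probe set frozen
# at launch and flag drift against the launch-time baseline.

PROBE_SET = [
    {"input": "case_001", "expected": "approve"},
    {"input": "case_002", "expected": "escalate"},
    # ... a fixed, representative set captured when the tool went live
]

def agreement_rate(score_tool) -> float:
    """Fraction of probe cases where the tool still matches its launch-time output."""
    matches = sum(1 for case in PROBE_SET if score_tool(case["input"]) == case["expected"])
    return matches / len(PROBE_SET)

def waypoint_review(score_tool, drift_threshold: float = 0.05) -> bool:
    """True if the tool passes the waypoint; False means it drifted and needs human re-review."""
    rate = agreement_rate(score_tool)
    print(f"Agreement with launch baseline: {rate:.1%}")
    return (1.0 - rate) <= drift_threshold
```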
So you issued the Walmart Responsible AI Pledge in October of 2023.

(04:57):
And first of all, I'm curious, it's interesting that you call it a pledge.
It's not just a set of principles.
Tell me a little bit about the process of developing that.
And like all of our kind of internal policies, statements, pledges, it's really a multifunctional endeavor.
I don't think the lawyers should sit in a room and write things.

(05:18):
I don't think the technologists should sit in a room and write things.
The comms people shouldn't be working by themselves.
It was really a kind of collaborative approach: a number of our teams came to my team and said, we'd like to say something publicly about the guardrails and the guidelines and really the values that we live by as a company.
And I said, really, who's going to listen to that?

(05:40):
Again, somewhat the skeptic.
But in all seriousness, the pledge is really to our customers and associates, people who we are engaging with, to be both transparent and accountable in our uses.
And that really reflects one of the things I say to my team all the time.
When we are introducing new technologies into the environment of human beings,

(06:03):
whether it's online or in the stores, in the real world, in offices and in warehouses, anywhere our associates work, we have a duty to explain, because in many cases it may be the first time someone's come into contact with this.
And so I always say to our team: inform, educate, entertain.
And let me tell you what I mean by that.

(06:24):
Inform: obviously, notice that, you know, something is happening in the environment that is different.
Particularly in the stores, which were, you know, 20 years ago, you could walk in, you could walk out, you could pay cash, you would be unobserved by any system in the world.
That's changing, obviously, with your cell phones, with electronic cash registers, with cameras in the stores for security and safety.

(06:44):
So I think we have a duty to inform.
We are meeting people where they are in their comfort level with technology, in their educational level, in their language abilities.
And we have to speak to all of those people.
So that's inform.
Educate.
Tell them a little more.
Tell them a little more about what the technology does, how it works.
When I grew up in New York, there was a company called Syms.

(07:07):
I actually had a summer job there one summer, and their phrase was, "An educated consumer is our best customer."
I believe the same thing about technology and the responsible technology use of our company.
If they know the responsible choices we're making about these technologies, it will engender trust and mutual respect and hopefully really benefit the ongoing

(07:28):
customer-store relationship.
And the last part is entertain.
And honestly, this is fun.
You and I were talking a little while ago; you know, in all of the years we've been working, I get to learn something new almost every day.
And, you know, certainly every year there's a new, you know, or every few years there's some monumental change in the technology.
That's a joy.

(07:48):
It's really a joy to be a part of that and to learn and to be surrounded by people who are just so, so smart and dedicated and both want to do the right thing and also want to use the latest technology to do whatever it is their job requires.
And it should be fun.
A long time ago, we had these sweepers in some of the stores.

(08:09):
And if you go into Sam's Club, you might still see them.
And I forget the brand name, but they were about the height of a, you know, I don't know, 10- or 12-year-old boy; I'm thinking about my son, probably, you know, a few years ago.
But almost eye level.
And they had cameras.
And the purpose of the camera was actually to scan the shelves to say what's out of

(08:30):
stock so that
people in the back storeroom would go, okay, on aisle seven, we need to get more cereal.
But a reasonable person would think, was that camera looking at me?
What's that camera?
What's it recording?
What's it gonna do with that information?
So I said to the guys, first of all, I thought we should have put vests on them and anthropomorphized them, and also have disclosures right there on the

(08:54):
machine that says, I'm not looking at you.
In fact, we had blurred out any facial features of any human being, to just be looking at boxes and QR codes and the like.
But it can be fun.
I have no problem with anthropomorphizing the technology if it helps people understand and feel comfortable with it.
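The blurring step Nuala mentions is a standard computer-vision technique. A minimal sketch with OpenCV, assuming a hypothetical input frame and standing in for whatever the vendor's real pipeline does:

```python
import cv2

# Illustrative only: blur detected faces before a shelf image is analyzed,
# so downstream inventory scanning sees boxes and QR codes, not people.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_faces(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        region = image[y:y + h, x:x + w]
        image[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return image

frame = cv2.imread("shelf_scan.jpg")  # hypothetical input frame
cv2.imwrite("shelf_scan_blurred.jpg", blur_faces(frame))
```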

(10:18):
Yeah, it's interesting that one of the values that is in that pledge is customer centricity, which is not typically what people see in terms of AI.
You've got things like fairness and so forth.
But why is customer centricity a value for Walmart in terms of responsible AI?

(10:39):
I think so many times companies roll out technologies that they say are for the customer, but are really for the company or really have some other purpose.
And in this case, when we say customer centricity, it really goes to what I was saying before.
Does the customer understand?
Do they feel comfortable with it?
Is it a use case that enhances their experience?

(11:00):
Those are, I think, some of the values I think about when I think about customer centricity.
But ultimately, it's about: does this deployment serve the humans to which it is proximate?
And if it doesn't, why are we doing it?
And if those customers or associates don't understand it, then we have failed in our duty of care, our duty to inform, educate, and explain.

(11:25):
So I think it really goes to: we have recognized that people will be impacted even just having it in their environment, having it nearby, and that there may be different levels of comfort with it, and that we have a responsibility to those, to all of our customers, regardless of their comfort.
What else do you think is distinctive about the way you approach these AI

(11:50):
issues, given that Walmart is a retailer that touches literally hundreds of millions of customers directly, as opposed to enterprise service providers, for example?
Well, again, it's because it's a frontline human-machine interface, right?
We get to live in the real world and physical world as well as the internet, and see how people feel comfortable with these tools.

(12:13):
It's funny that we were focused on customers, because I would say our most effective rollouts initially were in tools that support our associates in what they're doing.
You know, I give the example, and I don't think it's a real one, maybe a real one now, of when I was doing a Sam's Club tour and they let me decorate cupcakes, which I

(12:33):
think is hilarious.
Now, anyone who's had a young child knows when you're ordering a birthday cake, you want to make sure you get the red one or the blue one or the right number of roses or the right number of decorations.
And so there are guidebooks that tell associates, this is how many roses you put on an eight by 11 sheet cake and this is what it should look like, for a new

(12:53):
associate or even for someone experienced who hasn't done this particular task before.
That means they run to the back room, they have to flip through the thing, they have to page through and say, you know, what is it, and then read and then decorate and then read.
Just one example of all the many thousands of things associates do in a day.
With AI, there is a potential, and I believe it's deployed already in some

(13:14):
places in the country, to simply say (and we have a tool called Ask Sam, we have another tool, we have tools named a number of things that are kind of in the Walmart legend and language), you know, Sam, how do I decorate an eight and a half by 11 sheet cake for a young, you know, for a boy, for a girl, for, you know, in a fire truck, whatever.
And that will save time, save effort.

(13:38):
They can say, you know, repeat that.
It will really be an efficiency builder for an associate.
And again, that's a very, I think, fun and upbeat example, but there are lots of, you know, paper-based or knowledge-worker examples as well that improve efficiency.
One of the things we have put online are a number of our employee manuals, our

(14:00):
benefits manuals, basic information.
When you've got millions of associates, many of them are going to be asking the same question, right?
So it saves time, makes sure the information is accurate, and every associate is getting the same information.
That's a good example of where regular old AI is a good enough tool.
You do not need generative AI, because you actually don't want it learning.
You don't want it learning and changing the content.

(14:21):
You want that content to be static and accurate and correct for everybody.
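The distinction is easy to see in code. A minimal sketch, with all content and names hypothetical: questions are matched against a static, vetted answer store and returned verbatim, so every associate gets the same approved text and nothing is generated or learned.

```python
# Hypothetical sketch: retrieval over static, vetted content instead of
# generative AI. Answers are returned verbatim and never rewritten.
APPROVED_ANSWERS = {
    ("pto", "vacation", "time off"): "Full-time associates accrue PTO as described in the handbook.",
    ("401k", "retirement"): "Enrollment opens after 90 days; see the benefits portal.",
}

def answer(question: str) -> str:
    q = question.lower()
    for keywords, text in APPROVED_ANSWERS.items():
        if any(k in q for k in keywords):
            return text  # exact approved text, identical for everyone
    return "No approved answer found; please contact People Services."

print(answer("How do I enroll in the 401k?"))
```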
That's, I think, one of the other points I would make: people have gotten really excited about generative AI.
And it's a fascinating tool.
And there's lots of good and bad about it.
But it's not necessarily the right tool for every job that you might think of.
And so fit for purpose, to me, is a really big question.

(14:43):
You know, and accuracy and truth of the information, to me, those are some of the biggest worries I have about generative AI.
How do you structure the work on responsible AI in your organization, or your organization relative to the rest of the company?
Thanks for asking.
So digital citizenship is, as we define it, the responsible use of technology and

(15:07):
data, period, full stop.
That's the mission.
We have now got the privilege of leading a team of several hundred people.
And that includes lawyers, technologists, data scientists, policy people.
It's basically half kind of legal and half compliance and operational.
And so we went about setting up a team.
I really think that the woman who came to lead that team, when we created it,

(15:30):
had one of the best jobs in the company, because her first year was doing all the research and the benchmarking and finding all the best practices and then standing up the team for us.
And again, the people who created the original policy went right up to the leadership of the company to say, these are the rules for rolling out an AI tool

(15:50):
that has an impact on a human being.
And then we created a kind of a gate in our rollout processes that says, to try to explain it without too much jargon: is this an AI tool?
Does it impact humans?
If so, you've got to be reviewed and answer some questions and engage with the

(16:11):
Digital Values Team.
That's the name of the team.
And again, that team, it is led right now by a data scientist, and they will kind of probe and scrutinize the algorithm itself, the purpose for the tool, who's rolling it out, who is the responsible kind of business owner of it, and then schedule, not just do the initial scrutiny, but schedule kind of follow-up

(16:33):
monitoring and oversight.
So, kind of all the tools of a traditional compliance program, but applied to this new technology.
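As a rough illustration of that triage logic, here is a minimal sketch; the names and review cadence are hypothetical, not Walmart's actual gate.

```python
from dataclasses import dataclass

@dataclass
class ToolProposal:
    name: str
    uses_ai: bool
    impacts_humans: bool  # e.g., hiring, advancement, rights
    business_owner: str

def rollout_gate(tool: ToolProposal) -> str:
    # AI tools with consequential human impact are held for human review,
    # plus scheduled follow-up reviews; everything else proceeds normally.
    if tool.uses_ai and tool.impacts_humans:
        return (f"HOLD: '{tool.name}' needs Digital Values Team review "
                f"(owner: {tool.business_owner}), then follow-ups at about "
                f"six months, a year, and annually thereafter.")
    return f"PASS: '{tool.name}' proceeds through the standard release process."

print(rollout_gate(ToolProposal("produce-ripeness-scanner", True, False, "produce-ops")))
print(rollout_gate(ToolProposal("shift-scheduling-model", True, True, "people-tech")))
```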
And so far, so good.
And we've worked with quite a few teams.
I don't have an exact number of how many tools they've scrutinized, but the team is busy and it's growing.
So I think the metric is, you know, as we do more in this area, they have more work

(16:55):
to do.
But the biggest challenge and the biggest opportunity, I think, is really one of education: of getting out and making sure that all of our technologists understand what's expected of them and how we expect them to think about the tools that they're deploying from the beginning.
So, you know, if we educate everybody, actually it will cut down on the work for

(17:16):
this team, but all to the good, because there's plenty of other work to do.
And I really want each of our associates to think of themselves as a responsible user of the technology and data.
This is always a challenge with compliance, though.
How do you convince them that this is valuable and it's not going to slow down what they're trying to do?

(17:37):
Such a good question.
And it's funny, because I won't say I bristle at those questions.
It's always been a perennial one, like, is privacy getting in the way of productivity or of profits or whatever.
We really do believe in customer trust and associate trust and putting people first.
So I really posit it as an issue of trust and responsibility, and that engenders

(18:03):
a better relationship with the company, and that we are in a competitive environment where this is a benefit.
It's considered a good thing, I think, by our customers.
We see more and more customers educated about technology and asking questions about how it's being used.
So I really don't think of it as antithetical, but rather as kind of enhancing our

(18:28):
relationship with our customers.
And we do try to be lean.
Obviously, we try to be really efficient, and not every AI tool needs that review.
The example I always use, and it is a real one, is: are the bananas on the conveyor belt ripe?
We've got AI that looks at fruit and vegetables and meat and things and says,

(18:48):
based on what it has learned over the last however many years, and what a good thing, right?
We want good-quality food to be eaten and safe and that sort of thing.
That we do not consider high-impact; it's impactful to humans, obviously, indirectly, but we're not making judgments about humans.
Those tools do not need to go through the Digital Values scrutiny.
But I think, frankly, in my experience, most people want to do the right thing.

(19:11):
They're not always sure how their job, you know, affects that right thing and the service to the customer and associate.
So it's a big company.
And so education and outreach across multiple languages, multiple parts of the world, multiple job descriptions, that to me is an exciting educational challenge.
Yeah, I mean, it sounds like you've got a screen that is similar to what many of the

(19:36):
governments are coming up with, in terms of there's high-risk AI, or however it's described.
This is what we really need to worry about.
Then there's all kinds of other uses of AI that maybe don't need the same scrutiny.
I hope people do look at it as a spectrum, that not everything deserves or merits or needs the same level of scrutiny, but some of it very much does.

(19:58):
And in the corporate world, the impactful decision-making that we would have would obviously be in hiring, advancement, people's rights and responsibilities as an employee.
In the customer world, it's actually been even more fun, because it's things like our AI chatbots and

(20:18):
putting together different fashion pieces on the website and saying, you bought a red sweater, maybe this blue skirt goes with it.
This is fun.
Obviously, we want to be careful.
We want to get it right.
But it is not a consequential decision, depending on where you're going that day, whether the skirt matches the sweater.
But again, of a different ilk than financial or career or other things.

(20:45):
How does the team stay up to date with the fast-changing nature of technology in this area?
That is such a great question.
And it is, I think, a constant part of everyone's job to be reading, to be engaged.
We are members of quite a few, not only professional trade associations in retail, in law, and that sort of thing.
I do find the IAPP, the International Association of Privacy Professionals, to

(21:09):
be increasingly aware.
And they just today put out a big report on AI governance.
And we're members of something called the Data and Trust Alliance, which was out front very early on.
We were one of the founding members, looking at cross-industry guidance.
So it's not just one area of industry, not just one size of institution, but how do

(21:30):
we make governance real for all companies?
And in fact, we, as well as many small and medium enterprises, sometimes are procuring things from third parties, from vendors.
And just like every other new technology, we see a flourishing of vendors coming in and saying, this is the key thing, this is the perfect tool for everything, solution for

(21:53):
everything you've ever needed.
And many people don't know how to scrutinize that, right?
And we've had salespeople in our office saying, we can't really disclose the algorithm and we can't really explain how it works.
And we're like, well, come back when you can, because that's not gonna be good enough for our customers or associates.
So rules around how to engage with vendors, and how to scrutinize those tools as well

(22:16):
as the ones you're building internally, I think are really important, and that's a role that DTA has played.
Do you have benchmarks or goals or a way to assess whether you're being successful in these efforts?
I think benchmarking, especially when it's in the policy world, is hard.
But certainly some of our benchmarking is how many tools have gone through, how many

(22:37):
secondary reviews we've done.
Customer complaints are also a metric, if things are making the wrong decision or the right decision.
And so that's what we're looking at right now.
But if you've got any ideas, I'm all ears.
Well, no, it's an interesting challenge, because on the one hand, there's value in

(23:00):
coordination and standardization.
And on the other hand, what makes sense for an organization at the scale of Walmart is not going to be the right thing for a startup or for a company that's in a completely different industry.
And so that's why I'm curious about how different organizations think about and see progress in those areas.
Yeah, that's such a good point.
I think there's some core values that probably cut across in terms of

(23:24):
transparency and kind of fit for purpose or, you know, scrutinizing high-risk tools more than others.
But we recognize also the place of privilege that I sit in, being able to stand up an entire team to think about and then go do and create a whole new compliance program around this.
And so, we're well aware.
And that's why DTA set about creating kind of products and workbooks and things for

(23:47):
small and medium enterprises that are kind of off-the-shelf guidance as well.
Are there any major gaps that you see, either in terms of what resources or tools are available to you, or in any other area?
I think we can always do more.
Again, back to the comment about governing the ungovernable, we can all do more to

(24:08):
really think about how to automate and really probe the tools in a consistent
fashion.
And like we were saying before, there's obviously a lot of concern around bias and about accuracy.
But how do you prove that it is doing the thing that it's supposed to do, and that

(24:28):
it's not doing the thing it's not supposed to do?
There's not a lot of standardization in that yet.
And so, standard-setting bodies, where are you?
I'm sure someone's working on a framework for us to all follow at some point.
And I know you and I were going to talk about legislation and government
interaction.

(24:49):
I get very concerned when I think government actors are saying, we have to regulate this one thing.
Again, as we were saying, AI is, you know, it's like the air or, you know, the sidewalks.
It's just a tool and a platform for doing something else.
And I'm usually the one saying, is there a law that prevents this already?
You know, is there something on the books that simply hasn't been enforced in this

(25:13):
area, or hasn't been applied to tools like this yet?
If it's really, really that new, you know, certainly it may need some level of regulation or legislation.
But I'm usually the skeptic saying, we may have rules about this already, about being unfair to people based on race

(25:34):
or gender.
And let's look at what's on the books already.
Yeah, I mean, that's a big challenge that we're seeing a lot, where I talk to lots of people who say, well, companies can't be trusted and their incentives are not aligned.
And we know this technology is so new.
So how do you convince people that the resources and the mechanisms are there to

(25:58):
address the problems that arise?
I certainly hear the same things, and I certainly know that big companies are just, whatever, big companies are made out of people, right?
And so, it's not been my experience, not here and not in other big and small places I've worked, that people want to do the wrong thing.
In fact, I think our incentives are entirely aligned, because if we lose the

(26:21):
trust of our customers, if we lose the trust of regulators in the countries in which we do business, then the consequences are much greater than just failing in one program or product.
It's, you know, an enterprise-wide impact.
So, again, I'm skeptical.
I understand it's good to have a healthy skepticism of large institutions in general.
And I respect that.

(26:42):
And that's part of why we came out with the AI Pledge: that is something that we feel is worth talking about, that we are committed to walking our values forward in the digital world, in this tool, in this setting, and in others.
And if people feel like we're not doing it, they should let me know, they should let our CEO know, and they do occasionally,

(27:03):
including some of our own associates, which I actually love.
I love it when people raise their hand and say, this doesn't feel quite right to me.
I got a letter once from a pharmacist in Michigan, and he said, I don't like that policy.
And it actually started a whole review, from the ground up, of how we were dealing with a particular issue.
So I think we try to be very responsive.

(27:24):
Something I've tried to talk about within the company is that, with all of the good governance we've put in on these tools and others, mistakes may still happen.
That's the really big challenge I find, especially for a company that prides itself on its reputation and doing things right.
It's not an environment where we'd like to fail fast and move on.

(27:45):
We really do kind of bet our reputation on a lot of things.
And what I try to encourage my team and others in leadership to realize is that, if a mistake happens, we're going to have to work hard and fast to fix it.
But I think people will give us grace if we are transparent about it.
If we say, this is what happened.

(28:05):
Here's what we're doing to fix this incident.
And here's what we're putting in place to have it not happen again.
So especially in kind of generative AI, when it moves and talks and it moves fast, we need to be ready to be both humble and nimble in that area.
Yeah, I mean, one of the challenges with generative AI is, as you said, it touches so

(28:28):
many people that now it's not just the data scientists who are building things.
What does trust really mean to you in that context?
I think it's a really important and valuable point that this is all about creating and promoting trust.
But what does that actually boil down to?
On the website, it means we've correctly described a product.

(28:52):
Say you've got a child who's allergic to peanut butter; you better be sure that the product you've bought does not have peanut butter in it.
That's really important to us, and that was important to us before there was generative AI.
But now, if you have generative AI creating product descriptions, you've got to make sure they're right.
And as we know, generative AI is not always right.
So in some tools we're just about to put out in the...

(29:17):
We used outside consultants to red-team it, to probe it, to try to crash it, and things still do go wrong, but I think putting in the effort before it goes live to really try to stress-test it, and then to stay, again, really vigilant about how the AI might change, is essential.
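One concrete way to catch the peanut-butter failure described above, sketched here with hypothetical names, is a post-generation check that validates generated copy against the product's structured attribute data before publishing.

```python
# Hypothetical sketch: block AI-generated product copy that contradicts
# safety-critical structured data such as declared allergens.
ALLERGEN_TERMS = {"peanut": "peanuts", "tree nut": "tree nuts", "dairy": "milk"}

def copy_is_safe(generated_copy: str, declared_allergens: set) -> bool:
    text = generated_copy.lower()
    for phrase, allergen in ALLERGEN_TERMS.items():
        claims_free = f"{phrase}-free" in text or f"no {phrase}" in text
        if claims_free and allergen in declared_allergens:
            return False  # copy claims allergen-free but the data disagrees
    return True

# A description claiming "peanut-free" for a product whose ingredient
# data lists peanuts is blocked and routed to human review.
print(copy_is_safe("Delicious peanut-free granola bars!", {"peanuts"}))  # False
```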

(29:39):
I feel, again, a deep sense of responsibility, not just because it's the catchphrase of our team, but I think all kinds of purveyors of technology should be thinking about the impact it's having on the world and on the people it comes into contact with.
And going forward, are there either developments that you're looking toward, or anything that you see coming in the next few years, that are going to be

(30:01):
particularly important in this area?
Well, I can tell you the things I'm the most worried about; unfortunately, that's not the way to start to answer that question.
But the things that I worry about are the appearance of accuracy and truth when it is not there.
It's too cute to say fake news, but just simply the authoritative answers that

(30:25):
often are provided by these tools that are simply incorrect.
Do all of our citizens, in this country or elsewhere, have the ability to be good consumers of this kind of media? Media savvy, or, you know, digital citizenship in the true sense: can we consume and analyze

(30:48):
information for its content and for its provenance?
And do we have the tools to help us do that?
I saw a great study years ago about kind of government-issued propaganda, and Americans were more likely to click on something that was, it kind of seemed to

(31:10):
be true but was a little bit off, than many parts of Eastern Europe.
And I thought, wow, that kind of shocks me.
You know, we have a free press and we have all this stuff.
And it was because they had lived with propaganda, perhaps in more recent times than we had.
And I thought, well, how do we get to, you know, a place where we have really empowered

(31:30):
internet users of all ages, shapes, sizes, everything, to scrutinize the information, or perhaps have penalties for people who knowingly put out things that are incorrect.
That's, to me, one of the hardest questions for the internet community, for the world, and one of the consequences of generative AI that worries me the most.

(31:54):
The other one is our kids using it to write term papers.
But that's me.
The exciting thing to me is always not just the general utility for human beings; I have a son who's on the spectrum, we have lots of associates who are disabled in some way, and we have many customers too.
We've just started, you may have seen, quiet hours in the stores, where every

(32:17):
morning for two hours, everything sound-wise and the lights are lowered.
And it's been a real joy to watch that evolve and people really get excited about it.
I always call it kind of the ancillary effects as well.
People of all kinds who may not fit in the traditional kind of disability community

(32:39):
have said, some people just like it quiet.
My husband says, I would much rather shop when it's quiet.
And whatever.
And so the thing I'm the most excited about, not just about AI and Gen AI, but technology, is the ability to assist humans who are physically, cognitively, or otherwise diverse or challenged in some way.

(33:01):
And Gen AI, especially when voice-enabled, is just a boon for many, many people.
And I'm so excited about that.
What that means for our workforce, enabling more people to join our workforce, generally join the workforce, and enabling customers to find the information they want and the goods and services they want.
I think that's one of the best and highest uses of technology.

(33:25):
No question.
No, it's all tremendously exciting, and it's sort of the paradox in and of this technology that what makes it exciting is also what in some ways makes it scary.
I mean, you talked about sort of two sides of citizenship, in both the corporate and the public governmental context, where this technology can be so great at personalizing, but that can also be abused.

(33:46):
So true, so true.
And transparency is a necessary but not sufficient answer.
Absolutely.
Nuala, really great to talk to you.
Thanks so much for sharing your experiences and what you're doing.
Delighted to be here.
Thanks for having me.