Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:30):
Welcome to What's the BUZZ?, where leaders share how they have turned hype into outcomes. Today we'll talk about how to teach your AI agents ethical behavior, and who better to talk about it than someone who's actively working on AI ethics:
Rebecca Bultsma.
Hey, Rebecca.
Thank you so much for joining.
Rebecca Bultsma (00:47):
It's my
pleasure.
Thank you for having me.
Andreas Welsch (00:49):
Wonderful.
Why don't you tell our audience a little bit about yourself, who you are and what you do.
Rebecca Bultsma (00:54):
Sure.
So at the ChatGPT moment, I was one of the first people to start experimenting with ChatGPT, and I started using it all the time, pushing the limits, using every new AI tool that came out. And after a few months, I started having questions about who was behind the curtain, who was making decisions, where it
(01:16):
was drawing information from. And I couldn't find a lot of people who could explain it to me outside of computer science people, who I didn't necessarily understand. And so I started trying to help other people make sense of it while also enrolling in a Master's program, one of the only ones in the world at the time, at the University of Edinburgh in Scotland that specifically focuses on data and artificial intelligence ethics.
(01:37):
And so I've been fully immersed in AI ethics and still using and experimenting with all the tools. I think they're amazing. I think they're cool. I think it's an amazing time to be alive. But now I also have a very nuanced view of all the ways that this can go wrong and all of the risks that are underneath some of that shiny bright surface.
Andreas Welsch (01:58):
Awesome. I love that perspective. Hey, I tried this out. I was curious. I wanted to learn more. And I think it just shows how broad this space of AI is and how curiosity helps us learn more about these things.
And you, almost like Alice going down the rabbit hole, learning more and more in a positive way.
Rebecca Bultsma (02:13):
Absolutely.
With a little Mad Hatter on the side, for sure.
Andreas Welsch (02:17):
Exactly.
Alright, wonderful.
Hey folks, if you're just joining the stream, drop a comment in the chat where you're joining us from. I'm always curious to see how global our audience is. And also, if you want to learn how you can turn technology hype into business outcomes, take a look at my book, the AI Leadership Handbook, on Amazon and anywhere you get your books and audiobooks. Now Rebecca, I know we're both rushing out of meetings, rushing
(02:39):
into the next meeting. So our time today, yeah, is a little shorter than we had initially planned. That's why I wanna jump straight into the topics that we discussed. And I feel for a number of years we've been talking about this concept of ethical AI, trustworthy AI, responsible AI; it's evolved over the last 7, 8, 9 years.
(03:00):
And I'm wondering, what are you seeing? What has changed from your perspective when we talk about all of these different concepts?
Rebecca Bultsma (03:07):
We've been talking about responsible and ethical AI obviously for a long time, because AI's been embedded in a lot of our workflows and our daily life in a lot of different ways. For example, the algorithms powering social media, right? There were questions around teenagers using it, a teenager looking at maybe a post about an eating disorder, and then the
(03:27):
artificial intelligence powering the algorithm pushing thousands of those notifications on them. So we've been talking about AI ethics for a long time. What's changed, obviously, is this introduction of generative AI, and because it can act like a human and do really human kinds of things, it raises a ton of new ethical issues to the front of mind for
(03:52):
most people, things that we haven't had to think about yet, or with this level of urgency. And so it's always been there in the background. I could give you a hundred case studies of how AI ethics, or the lack thereof, was super problematic before generative AI, but now it's an even bigger deal just because the capabilities have expanded so quickly and there's so many
(04:14):
more risks.
Andreas Welsch (04:15):
So now we take these models with the worldviews encoded, probably largely Western or US-based worldviews, and we put them into our tech stack. We put them into the most critical pieces of our infrastructure, whether it's dealing with customers or with employees or leaders, and getting coaching for employee conversations and whatnot.
(04:36):
And we're even adding one thing on top and we say, hey, you do this autonomously or semi-autonomously. Now with agency, it feels like we haven't even really started it at the foundational level, and yet many organizations are looking to push this. Yeah, and I'm an adjunct professor. I work with undergrad students, and one of the modules I teach is around ethics, and I feel it's
(05:00):
such a broad and big topic, but it's also very hard to get your arms around. Oh yeah. And it's hard enough to teach humans to behave ethically. Now we want to do this with agents, or we need to do that with agents. If they act on your behalf, if they make decisions, maybe optimize more for a business goal rather than for the collective good. What are the new challenges that you're seeing there,
(05:22):
and things that leaders need to be aware of?
Rebecca Bultsma (05:25):
There's honestly so many. I highly discourage organizations from embedding things like agents before their actual people have a really solid understanding of AI, generative AI in general: its capabilities, its limitations, the connected risks, because those are just amplified, obviously, once you bring in something like an agent. And you're totally
(05:46):
right, there is this major risk of bias. We call it WEIRD bias; it's an acronym for Western, Educated, Industrialized, Rich, and Democratic societies, which are way heavily overrepresented. So that's its own kind of problem. But there's this risk already of relying on AI systems because, as you mentioned, they have an ethics system encoded
(06:09):
into them that reflects the builders who built it and a flawed data set that it was trained on, the internet. And so the risk of us using it without awareness is huge; us without having enough awareness to oversee an agent and make sure that it's aligned with our company values. There could be major reputational damage if it acts
(06:30):
in a way that's contrary to your values. There's shopping agents now; you can buy right from within ChatGPT, or a bunch of the major financial institutions in the world have come together and come up with a way that agents can pay other agents. So if I'm looking at a great pair of shoes and I tell my agent that as soon as they get below $300, it can just go buy
(06:52):
it, use my account. And so there's obviously risks connected to that, that we don't fully understand. ChatGPT has released browsers and agents, but there's malicious actors out there who know very simply how to maybe hack these or redirect them to incorrect websites, and you end up sharing personal information with the wrong places. There's just so many unknowns with those kinds of external
(07:15):
agents. Obviously, within something like Microsoft Copilot, in an enclosed ecosystem, that is more secure. There are some really amazing things you can do with agents, and that's the problem: the word agent is murky, right? Like a travel agent; it doesn't come with you to do stuff. It's a weird word, and it means lots of things in lots
(07:39):
of different contexts. But you just don't want an AI system making decisions ever, in my opinion, because it can't explain how it arrived at those decisions in any sort of way that's explainable or defendable. And then when it goes wrong, who's accountable? It's not the agent.
(07:59):
You can't blame the AI. It will be you. It will be your company. If it's biased, if it spends $10,000 on shoes instead of buying one pair, who do you blame? You can only blame yourself, which is why you really need to understand it and have really good control before you do this.
Andreas Welsch (08:19):
I think it's a really interesting time that we're in, for so many reasons. First of all, these things are now possible. Some of them are still in their infancy, but you can see where this is going. To your point: shopping, financial transactions, automation built into your browser. Yes, I can see where this is going. And like I said, many of the things are pretty incredible. But then, thinking about it, what does that agent optimize for?
(08:42):
Does it optimize for the provider's goals, which could be revenue maximization, or does it optimize for the customer's goals? Which would probably be paying less or getting better service or faster shipping at no cost, or what have you. And then what is ethical in that sense? Is it the win-win? Is it maximizing the greater good for everyone involved?
(09:04):
Are you looking at more long-term relationships, customer lifetime value maybe? I give a little more here, but then you buy more next time. These sorts of things. And how do you even know how these agents decide? I think that's the big question for me now. Huge. Yeah. From your work now, working with leaders, working with organizations, advising them on ethics, things that they should
(09:28):
keep in mind: how can you ensure that agents don't just optimize for reaching that goal, but also consider other aspects that a human would consider, or that a regular business person would consider, thinking two or three steps ahead?
Rebecca Bultsma (09:43):
Honestly, it's tricky, because as of right now it's hard to know what's going on in the background of how an AI is working and how it's making decisions. One of the most interesting stories I've ever heard is, this was a few years ago, when they were trying to teach image recognition to an AI system. They showed it thousands of pictures of wolves and dogs
(10:04):
to teach it to learn the difference between a wolf and a dog. Yeah. And then when they tested it later, the only reason it knew what was a wolf and what was a dog is that the wolf had snow in the picture. So even if you showed it a picture of a Frenchie and there was snow in the picture, it decided it was a wolf, which shows that we don't know how it's thinking in the background,
(10:25):
and it may be very different than how we think it's "thinking," in air quotes, when it makes a decision. And so I think it's super important that we take that into account. Obviously, it will get better, but a lot of it's gonna depend on the very specific instructions and tasks that you provide, keeping things very narrow to start and very
(10:45):
traceable so that you can understand where things go wrong, and start with really low-stakes things. Obviously, with any AI use, start with things that are low stakes, right? Because low stakes hopefully means very low consequences if it goes horribly wrong. Yeah. And the agents will get better and we'll have some of these questions answered, but we do need audit trails and
(11:06):
mechanisms, because there isn't necessarily any government oversight over this right now. But you would need some internal company oversight for sure.
Andreas Welsch (11:14):
Yeah, in a way, you might think that once things do happen, bad things happen, unintended consequences happen, that then regulation or broader changes are enforced in a sense, which is always unfortunate in that sense. But I think this part about auditability, traceability, peeking into the black box, checking the logs, what is
(11:36):
happening? Maybe not so much on an individual user level, but for those who are looking at bringing agents into the business, I think those are features that are so important. Because, like I said, at the end of the day, if something happens, you will want to trace what has actually happened and why; and not just want to, you will eventually need to have something. Oh yeah.
(11:56):
What has
Rebecca Bultsma (11:56):
happened.
Absolutely.
Absolutely.
Andreas Welsch (11:59):
So what do you then recommend leaders do at this critical point in time, when all vendors are shouting, we've got agents, build it on us or on our platform, come here? And you feel the pressure that you should be doing something because your friends on the golf course are doing something or your competitors are doing something. What do you need to
(12:19):
do then?
Rebecca Bultsma (12:20):
Take a breath. Honestly, it's not like a race to the bottom right now. And there's new legislation that is coming in that may impact you. And I haven't done a deep dive into it yet, but some of the new laws that they introduced in California, for example, specifically target chatbots, and they're designed to protect teenagers and kids, but they also potentially impact
(12:42):
anyone who has any sort of chatbot on their website, and introduce new liability: if in any way, shape or form it offers any sort of advice or emotional support, anything that crosses into that territory, suddenly it's governed by different laws. And there's other states who are introducing similar legislation. And you might just wanna focus on getting your whole staff super,
(13:03):
super trained on generative AI and working with things internally, like maybe Microsoft Copilot, working on internal agents, before you start thinking about introducing things externally to the public. Until we have a better understanding of what the legal landscape looks like, what the liability landscape looks like, and until we have a really good understanding, wait for it to go horribly wrong for somebody else first is my advice, because it
(13:25):
will. Let's just see what happens. So my advice is to take a breath.
Andreas Welsch (13:29):
Sure. I think that's actually really good advice. It's not as urgent or as imminent as it might seem when you read the news. And there's still enough time that you need to spend anyways learning and experimenting and seeing what works and what doesn't. And also the topic about starting on internal use cases, in internal scenarios first, and cutting your teeth there
(13:50):
before you put something out in the open, is a good one. I think it was in the July, August timeframe that McDonald's was in the news. They had connected an agent to the HR backend system so applicants would get information more quickly. And researchers actually found that they were able to hack the agent and steal the password, get access to, I think it was 6 million records of previous job applicants and all
(14:13):
these kinds of things. So, speaking about things that can go wrong, there are more of these examples certainly coming, and it's usually these moments in time when the reaction is, okay, let's take a breath and let's see what we should really be doing. So yes, agree. Now, I said this one is gonna be a short episode, and I know we're getting close to the end of the show. Rebecca, I was wondering if you can summarize the three key
(14:33):
takeaways for our audience from today.
Rebecca Bultsma (14:36):
I think that my three key takeaways would be to invest some time understanding as much as you can about agents, how they work. Experiment with them on your own first, before introducing them into your organization or mandating them into your company. Just experiment with accounts that you have in your personal email and your personal name to figure out how they work and where they fail.
(14:57):
That would be number one. Number two, know that these are not foolproof. There's a lot of problems with them. You just brought up a good one with McDonald's. On a large scale and on a small scale, there are issues. And number three, just take a breath. There's no rush. We're all figuring this out. Things are gonna go wrong, but maybe just don't let it be you
(15:17):
who is the trailblazer in things that are going wrong. Just sit back and observe and take a breath.
Andreas Welsch (15:23):
Wonderful. Thank you so much, Rebecca. It's been a pleasure having you on the show and hearing your perspective on how we can teach agents ethical behavior. Folks, if you want to connect with Rebecca, you find her on LinkedIn. And yeah, thank you so much for spending time with us today.
Rebecca Bultsma (15:37):
Thanks for
having me.