Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Lauren Wallace (00:23):
Privacy is
a fundamental human right.
It is a baseline expectation that allows us to live our lives
in a self-determined manner.
In the U.S., historically, we've treated the value of
that personal information as belonging to the
company that collected it.
In the EU, privacy is a fundamental human right and
(00:44):
the right of determination around the use of your personal
information belongs to the data subject—the human.
Galen Low (00:50):
Is regulation
the red tape that will hold
regulated industries back in their AI transformation?
Lauren Wallace (00:56):
I think
heavily regulated businesses
actually have an advantage in designing and implementing
compliant AI programs.
They're already baked into existing regulatory frameworks.
About a year and a half ago, we launched a series of monthly
lunch and learns on ethical AI.
We talked about transparency, we talked about accountability.
This is the most fun.
Like everybody loves war stories, right?
(01:17):
I pulled out examples, shocking, hot-off-the-presses things
that have implicated these principles of transparency
or bias mitigation.
Galen Low (01:30):
Welcome to The
Digital Project Manager
podcast—the show that helps delivery leaders work smarter,
deliver faster, and lead better in the age of AI.
I'm Galen, and every week we dive into real-world strategies,
new tools, proven frameworks, and the occasional war story
from the project front lines.
Whether you're steering massive transformation
projects, wrangling AI workflows, or just trying to
keep the chaos under control, you're in the right place.
(01:53):
Let's get into it.
Today we are lifting the lid on AI transformation in regulated
industries and how a privacy-first approach to AI can
actually accelerate innovation, drive cross-functional
collaboration, and pave a smooth path towards sustained impact.
With me today is Lauren Wallace, strategic advisor
and former Chief Legal Officer at RadarFirst.
(02:14):
Lauren has an extensive background spanning legal,
business development, and executive roles at brands like
Apple, Microsoft, and Nike, as well as venture capital and
private equity-backed startups.
She's known for her practical and accessible guidance to
legal, product, marketing, and development teams around
the responsible use of AI.
And she's a force of nature when it comes to
(02:36):
navigating compliance in regulated environments.
Lauren, thanks so much for being with me here today!
Lauren Wallace (02:41):
Thanks
for the invitation.
I'm excited to be here.
Galen Low (02:44):
I'm excited
to be here too.
We were just jamming in the green room.
I wish I recorded that bit.
I'm very excited about this.
We've got a good energy match.
We see eye to eye in some ways and in some ways we may not.
I'm really fascinated by your background because you've
got this sort of mix of, like, fast-paced startup culture and
the arguably less fast pace of regulated industries like, for
(03:07):
example, financial services.
I think we can zig and zag today, but here's the roadmap
that I've sketched out for us.
To start us off, I wanted to get one of those, like,
big burning questions out of the way, that pressing
but somehow paradoxical question that everyone
wants to know the answer to.
But then I'd like to zoom out from that and maybe just
talk about three things.
Firstly, I wanted to talk about what a privacy-first AI
(03:28):
strategy actually looks like at the component level and what executives in regulated industries need to put into place to achieve it, as well as maybe what the benefits are.
Then I'd like to explore some examples of what a privacy-by-design approach looks like in practice and how senior department leaders bringing their company's AI vision to life can set themselves up to avoid
painful issues in the future.
(03:48):
And then lastly, I'd like to explore how the competitive
landscape will look after five years of organizations
of various sizes having gained momentum on their AI strategies.
That was a big mouthful.
How does that sound to you?
Lauren Wallace (04:01):
Well,
except for the five year
perspective part, Galen.
I think we can, I think we can cover a lot of ground here.
Galen Low (04:06):
Alright.
I mean, you know, we'll bust our crystal balls out and we
can, yeah, we can go 3, 5, 10.
We'll play.
I thought I would start off by asking you like one
big hairy question, but I'm gonna tee it up first.
When we think of heavily regulated industries like
financial services or healthcare or energy or telecom, et
cetera, most of us think of, like, limitations that force
(04:26):
organizations to move slowly.
And then the follow-on thought from there is that AI
transformation in regulated industries will also move slowly,
leaving these industries, like, in the dark ages while the
rest of the world accelerates into their AI strategies.
So my big hairy question is: is regulation the red tape that
will hold regulated industries back in their AI transformation?
(04:47):
Or is it the foundation that might benefit everyone in the
end or something else entirely?
Lauren Wallace (04:52):
Yeah.
I'm gonna go with something else entirely.
Well, you know, start somewhere in the middle on
the options you've described.
I would turn this kind of into the middle and say, I think
heavily regulated businesses actually have an advantage
in designing and implementing compliant AI programs.
And that's because the principles that underlie AI
(05:12):
governance, like transparency, accountability, and bias
monitoring and prevention, these are essential elements of
an AI governance program.
They're already baked into existing regulatory frameworks.
And that's a bigger scope, I think, than some
people necessarily think.
'cause it includes things like GDPR, which regardless
of what industry you're in, you're probably
(05:34):
pretty familiar with it.
But if you are in the banking industry, you're
subject to fair lending laws.
If you hire people for your business, you're subject
to equal opportunity laws.
There are a host of other civil rights and consumer protection
frameworks that, in particular, may include prohibitions or
restrictions around using algorithmic decision-making.
(05:55):
These rules, like the model management rules that the banks
have used for years, they've been in effect for ages.
It's called algorithmic decision-making, and it
easily translates into, you know, what we're looking
at with all these AI tools.
So these big, heavily regulated institutions in
some of the sectors that you described, they already have
robust existing compliance programs for those frameworks.
(06:17):
And they have resources assigned for governance
to these frameworks.
And they have controls and they have tooling designed
to ensure compliance with these frameworks.
So I'm not saying it's an easy thing to bolt on AI governance
to that, but at least you're starting from someplace when
you want to add AI compliance.
I've worked with a lot of these big, heavily regulated
(06:38):
institutions on their AI compliance programs and, you
know, it runs the gamut.
Some of 'em are kind of coming in cold, or they
have an institutional allergy to AI and you kind of
have to get around that.
But there are plenty of them that are very sophisticated.
They have these existing programs and they're
(06:59):
just looking at how do we expand this?
How do we bring on the talent and the expertise that we need?
How do we bring in our community in a new way so that
they can participate in AI compliance at the larger scale
that we're working against?
They've also, and I know a lot about this, been
subject to a very complex patchwork of security
incident notice requirements.
(07:20):
Now, my community, my RadarFirst people
out there, I see you.
I love that you're using RadarFirst for this.
That product, which I'm a giant fan of, has about
500 global privacy and security incident notification rules
on the books that you can test your incidents against.
I think people need to understand that it doesn't
(07:40):
matter where the incident arises, whether it arises
from a misdirected fax or from an AI that you've enabled
to use personal information.
An incident is an incident.
If personal information is involved, you are
subject to these incident notification rules.
So you gotta know that and not think that you can kind of
set AI incidents to the side.
You certainly can't just, like, pin the blame on your AI.
(08:03):
We've seen that.
That doesn't play out very well.
So you need a process to manage incidents, but now
you need to enhance it.
You are not just starting from scratch.
That was a very long answer to your short
question, but I actually think the bigger challenge is for
the mid-size businesses.
They may be facing these regulatory headwinds for the
first time when they implement AI into their workflows, and
(08:24):
then where do they get started?
They may have never had tools where they could
analyze data at scale, which these new, and some of them relatively
inexpensive and fairly accessible, AI products might enable them to do.
But now all of a sudden they're doing processes,
they're using information in ways that they haven't
before and maybe exposing that information to
(08:44):
a host of new risks.
So for those folks starting the governance process from
scratch, I say, you've got your work cut out for you.
That was my long answer to the short question.
Galen Low (08:53):
I was like, waiting
for the hopeful bit afterwards.
You've got your work cut out for you, but.
Lauren Wallace (08:58):
Let's go
to the hopeful bit, then.
Because I think you can always start with the basic principles
of privacy by design and then spread those principles across
to protection, not just of personal information, but
protection of the company's proprietary information.
'cause you got a whole bunch of new risks here when you
put just company information onto ChatGPT or whatever.
You wanna assess and bolster your security posture.
(09:21):
We have a stunning new attack surface with AI.
So you gotta make sure you've got your security dialed.
And most of all, and this is what I hope we'll spend most
of our conversation talking about today, is getting your
internal conversation going to really understand what you want
to do with AI, what you think you can do with AI, and what
is the real ROI that you hope to achieve as we enter that
(09:42):
conversation in our communities.
I like to separate it into kind of two vectors.
One is what are your internal uses of AI for productivity,
for replacing tools that you might currently have, for
just functional enhancements, and what are your product
development use cases?
Galen Low (09:58):
Mm-hmm.
Lauren Wallace (09:58):
And there's
very different considerations
for each of those.
Galen Low (10:01):
I definitely
want to get into that.
I'm glad you mentioned the muscle, and I'm glad
you mentioned GDPR because I come from the digital
world, so there's two things in my world where we found
ourselves chasing our tails in terms of regulation.
One was GDPR, the other one before that was
accessibility, right.
With the ADA, we were just running around
like chickens with our heads cut off because we
(10:21):
were like, wait a minute.
This sounds so legal.
We don't have a process for this.
We don't even know.
We don't even have, like, the beginnings of a conversation
about regulation and fines and reporting incidents, and even
just, like, data visibility.
We didn't have all of these things and we
were really scrambling.
And there's some folks who came out ahead on, especially
(10:42):
on the accessibility side.
I found that, like, it became a requirement and we needed
to figure it out fast.
The agencies that did help their clients kind of build out,
you know, web solutions that were accessible, experiences
in the digital world that are accessible, and, like, got
a leg up, but it was such a scramble because we were
looking at it the other way, which you mentioned, right?
Which is like, yeah, let's just, like, try all the tools.
(11:03):
Let's move fast.
Who cares?
This is digital, this is technology.
We're not bound by all of these rules.
And suddenly we got a taste of that.
We're like, oh, okay.
Lauren Wallace (11:12):
Guess what?
Rules?
Yeah.
So many rules, and so many rules that are designed to express
complementary principles,
but may exercise them in very different ways.
So if you have GDPR versus the CCPA, the California
act that was followed up by the CPRA, they come from the
same good place, but they implement very differently.
(11:34):
And so how do you as the digital project manager try to make
sure? And I'll tell you, this is what a lot of my customers,
my clients, do: just try to find that high watermark.
Try to find what's consistent across the regulatory
environments that you're operating in, and treat everyone
as if they were subject to that same high, principled approach.
And that kind of trying to, like, run along the bottom and do
(11:56):
the least that you can do in every case, that might seem
like that's gonna save you some regulatory exposure, but
it's gonna cost you so much more in implementation and in
headspace when you're trying to go do interesting things.
So just, like, pick the best way that you can do it and do it.
You mentioned accessibility.
This is a subject that is very near and dear to my heart, and
(12:16):
I'm working with some other groups on that, and I would love
to come back and have another conversation with you about
accessibility in AI, because I think we're making a lot of
assumptions that guidelines like WCAG, where you're just
kind of checking the box on these attributes of your
website, are suitable when you're dealing with a whole different
kind of cognitive load that people are using or not using,
(12:38):
or may not have access to.
When they're using AI either in a direct interaction, like
in a consumer-facing tool like Copilot or something, or when
they're interacting with AI in their, in the physical world.
So when we look at people with disabilities who have been
really empowered by things like Alexa and HomeKit to
(13:00):
set up their homes or their environments in ways that
are so much better and more convenient and easy to use.
But these things are being moderated by AI in the
background, and so you don't see it, you don't tell
it what to do, but again, your operating environment
assumptions have been made for you by someone else, and this is
incredibly powerful, and people deserve a lot more from this.
(13:23):
So I'm excited to have a whole separate conversation
about accessibility with you.
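Looping back to the high-watermark approach Lauren described a moment ago, here is a minimal sketch of what "take the strictest requirement across the regulatory environments you operate in and apply it to everyone" could look like in code. The regulation names, requirement areas, and strictness scores are illustrative placeholders, not legal guidance.

```python
# Minimal sketch of the "high watermark" compliance approach: for each
# requirement area, adopt the strictest obligation found across all of the
# regulatory environments you operate in, and apply it to everyone.
# Regulation names, areas, and strictness scores are illustrative only.

from typing import Dict

# Hypothetical strictness scores per regulation (higher = stricter).
REGULATIONS: Dict[str, Dict[str, int]] = {
    "GDPR": {"consent": 5, "data_subject_rights": 5, "breach_notification": 4},
    "CCPA/CPRA": {"consent": 3, "data_subject_rights": 4, "breach_notification": 3},
}

def high_watermark(regs: Dict[str, Dict[str, int]]) -> Dict[str, int]:
    """Return the strictest score for each requirement area across all regulations."""
    watermark: Dict[str, int] = {}
    for requirements in regs.values():
        for area, strictness in requirements.items():
            watermark[area] = max(watermark.get(area, 0), strictness)
    return watermark

if __name__ == "__main__":
    # Apply the high-watermark standard to every user, regardless of jurisdiction.
    print(high_watermark(REGULATIONS))
    # {'consent': 5, 'data_subject_rights': 5, 'breach_notification': 4}
```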
Galen Low (13:26):
I would love that.
Equity has been coming up and
equal access has been coming up a lot in these conversations,
and I think you're right about the sort of high-water level.
I also worked on some projects in accessibility,
web accessibility with the WCAG stuff, where it definitely
was a box-ticking exercise and it definitely was a
couple weeks before they were due to be penalized.
(13:47):
And it definitely was sort of walking along the bottom, but
it actually added even more risk, because it sounds to a
lot of folks, it might be like, ah, but you know, Lauren,
aren't we over-engineering?
If we're doing this, like, if we're engineering to the highest
level of regulation, of, like, you know, the common points.
But I've actually seen it.
I know that scraping the bottom and just doing
(14:08):
the minimum is actually more expensive in a lot of ways.
You come back to it later on and you spin up all these other
initiatives to, like, improve it.
It's just putting off the inevitable.
Lauren Wallace (14:17):
Yeah.
You don't wanna have to go in and do point fixes later on.
And also, I think another thing about selecting this high
watermark from a regulatory perspective: it's going to
encourage an ethical approach to the work that you do.
So even where you don't have the regulatory obligations spelled
out somewhere that you can, like, point to chapter and verse, you
might say, well, we still think this is the best way to do it.
(14:38):
I think that's important just because we wanna be good
corporate citizens and we wanna be able to tell our kids
that we didn't, you know, do terrible things in our jobs.
Also because you don't know going in precisely
where the challenges to you might come from.
It might not come from a state regulator or a federal
regulator or a European regulator.
It might come from a class action.
(15:00):
It might come from some entrepreneurial legal team
somewhere that's determined there's an opportunity here.
It might come from your competitors.
And so simply being able to point to the fact that you met some
kind of notice requirement in some regulation
somewhere, that's not enough.
You need to show that as an organization, you are
operating in accordance with ethical guidelines that you
(15:22):
had in your organization.
And I know that one of the things we wanna talk about
today is achieving buy-in.
You know, as a project manager, like, that's your day, right?
It's getting people on the same page. But how do you
communicate about that page
you want people to get on?
What are the tools that you have available to you?
What are the words?
What's the vocabulary?
Galen Low (15:41):
I love that.
I wonder if we could dive in there because, so the note
I had here was, like, I've seen you speak on the topic
of developing a privacy-first AI transformation strategy.
The thing you're saying about just being ethical,
and sort of communicating that so everyone's
on the same page, that's also resonating with me.
I wondered if maybe, though, you could, like, break down
for my listeners the core components of, like, what a
(16:04):
privacy-first approach to AI transformation looks like.
Maybe some of the benefits, but even if we wanna, like,
cast wider than just privacy, because I think privacy is
part of that ethical approach.
It's still part of the fabric there.
So if you wanted to zoom out a little, I'd welcome it as well.
Lauren Wallace (16:20):
Well, I'm
gonna zoom in first and you
know, get a little personal, a little bit controversial maybe.
To me, privacy is a fundamental human right.
There's a baseline expectation that allows us to live our lives
in a self-determined manner.
And organizations and states should cautiously
infringe on that.
Right?
(16:40):
Or should I say, they shouldn't, right?
But if they're gonna, like, be cautious about it.
Please.
Where I'll get a little controversial is looking
at sort of the global landscape for privacy.
In the EU, which is the bellwether for treating privacy
as a fundamental human right and really specifically articulating
what they mean by that in GDPR, and then by extension in the
(17:01):
EU AI Act, which relies really heavily on GDPR to say, okay,
let's start with privacy and then let's go from there.
That's the DNA of the EU.
Privacy is a fundamental human right, and the right
of determination around the use of your personal
information belongs to the data subject, the human.
In the U.S., historically, we've treated the value of
(17:26):
that personal information as belonging to the company that
collected it, belonging to the company that uses it, because
they extract value from it, and so surely, if they extracted
the value, it must belong to them.
That's where I'll get a little controversial. Less
controversial to say that there are other jurisdictions around
the globe where the value of the personal information, in
(17:46):
fact, the specific content of the personal information,
is deemed to belong to the state, because the state is
going to use it in the best interests of the data subject.
So these are three very different baseline
theories of who owns and should get the value of
the personal information.
So, coming around from that:
I think when you look at your company's ethical
guidelines, you say, okay, where do we fall in this mix?
(18:10):
This is where you start.
It's not with a list of principles like fairness,
accountability, bias mitigation.
These are good, these are great words, and they can
help you build the template that you're all gonna fill
out together and you're gonna share with your company and say,
these represent our values.
But at the core, do we as an organization believe that
we serve by extracting value from personal information?
(18:34):
Or do we believe that we serve by helping our customers use
personal information for the benefit of the data subject?
And you're allowed to pick, you know; in the U.S. you're
allowed to pick those things.
I wanna help those companies that wanna serve the
interests of the data subject.
Galen Low (18:49):
Right.
Yeah.
You had mentioned the sort of European model being the
sort of bellwether model for individual privacy as a right.
It's funny when you say that, because, like, I'm gonna
zoom it out to the North American experience for me,
and that is, yeah, I've kind of accepted the fact that
my data is like currency.
It's transactional.
When you said, oh, because they've extracted
(19:10):
the value, they own it,
I was like, yeah, it makes sense.
In my head I was like, oh yeah, I didn't pay money.
Because, like, I find that, you know, in some
ways the consumer in me is like, okay, well, if I don't
have to give you money and it's quote-unquote free,
then that's fine.
And I think that's where the, like, I'm maybe
zooming out too far, but where the end user starts
allowing an organization to be less cautious with
(19:31):
my data, because I'm like, sure, take my email address
and then enrich the data.
Like, I work in media, right?
So I'm like, we're looking in, like, I've seen some of the
backend of some of this stuff.
And I'm like, some of it's better than money in terms
of value, but the ownership thing, I was like, wow.
Okay.
Yes.
And that's the thing that always stuck with me with GDPR
is, like, people taking control of their own personal information
and having agency over it, but then the gap of, like,
(19:54):
but who's got time?
Like, I barely have time to delete photos
outta my camera roll.
Like, I'm not going and figuring out where my data lives and,
like, writing requests to be like, could you please,
you know, delete this data permanently from your server.
People are doing that.
It just wasn't me.
And I was like, okay.
We almost allow it in some ways.
Lauren Wallace (20:09):
Kudos to
organizations like NOYB,
which stands for None Of Your Business, which is
Max Schrems' organization.
Did you know that?
NOYB, None Of Your Business.
Because they can exercise this right at scale.
For us as individuals, it's excruciatingly
difficult to exercise this right individually.
Now, a lot of the states, and this is, I think, something
(20:30):
that's moving forward with AI regulation that has been
kind of missing from privacy regulation, is the obligation
to get consent, to inform and then to require consent, and
that consent must be informed.
Consent in the privacy world has been implied in
so many of our transactions.
And when you go and, you know, you click through that cookie
(20:51):
banner or whatever form in the product that you're
using, always be mindful of the idea that this is a
baseline maxim in privacy.
I'm sure you've heard it a million times, but if the product is
free, then you're the product.
Galen Low (21:05):
Yes.
Yeah.
Lauren Wallace (21:07):
But it's
very easy to get seduced
by the functionality that is being promised to you.
And sort of think, well, my, it's just me.
It's just me sitting over here in my space in Portland, Oregon.
How much could my information personally be worth?
But in the aggregate and in the enriched environment, as you're
talking about, and this is one of the emerging risks that comes
(21:27):
from AI, an emerging privacy risk that exists in AI because of
the capability of enrichment,
and inferences that can be drawn from enriched data.
So the information that you provided to this one vendor
in this one instance, in order to buy this one thing, that
may, standing alone, not have a ton of value, pennies maybe
to somebody, but aggregated and enriched and resold to third
(21:50):
parties that can use it for other things, now your dossier
data value is much higher.
And how can you, as the data subject, go into
each of the, you can't.
That's not available to you.
You can't remediate that.
So again, my heart goes out to the organizations that want to
be ethical in the first place and have deep awareness of the
(22:13):
data that they have, where they got it, if they had the consent
to use it in the first place.
Are they using it in a manner that exceeds the scope
of that initial consent?
And could they defend that somehow,
that this
expanded use is consistent with the original intention?
Was the data subject informed, at the time that
they were supplying this, of what their consent
(22:34):
was gonna be used for?
Again, these are not things you wanna have to go back
into your legacy data set.
Now you go out and acquire a company, you
buy 'em for their data.
Right?
You're not buying 'em for anything else.
I was, this is a little tangent, but I was thinking
about this the other day.
There's a big story in the news recently.
I'm not gonna name names, but a big bank had bought a startup.
(22:54):
That startup provided services to people who were seeking
financial aid, students who were seeking financial aid,
and the big bank bought this startup because the startup
said that they had 4 million names of people who had bought
their student loan services.
And the bank obviously saw this as a feeder of information
(23:17):
about individuals that they could use to then target those
individuals as prospective customers for the bank.
The customer acquisition costs to the bank are about $150
or so, depending, per successful customer acquisition,
somebody who goes ahead and opens a checking account, right?
So they were willing to pay $175 million for
these 4 million names.
(23:38):
That's about $40, $45 a name.
So you can see, with that delta, 40 to 150 or so,
they thought it was about, it's worth about a hundred
dollars per person to go in and buy a hydrated, vetted list
of prospects for their banking services.
Well, then they found out those 4 million names were fake.
They'd been synthetically generated, you know, using
(24:00):
some kind of AI program.
And as soon as they actually went and tested those
names, they discovered that they were all dead ends.
And so, $175 million in this project that they had done,
they sued the company, they backed outta the acquisition.
You know, maybe they got the money back.
I don't know.
I thought that's so interesting.
They bought this startup not for its technology, not
(24:22):
for its people, but solely for the names on the list.
And given the public information about the acquisition, about
the lawsuit, we could see the attributed value very
clearly of each of those names.
The company was worth nothing if those 4 million
names weren't accurate.
So take that into your own life.
And think, okay, what if I or one of my kids was in
(24:45):
that data value pipeline from a very early age and now up to the
age where you might be opening a checking account or getting
a mortgage from this bank?
So that transactional value that you see when you think, sure,
click to buy, accept, whatever,
my data in this moment doesn't feel like it's worth very much,
but if you look at your lifetime value, well, I'll tell you,
(25:07):
this big bank thought you were worth at least a hundred bucks.
So would you do something different with that hundred
dollars than, you know, that decision that you just made?
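To put rough numbers on the per-name math Lauren walks through, here is a quick back-of-the-envelope calculation. The figures are the approximate ones from her story as told in the conversation, not verified deal terms, and the parties are intentionally unnamed.

```python
# Back-of-the-envelope math for the acquisition story above.
# Figures are approximations from the conversation, not verified deal terms.

purchase_price = 175_000_000          # what the bank reportedly paid for the startup
names_acquired = 4_000_000            # claimed customer names on the list
acquisition_cost_per_customer = 150   # rough cost to acquire a banking customer directly

price_per_name = purchase_price / names_acquired
implied_savings_per_name = acquisition_cost_per_customer - price_per_name

print(f"Price paid per name: ~${price_per_name:.2f}")                   # ~$43.75
print(f"Implied savings per vetted lead: ~${implied_savings_per_name:.2f}")  # ~$106, the "about a hundred dollars" Lauren cites
```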
Galen Low (25:15):
There's so much
there because it's such
an analog to what you were saying earlier, about a lot
of folks thinking about AI and AI transformation
in their workplace as the thing they interface with
and the skill that they need to develop to do their job.
It's not always evident, the thing you just described,
which is, like, what is in that black box of what happens.
And usually we think of it as, well, we don't know how
(25:36):
some of these models work, this machine learning, they
are fixing their own code and we don't fundamentally know
why some of this AI works, but that's not the black box
we mean here.
The black box we mean here
is actually the stuff you don't see after you
hit submit, or after you submit that prompt, or after,
like, where's it all going?
How is it being aggregated?
Is it transferring hands?
And, like, going back to that model, right?
(25:58):
Where you're like, you sit down and you decide,
okay, you know what?
We are not the EU model.
We definitely feel like we get value from our customers' data.
Let's start there.
There's still so much work to be done, I guess, to
understand then where all this data goes, what we are
responsible for, and, like, what ethically are we responsible
for as an organization.
(26:18):
I think that's
an interesting question that, I mean, I don't know if we need to
answer today, but we could try.
Lauren Wallace (26:25):
It's
kind of a big question.
There are so many big questions when we step into this realm
as citizens, as parents, as consumers, as corporate
citizens. There are just so many big questions, and I find
that it's a pretty giant ocean to boil, and I'm just standing
over here on the edge of it and I've just, like, got a big
(26:46):
lighter trying to boil that ocean with. It's too much.
So for me, I like to start with a use case.
What is the actual thing you're trying to achieve with this
one actual application and this one data set that you might
feed into this application?
What is the ROI you really hope to extract from that?
Are you saving money?
Are you making money?
Are you gonna be able to add customers, and then start to
(27:08):
break it down from there?
It's too much. In the compliance circles,
we talk a lot about tone at the top,
which I think is a very valuable construct: how
do our executives, our leadership, our board talk
about their values, and how do they send those values
down into the organization?
But that only gets you so far where we're operating
(27:30):
at the project level.
You gotta break it down again and say, okay, what
are we trying to achieve?
What's the use case that we're after?
What is the actual ROI?
How do the ethical principles that maybe our leadership,
or maybe the regulations that we're subject to, how
do they align to the things that we're actually gonna do?
And if we don't know, because of that black box,
(27:52):
is it safe to proceed?
And I've said no.
In my capacity as chief legal officer at RadarFirst,
I'll tell you what I mean.
We looked at hundreds of vendors over the course of the year.
I actually did a little readout on it.
There was a pretty high proportion that we declined.
We said, we don't have enough information, or they haven't
posted a trust center that we can really dig into and
(28:12):
get the information that we need, or they're using
underlying models that they're not disclosing to us.
Then I did some research and discovered that maybe they're
also over here, and, you know, that's not a tool that
we've vetted. So I think feeling confident saying no
is absolutely fine some of the time, and that's where
it helps to have the ethical framework where you say,
okay, one of our ethical principles is accountability.
(28:35):
Let's say we pick that one.
Well, if I don't know the answer to one of these questions in
this kind of straightforward analysis of what is this product
supposed to do for us, who's gonna be accountable for it?
I can't just turn to the vendor and say, well, I didn't really
understand what you were doing when I bought it from you.
Accountability is a tough principle.
It's very aspirational.
And it's a good aspiration.
(28:57):
But really, who do we mean when we say accountable?
Do we mean our board is accountable?
Do we mean that our senior leadership is accountable?
Do we mean that individuals on the team
are personally accountable?
And I think where you can start, again from the project
perspective, this use-case-oriented, ROI-oriented one,
is setting up teams that are
(29:19):
accountable to each other.
And when you're setting up this team for a project
that's gonna use AI,
I think it's so important to bring in your whole
community, and clearly you have product at the table.
You have engineering at the table.
You might have security at the table, but you've gotta
have, let's say, customer success at the table.
Does CS have scripts so that they can respond
(29:40):
to customer questions about this functionality?
Do they understand the functionality well enough
that, in the absence of the script, they could at least
have vocabulary to surface the question to someone else?
Product marketing: do you have documentation
that thoroughly explains, that demonstrates you
understand what this product is gonna do for you?
Marketing.
(30:01):
When you go out and talk to your community, do you know, if your
customer community, do you know whether they're in that allergic-to-AI
category or, at the other end, you know, the very eager-to-try-new-things category?
How are you gonna tone and tune your messaging so that
when you talk about new features or functionality
that you're bringing out that are AI-enabled, you're
meeting them where they are?
(30:23):
Legal, of course you got legal at the table.
I gotta say that.
Importantly, though, back to the customers: you may have
customers in your portfolio that prohibit the use of AI on
the products they buy from you.
That can be buried in the contract somewhere, like, and
it might not say AI, it might say algorithmic decision-making or using models.
You might have a 10-year-old contract that prohibits
(30:43):
this use, so you need to get legal at the table too.
And then also looking at it from a regulatory perspective.
But now you're all around the table.
Now is when you develop the accountability to each other
and say, I asked you and I did not hear back on this.
Galen Low (30:56):
I'm glad you
went there because, as I
was thinking about it, it's a complicated question, right?
Accountability.
Like, who?
And I was like, okay, where are we gonna take this?
And it's kind of like we hold each other
accountable internally because we are a team.
The question I actually had, like, crafted here, which I think
we've kind of covered, is that, like, something like privacy,
it's only as strong as the weakest link in a
lot of cases, right?
So you get a team together, and if there's, like, one person on
(31:18):
the team that's like, yeah, it doesn't matter, I'm just gonna,
whatever, upload this data set to, like, a public LLM, or
we'll just skirt by this because, like, we can get away with it.
We've gotta deliver by Friday.
So let's just get it done.
We'll circle back.
But that sort of, like, culture of being, of holding one another
accountable for that value, I think is really strong.
Lauren Wallace (31:40):
And here's
what I love about privacy as a
guiding principle that can play out in so many different ways.
Unlike so many other of the considerations that we take
into account when we're looking at development or product
designs, whatever, privacy is personal. Do a quick survey
of your organization sometime.
Ask everyone whether they or someone that they care
(32:00):
about has been the victim of a personal information breach.
I mean, on the one hand, we all get letters from all of our
proprietors all the time saying, oh, by the way, you know, and
you should set up your Experian credit report monitoring.
But if you've ever personally had your data misused,
and apparently for the benefit of somebody else,
it is a literal nightmare, and I've been through
(32:22):
it myself personally.
It's not why I became a privacy lawyer, but it's not
why I became a privacy deal.
Galen Low (32:27):
Fair enough.
Yeah, I hear you.
I hear you.
Lauren Wallace (32:29):
And I was
talking to someone recently,
this was just a week or so ago, who had recently just kind of
gone through this whole process of trying to rebuild their life
after their bank account was hacked, I think.
And then all kinds of bad things happened from that.
So when you are trying to encourage accountability
and care for each other
in this project context, when you can bring privacy in and
(32:53):
see how really personal it is. Do parents really want their
children's information to be available to anybody, you
know, with a passcode? And for the lifecycle that we talked
about when we looked at that banking example, really looking
ahead and thinking, well, this information that I put
in here, it's about my child or about my home or something.
This might be very valuable to someone five,
(33:15):
10 years from now maybe.
Can I stop that now?
Galen Low (33:18):
I wonder if we could
take this whole vertical slice
and look at it maybe from a project or product perspective,
even just from that. You mentioned tone at the top, and
yes, it's important, but also it's not the be-all end-all.
So if we started just beneath that, you know, developing a
product that's going to use
AI or algorithmic processing in some way.
(33:40):
And there's a couple pieces that you had in there, and,
you know, tell me if I'm right or wrong about this,
but, like, I'm thinking of, like, the education piece, right?
You mentioned, like, having the lexicon or even having
the empathy to have that thought of, it's just our
customers' data versus that could have been my kids' data.
And then the, like, having everyone at the table. I
know some folks listening are like, we can't have everyone
(34:00):
at the table for everything,
Lauren, like, that's gonna be so expensive.
Might work for you in financial services, but, like, you know,
we're scrappy. All the way down to, like, if somebody breaches
that value on the team, what does it look like for that team
to, like, start to repair it?
Those are kind of the three points I wanted to look at.
Lauren Wallace (34:20):
Let me look
at it a little differently.
I wanna think for a second, because you have intentional
misuse of personal information, that is a terrible thing
and it's a criminal offense in some places, but you
always have inadvertent and unintentional use of PI.
And that I think is something that we can be very accountable
about being defensive about.
Most of these tools that we go buy do have configurations
(34:43):
that you can enable
that will protect your PI, or, again, your corporate
confidential information, let's, like, kind of bundle those
together for this purpose.
There are defensive things that you can do in the first place,
first at the baseline security layer, but also with these
products as you bring them onboard and you start to use them.
Well, let's first of all make sure we do those things.
(35:05):
Let's avoid the inadvertent misuse.
So let's sweep that out, and now we get to the core
issue you're talking about, which is the intentional or
maybe careless or reckless.
Galen Low (35:14):
Right, yes.
Lauren Wallace (35:15):
Use of people's
personal information because
we do have deadlines to meet.
We've got something to bring out by Friday.
Galen Low (35:21):
Just don't do that.
Well, actually, I'm glad you said that, because when you said
earlier that you have to say no, that you've had to say no,
and when you look back historically,
you've said no a lot.
And my wife works in insurance.
Right.
And they're, you know, very heavily regulated,
especially here in Canada.
And risk and compliance is very serious, and business gets turned
(35:41):
down because of risk, which is not a lot of the case a lot of
the time for other organizations in other industries, where
they're like, we're just, we're leaving money on the table
by not having this here.
Versus, what is the risk of us taking this money
if we haven't done our due diligence on the other end?
Or, you know, on the infrastructure or on
the attack surface.
Or, I'm actually really glad you said that.
(36:02):
Right.
Just don't, which is, like, that is a skill.
It's a tough one, right?
Even when it's your job, I imagine, being the one
who's saying, no, we can't write this business, sorry.
Or you can't deliverthis product.
Sorry.
Or maybe not, sorry.
I dunno.
Right.
It's like, it's a tough position to be in.
How do you wave that flag?
Especially when it's not really "your job".
Lauren Wallace (36:23):
Yeah.
Well, I'll say there's no pleasure in it, right?
It's not, like, some sort of vindictive thing.
And I think we can revert to the shared vocabulary
and tools that we have
to talk about it organizationally in a
more productive way.
And the most old-school way of doing this is your good
old risk matrix, where you've got likelihood going up
one side and severity going across the other side, and
(36:45):
you can have a conversation, a real candid conversation,
about finding your dot in that field, and as an organization
agreeing, well, our line is to the left of that dot,
below that dot,
and so we're not gonna do that thing.
It doesn't have to be an individual thing.
Say, organizationally, we have a risk approach that
(37:06):
our board believes in, our investors believe in, and
our regulators believe in.
And if we don't stand behind it day to day, we don't
wanna have to come back in, in two, three years, and
say, oh, we kind of went outside the lines that time.
And if we went outside the lines because we really
didn't understand what we were buying or what we're
doing in these black boxes,
ignorance is no defense under the law.
(37:27):
Yes.
It never has been.
It never will be.
So I think tone at the top, shared values, and the personal
investment that we have in our privacy and the privacy of
our families and loved ones,
I think that's always a great place to land.
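For anyone who wants the "good old risk matrix" Lauren describes to be something more than a whiteboard drawing, here is a minimal sketch: score likelihood and severity, find the dot, and compare it against a threshold the organization has agreed on ahead of time. The scales, threshold value, and example items are illustrative assumptions, not a standard.

```python
# Minimal sketch of the classic likelihood x severity risk matrix:
# score a proposed use of data or AI, find its "dot," and check it against a
# threshold the organization agreed on in advance.
# Scales, threshold, and example items are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class RiskItem:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    severity: int    # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

# Hypothetical organizational line: anything above 9 gets escalated or declined.
RISK_THRESHOLD = 9

def assess(item: RiskItem) -> str:
    if item.score > RISK_THRESHOLD:
        return f"{item.name}: score {item.score}, above threshold, escalate or decline"
    return f"{item.name}: score {item.score}, within agreed risk appetite"

if __name__ == "__main__":
    print(assess(RiskItem("Upload customer PII to an unvetted LLM", likelihood=4, severity=5)))
    print(assess(RiskItem("Internal summarization of public docs", likelihood=2, severity=2)))
```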
Galen Low (37:42):
I love that.
And you know what I love about it is that part of me was like,
okay, well, are we gonna talk about standing up, when we say
building the muscle around, like, ethical use of AI and
privacy and all these values,
part of me was like, oh, is it gonna be like, okay, well, build
your, you know, I don't care, small startup, build an
enterprise-grade, multi-personnel team that handles risk, compliance, you
(38:04):
know, law, and all that.
They're like, oh, we don't even, like, we'd have to hire, like,
20 people and we'd have to, like, write the book from scratch.
But actually, I think what you're saying, tell
me if I'm wrong, is tone at the top is important
because you set values.
Then having the education within your teams to be able
to say, that sounds like it might be on the wrong
side of our risk threshold,
can we have a conversation about it?
(38:25):
Is the beginning of that muscle. Even if you don't have
a chief legal officer, or if that person is, like, also wearing
three other hats, you know, at your organization, it's still
about having the vocabulary and the culture and the tone
to have that conversation.
The other thing that I found interesting, 'cause I was
gonna press you on this, but the more I thought about
it, the more it made sense.
I said earlier, oh, my listeners are gonna say,
(38:47):
Lauren, we can't have all these people at the table.
We can't bring everyone to the table every time we do anything.
And then I thought about it and I was like, but actually, when it
comes to AI right now, maybe you kind of do need to, because it's
not something that is known.
It's not something where the risk is low.
It's not something where you know what people
are gonna do with it,
(39:07):
you know, left to their own devices. And actually bringing
people to the table helps have that conversation more
than just getting work done.
It's like culture building.
Lauren Wallace (39:16):
Yeah.
And culturally, I don't know an organization that
isn't either going through or contemplating an AI
transformation at this moment.
And you cannot disregard that many individuals in your
organization may be personally averse to the proliferation
of AI in their lives.
They're looking around their room and realizing that
(39:37):
every single thing that they do is touched by AI.
They don't really understand it very well.
And how could you? I mean, no shade.
You cannot understand it.
It's not accessible to you to understand.
And so asking these people, these individual human
beings who may have this personal aversion to AI,
for themselves, for their security, for their families,
for their job security, for the security of their personal
(39:58):
information, to say, okay, shut that off, because we
are aggressively pursuing an AI transformation strategy.
That's, you may get your initiatives off the ground, but
I think they're gonna fizzle out when folks haven't had the
opportunity to develop some fluency and some confidence.
So, I wanna tell you one thing we did at RadarFirst. About
a year and a half ago, we launched a series of monthly
(40:19):
lunch and learns on ethical AI.
Very casual thing.
But everybody, of course, was invited and it was meant to be
an open forum, and we covered one topic in each session, so
we had about 60, 90 minutes each time. We'd talk about human
agency and oversight, or we'd talk about transparency, we'd
talk about accountability.
And for that, I used the baseline EU AI
(40:40):
ethics guidelines.
It was before the AI Act came out, but we had a
lot to work with already.
This stuff's all out there.
So we did a bit of education on how these guidelines
had been developed, where they came from, what were
the baseline sort of human rights principles underlying
each of them, developed some vocabulary about it.
We showed some examples.
This is the most fun, like everybody loves war stories, right?
(41:02):
So we, I pulled out examples.
I don't know if you know the AI Incident Database.
It's a tremendous resource online.
Just look it up.
AI Incident Database.
Like, whoa, shocking,
hot-off-the-presses things that have implicated these
principles of transparency or bias mitigation.
So we talked about the example that was given in each case.
What went wrong here?
What could they have done in the first place to avoid
(41:22):
this, or now that it has happened, what's their
responsibility to mitigate it?
And then we just had an open forum to discuss it,
and the people who came into the room were at every
stage of their personal AI transformation journey.
Maybe by the time they left the room, they hadn't changed
where they were in their, right,
fair enough,
you know, estimation at all.
But when they went back to their desks and worked in their
(41:45):
own AI-enabled or AI-enabling projects, they now had a
framework to operate in, they had a vocabulary to discuss
concerns with their team, or escalate concerns to legal
or to compliance, or to the product team as appropriate.
It was a great experience.
I recommend it to everyone.
It's super fun to do.
It also reinforces that we're talking about
(42:08):
human virtues here,
human values here,
the impact on ourselves, on our families, and on
our planet for the future.
Galen Low (42:16):
Who would run
those sessions if they
don't have a Lauren Wallace?
Lauren Wallace (42:19):
Oh, if you don't
have a Lauren Wallace, gosh.
Well, this comes back to another question, actually, which is who
runs your steering committee?
Which is, you may not have one, but it's nice if you do, and
it's nice if it's extremely multifunctional, to bring in
everybody and bring in people also at various levels of
seniority in the organization,
'cause some of your freshest thoughts are gonna come
from people who haven't necessarily been there very
(42:40):
long or have grown up in their careers very much yet.
But who should run that?
It's a great question.
It's the same question for each of these things.
Is it someone on the product side who might be very close to
how your company is literally thinking about enabling it?
Should it be somebody on G&A who is very close to, how,
looking at how your company is spending money on AI or
(43:01):
anticipating saving money on AI? You do have to have a tone-at-the-top contribution to it.
So, you know, you wanna have an executive who's at least
a sponsor of the thing, but I could see where you could even
share that assignment month to month among different people
and get a little different value every time.
Galen Low (43:19):
I love that, and
I love the, you started out
using the word community, and now I can see it sort of
transpiring, you know, not necessarily the traditional
community of practice, but the act of getting together to
share information, to figure out where there's alignment
or where discussion is needed, and then arm one another with
the vocabulary to continue that conversation in the day-to-day.
Because I, in my head, and probably some of my listeners,
(43:41):
right, it's like, oh, do we need to hire someone who's an
expert in, like, AI and privacy and, like, ethical implementation
of AI to run lunch and learns? Versus, I think, tone at the
top plus an executive sponsor, plus the people who are doing
the work and the decisions that they're making and what
they're thinking about, so that we can have that dialogue.
It is a cross-functional game right now.
(44:01):
It is a multi-function sort of conversation that needs to be
had about the new stuff that we wouldn't have had before.
Because not every organization was contemplating or executing
an AI transformation before, and now it's table stakes.
Lauren Wallace (44:15):
It
came on so fast, too.
Galen Low (44:17):
Absolutely did.
And that's what I like about what you said initially, right?
The organizations and institutions within regulated
industries actually had that muscle already, in some cases,
to be managing this risk and dialoguing about this risk.
Lauren Wallace (44:29):
And
they've made these risk
threshold decisions before.
It's not a new conversation.
So you are talking about getting institutional buy-in
and the authorization to say no to things, where you haven't
already sort of integrated your risk perspective into
what your company does.
That's gonna be a hard problem to solve right out of the
(44:50):
gate on a case-by-case basis.
Back to your point about who should run these things: I don't
think there's anything wrong with bringing in a consultant to
help you establish the framework for how you're gonna have
these conversations, because otherwise, oh my God, you're
gonna talk all day and you're never gonna get everything done.
Galen Low (45:04):
That's fair.
That's fair.
Lauren Wallace (45:05):
But the
people in your organization
are the experts that should be just providing the
inputs to this framework and helping decide how to decide
on what your outputs are.
Galen Low (45:15):
To round
us out, I wanna talk a
bit about the future.
I don't think I'll do the crystal ball thing.
We've been mentioning some of these regulations that have
happened in the past, right?
GDPR, and, you know, there's some conversations right now slash
over the past decade or more about legislation lagging behind
use of things like social media.
We've been talking about how folks in regulated
industries might actually have a leg up because they
(45:37):
have built the muscle to have these conversations
around risk and compliance.
They have the notification systems, they understand
the attack surface.
But then I'm thinking in my head, I'm like,
you don't wanna have to go back and, like, rework
things if possible.
And yet legislation hasn't really caught up with AI either.
And I guess my question is, are all organizations kind of
(45:57):
going into a gray area a bit?
And do we have decades ahead of us of, like, just
making some assumptions and then having the law come
back and change everything?
Or is it maybe flipped?
Will actual behavior now influence how
legislation manifests?
Lauren Wallace (46:12):
That was
a great question, Galen.
Galen Low (46:15):
Just to round us
down with a little question.
Lauren Wallace (46:16):
Yeah.
This is a little bit of a mind-blowing kind
of a question there.
No problem.
Well, let's think about the law for a minute.
Law is just a way of writing down what our shared ethical
principles are and putting it in someplace where people
can find it so they know.
You may not agree with everything that got written
down, and you may not share that ethical principle, but you
know where to go to find out what the community thinks about
(46:39):
this shared principle and how you're supposed to carry it out.
And the litigation process primarily exists to handle
issues when they're novel, they're new, they haven't
come up, there hasn't been time to process them
through legislative cycles.
So looking at case law is super interesting to see
what's actually coming out in the litigation context
(47:01):
for how people are. So, class actions,
we talked about class actions for a few
minutes a little bit ago.
It's a whack-a-mole thing where you see these big
foundation frontier model providers sometimes
winning, sometimes losing.
So there's a lot of breathing room in how these
things are playing out.
But then these litigation outcomes do tend to get
kind of poured back into our legislative outputs.
(47:23):
But it takes a long time.
You know, we say
the wheels of justice grind slow, but exceedingly fine.
So you end up with 180-page-long legislation that intends
to capture every possible scenario, so you don't have
novel outcomes that give rise to a legislative situation,
'cause that's very expensive and time-consuming
(47:44):
and awful for everyone.
That's part of the reason it takes so incredibly long, and
I don't think we really can be looking to our legislators
to act fast on this; kind of don't want them to.
We saw in Colorado, when they came out with their
AI legislation, that they acted pretty fast.
They got something pretty good, and then they had to yank it
because they couldn't figure out how to actually enact it.
(48:07):
So companies kind of started to stand up their compliance,
stand up their compliance programs, proactively before
this rule came out, and then it got pulled, and then
maybe it's gonna come out in June, and then maybe it's
gonna come out after that.
So
you can really get whipsawed by, I think, proactive
regulatory compliance
if you are trying to do it on a point-by-point basis.
(48:29):
If you have developed your tone at the top, if you know
what it means to be an ethical organization, if you know where
your risk threshold is, you're gonna put your legislative
or your litigation risk over kind of on the right side of
that category and say, okay, we're willing to face that
if it comes to that, but we don't know enough right
now, but what we know about ourselves is what we believe
(48:49):
is right and what we believe our customers expect of us,
and we can act on that today.
Galen Low (48:54):
That was a
fabulous answer, by the way.
That is amazing.
Thank you so much for this.
Just for fun, do you have a question that
you want to ask me?
Lauren Wallace (49:02):
I do, Galen.
We've had a very interesting conversation.
We've been talking about really hard and really interesting work
for the last 45 minutes or so,
and talking about how complex our day-to-day is.
And so I wanna check in on your wellbeing and ask you, when's the
last time you took a vacation?
And if you went somewhere, where'd you go?
Galen Low (49:21):
It was a year ago
and it was Mexico and it was
a big huzzah with my wife's side of the family, the whole
family, her sister, my kid.
Not the dog.
Dog stayed home, but it was lovely just kind of
sitting, hanging out.
We got great weather and it was very much a recharge
and a reflection on, like, family, if that makes sense.
(49:42):
Good and bad.
Right.
You know?
And life happens fast and you don't always
pick the directions
things go in terms of health and careers and everything.
But yeah, that was the last time.
We're due.
I actually feel, even as we go through these conversations
around AI transformation, and I think you're right
that, like, there's very few organizations that aren't
thinking about this right now.
(50:02):
Some feel forced to, some are excited about it, but it's
like adding to our stack of things to think about in an
already fast-paced world.
Yeah, there's a lot going on.
Scheduling time, finding the time for things,
finding that balance.
You know, it, it is tough.
I think it's tough for a lot of people right now.
Maybe insidiously, maybe without people knowing it.
Yeah, I think there is, like, a, there's an impact that
(50:25):
I think we need to dialogue about a bit more in the
zeitgeist around, yeah,
what our lives look like now that we are being asked to
kind of drink from the fire hose on a lot of things.
Lauren Wallace (50:35):
Our cognitive
load is imponderable at this
point, and sometimes all you can do is do what you did:
take your people, go someplace warm, and just appreciate it.
I'm glad you did.
Galen Low (50:46):
And temporary.
Yeah.
Flee temporarily.
Lauren Wallace (50:50):
That's great.
Galen Low (50:51):
Awesome.
Lauren, thank you so much for being here with me today.
I really enjoyed our conversation.
For our listeners who enjoyed the conversation as well, where
can they learn more about you?
Lauren Wallace (50:59):
Hit me up on
LinkedIn, we'll go from there.
Galen Low (51:01):
Awesome.
Fantastic.
I will include a link to Lauren's profile in the show
notes, as well as some of the legislation we mentioned
and NOYB.
So check those out and, yeah, I think that's it.
Lauren Wallace (51:13):
Thank you, Galen.
Galen Low (51:15):
That's it for
today's episode of The Digital
Project Manager Podcast.
If you enjoyed this conversation, make sure
to subscribe wherever you're listening.
And if you want even more tactical insights, case studies,
and playbooks, head on over to thedigitalprojectmanager.com.
Until next time, thanks for listening.