Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Pamela Isom (00:21):
This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or professional advice, nor official statements by their organizations. Guest views may not be those of the host.
(00:50):
So hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey. My name is Pamela Isom and I'm the host of this podcast, and we have one of those special guests with us today, Victor
(01:12):
George.
Victor is a leading strategist in regulatory risk and ethics and a principal consultant. Victor, your experience is absolutely incredible. I thought it would be a good idea to spend some time talking to you today and having you share your experiences with us, and so welcome
(01:33):
to AI or Not.
Victor George (01:35):
Oh yeah, thank you, Pam, so much for inviting me on. You know, seeing you, connecting with you and our shared backgrounds and stories has just been a privilege in itself. So I'll just share about myself. I am, as I said before, where ethics, governance and audit meet.
And so ethics is understanding who people are, their cultures. I love our cultural uniqueness, from my
(01:58):
African-American background to understanding who people are and how we think and consider. Then there's the governance part: how an organization thinks, what is its brain, its purpose, what drives it. And then the audit space has been, does it really make sense from a regulatory side? Are you following through, are you doing what you say you do? And so that has been a big part of my career for the last 10,
(02:21):
15 years, and I've worked across US and international teams, partnered with Big Four firms, and so I continue to drive that ethics, that governance and that audit process. I just enjoy it, from cybersecurity to remediation efforts. Now we're in AI, so that's exciting.
Pamela Isom (02:41):
So, if we talk
about your career journey, where
are you in your journey today and where do you see yourself
headed?
Victor George (02:51):
Well, at this moment, in my journey I have my ethics consulting firm, Exterior, also known as Ethics Era, where I consult with companies, whether it be startups, banks, fintech, software, on their ethics, their governance, and our programs can be cybersecurity, anything you
(03:17):
name it, keeping them from being sanctioned by the government, sued, and having negative media, and really helping organizations think ahead of time, before these actions happen. So ethics really, to me, it's not about this morality thing. It's just thinking and considering and mapping it internally with your people so they can understand and move fluidly in your organization, because it does help and it does serve them more, the companies and the individuals within companies.
(03:38):
Most of the regulatory issues I find come down to a lack of psychological safety within the organization internally. Employees don't have it; they don't want to provide documents, they don't want to speak up, they feel their jobs are at risk if they do, and it makes it harder for the organization, it makes it harder for the people there, and
(04:00):
that's how a lot of problems come up. I mean, most of that ends in fines and missed deadlines. So I want to continue to harness that and build that up for organizations, as well as being able to build, as we also talked about before, this equity in our communities as well as in the workplace.
Pamela Isom (04:19):
That's interesting. We had a conversation about an experience that you had that touches on ethics, equity, diversity, inclusion, accessibility and artificial intelligence. Do you mind sharing that with me again? Because I have a few questions about that experience that you had.
(04:40):
So would you mind going overthat with me again?
Victor George (04:43):
Yes, I had one of those "what the F?" moments.
I'll say this: I interviewed with this consulting firm, a GRC consulting firm, small boutique, and actually the recruiter reached out to me, so I was like, okay, you know, I'll check this out. And I made it to the second round, the final round, with the
(05:04):
actual owners of the firm, and it went well. I thought it did. Even the recruiter was like, hey, we're going to have this offer letter going. I was still unsure about accepting it, because in my mind I was like, eh, I don't know whether I will or not. But then the recruiter sends me a message like,
(05:26):
hey, the company, they're just going to move on, because they felt that you used AI to respond to their questions during the interview. And I was perplexed, and still am, because of that association and the questions that were asked during the interview.
But really, my part in this: as you and I
(05:49):
met at a Black AI Think Tank event online, and a few weeks ago I was actually one of the panel speakers there, I'm assuming that could also have been a factor when they looked at my profile. Oh, you know, he knows all about AI. That was also a factor, because you know, this job, that role,
(06:11):
was to me a piece of cake. I showed up as my most authentic self. I didn't do a whole bunch of preparing, reading this and going over that; I just showed up. And you know, there were parts in the interview, and I'm just going to give you an example, where you know my son had a bug bite and he had to go to the doctor's office.
(06:32):
He was having some infection on it. So you know, while he was in school, I thought I saw a notification about that. I'm like, well, let me make sure he's okay, because I was a little bit on edge, and I looked down just like this. I let them know I was just looking at my phone really quickly; my son had a bug bite and I just wanted to make sure everything's good to go. And they were like, oh fine, yeah, yeah, and we went on with the
(06:59):
interview.
There was another time where the interview was stopped, ironically, because of the owners of this consulting firm, the husband and wife who own it. One of them had been in the hospital and they had to take a break; in between, the doctor's medical office called him, and so he said, hey, you know, can we take a few moments? We need to go take this call. So we stopped for a break for probably about five or ten minutes, and then, okay, all right, we came back.
(07:21):
They did apologize, and at that time I understood; for me, grace, because I had just had a 20 or 30 second call myself. Life happens, life is life, and we both laughed about it. The other part that came up in that interview was they asked me, you know, things that I don't even know if AI could create.
(07:42):
Because they asked me, where do you see yourself in five years? And part of my answer was, very explicitly, I want to be in a good environment with good people, collectively. Rather than in a place of politics, I want to be in a place of collectiveness to get the job done. You know, and they were a little bit surprised that that's all I
(08:03):
said. And the other part that came up was that highly intelligent people can have a struggle in this role, because it requires, and as I interjected, it requires understanding people, and at times, people who are highly intelligent maybe can get caught up on the academia side, which is what I like about audit: there's a scope. And like, yes, they were connecting on that.
(08:26):
I gave very specific work instances from my experience, like, you know, building cybersecurity programs, helping organizations. So, you know, I don't know if AI can come up with my experience helping build a cybersecurity program, or the challenges I had with my managers when they were stressed and how I was able to manage that conflict
(08:47):
and, you know, get to a good place. And if you look on my LinkedIn, I have a recommendation about that too, because of that, so it's supported.
So the recruiter also used this word, verbatim, "odd," about their reasoning for that, because his reaction was like, hey, what did y'all do, what did you do in the interview? And just like this, it was a video interview.
(09:09):
I'm looking directly at you. I'm thinking like, huh, what could it have been? But my thinking is, I was very good for the job, I knew the work, I understood it. And the other part was, why would I take this job with my law degree? Because, as I didn't mention, I do have a law degree, and as I expressed to them too, I went to law school.
(09:30):
It was a law school focused on compliance and audit; it had a compliance program, and I like to be solution-based rather than litigation-based in that organizational space. So that's why I did this, and I have courses to support that. That was their other question: why would I want to take this job when I went to law school? And plus, I'm like, hey, you know,
(09:50):
y'all are paying me, why not? And I don't want that stress that lawyers have either, in that law environment. So, you know, I just think about, like, how can AI be a justification for that? And you and I were just talking about that, and I actually did use AI afterwards to ask it for feedback, ChatGPT.
Pamela Isom (10:12):
We're talking about ethics and governance today, and that was an example of very poor governance. It was an example of horrible governance, and I'm so sorry that you had to go through that experience. It also seems to be a misuse of your time, and how
(10:33):
one could jump to the conclusion that you were using AI makes no sense to me. So there's no sense in trying to figure it out. I know that you told me that after the experience you queried one of the AI tools to get some perspectives on how you could have done things better, but before we get to
(10:56):
that, I'm sorry that you went through that experience, number one. And then, second of all, I just think that this is why we want to bring forward the fact that discrimination and just bad judgment is a problem.
It's just a problem in society.
(11:16):
So, based on what you said, I don't understand: how did you make it to the second round of interviews?
Victor George (11:23):
Yeah, the second and final round. It was the second.
Pamela Isom (11:26):
So you made it
through the first round, then
you got called back for thesecond round, and then did you
partake in the second round?
Victor George (11:32):
Yes, yes, I did.
Victor George (11:35):
I'm sure those were the two that made that assumption, those two being the owners of it.
Pamela Isom (11:40):
The firm, yeah. So it was during the second round that it was concluded that you were using AI. And did they explain to you to what extent they thought you were using AI? Did they think you were a deepfake?
Victor George (11:54):
The recruiter was
unsure.
He just said they felt that I was using AI to respond to their questions, and these were the people interviewing me in the second, final round. And in his words, again, it was "odd." He was like, I don't know what to do with this here.
Pamela Isom (12:11):
Okay. So I think that that's definitely a problem, and you, with your legal background, know that that's a problem. They blame their actions, their negligence, their
(12:35):
discriminatory behaviors, their unethical practices on AI. And this is the opposite of what I normally talk about, in that, you know, I've had examples where the AI gave a response and, instead of them taking responsibility for the response, they blamed the AI, which is not an entity,
(12:58):
it's not a legal entity. So that was just foolish. That was like, so foolish. In this case it's different, but it's still an example of what we are experiencing.
And what I don't appreciate is they didn't tell you any rules.
wrong.
So it's just wrong.
Victor George (13:22):
But I'm very concerned, because AI is supposed to improve our lives, so that I can have a good time, a good life with my family, so I can work well and not be overwhelmed. Why is this now a problem? Now we're using AI against us. AI is supposed to enhance our lives, and that is where I'm
(13:43):
really concerned: what if someone else gets accused of this in an interview and they actually needed the job? That's my thing. I didn't need it. That's where I'm at.
Pamela Isom (13:52):
But this is a common problem. This is a common problem in society today, in the school system as well. In the school system, kids are getting accused of using AI, sometimes falsely accused. There have been lots of claims out there where students and young adults, those that are in college, have been falsely accused of using AI. And then they've got these tools that they're running
(14:15):
that are not that accurate, and so then the tools are coming back saying that you have been using AI, and it has caused anxiety in some of our youth. It has caused all kinds of issues. I believe that the school systems, the undergrad schools and colleges and universities, are trying to work through this, but
(14:36):
this is a common problem. The thing is that this has been going on for a while now, so this needs to get fixed.
But this isn't just the schools; this is an example of what humans do, and this is why we have to put governance in place and guardrails in place. I don't like it, but people, humans, will do this, right,
(14:57):
because it's our own prejudices, our own biases getting in the way. And so I don't care what they were thinking; they didn't have facts. They didn't have facts. So you know, this was just wrong, just outright wrong.
And where is the governance in all this? Where's the ethical governance? So this goes back to my concept of ethical governance. How do you tell a person, we thought you used AI,
(15:20):
so we're not going to extend the offer to you? I mean, you can do better than that. They can do better than that.
Victor George (15:29):
And I'll add this too: tell me how AI gives that person an unfair advantage.
Pamela Isom (15:35):
That's what I mean.
Victor George (15:36):
I am. Also, when I say audit, I inject this too: I'm where community meets, because I understand cultural uniqueness and I love it. I connect with people, because most of audit is psychological safety; you get someone to provide their documents. And I gave specific examples. So when someone hears that, is that fake?
Pamela Isom (15:55):
So the second part of this scenario is, you then went to one of the AI tools and put in a prompt, and the prompt said, give me feedback on this interview, and this is what happened, so could you give me feedback? And my understanding is the feedback that you got was off,
(16:16):
a bit off base. And what kind of feedback did you get?
Victor George (16:20):
I'm going to read it; I was reading some of it. Yeah, it said, you know, key takeaways: ensure a distraction-free environment from the outset to set a strong initial impression. Balance structure and continuity: use structured responses but allow for natural conversational elements to avoid seeming rehearsed. Articulate clear career goals:
(16:43):
align your career aspirations clearly with the company's strategic direction. Emphasize practical skills: highlight your practical, on-the-ground experience. And it says, speak specifically. So I asked it what its thoughts were, but its response was basically, you know, maybe you can do this better or that better or that better.
And I actually interjected in this AI tool, ChatGPT, that I was
(17:07):
directly on video. So you know, at first I didn't even say, hey, you know, there may be biases; but when I gave it more information, then it said, well, it could be biased based on this and that and whatnot. But you know, it just wasn't ready to acknowledge that; it was more about what I needed to do.
Pamela Isom (17:27):
Sounds like the tool itself gave the textbook responses. So when you have an interview, you sit up straight, make sure there are no distractions. But that's more of the textbook, and those are things that we should do, but that doesn't fit with the culture in which we live today, where everything is connected, where your social
(17:52):
life is a part of your work life, right? So that doesn't fit. So that's a matter of the data still not being where it needs to be in the AI models, in the repositories, the LLMs that the AI models are using.
But I'm so glad that you brought this up, because I think that a lot of times, people don't really understand
(18:13):
why we say that ethics, equity, these types of biases are so important to identify and address, and they also don't understand how this carries forward into the models, so that mindset, the attitude that you experienced, can carry forward
(18:33):
into the models. There are plenty of instances where things have carried forward.
So what I don't know is exactly why they responded like they did, and we don't want to jump to conclusions. But what I do know is, you don't tell a person, after they're going through the interview and they make it to the second round... I just think it's poor governance, poor ethics, just poor everything.
(18:55):
To tell somebody, well, we didn't extend you the offer because we thought you used AI, that just sounds so weak.
And so I would ask that we, as leaders in GRC and things like that, help organizations understand, help people understand, that we set the guidelines up front and let folks
(19:17):
know, and we understand that you're going to be human. My instinct says you wouldn't have taken the job anyway. So, despite all of that, you wouldn't have taken it, you wouldn't have been happy. If you did take it, you would have been there for a minute and gone. So that's the instinct about it. So, yeah, I appreciate you bringing it up, because the thing for us is to help people understand why paying attention
(19:40):
to how we treat others, how we do that, carries forward as data in the models.
Victor George (19:50):
Right, and I agree. And you know, because I'm human, I can want to second-guess myself. Did I say this right? How can I not be associated with an AI tool? Now it creeps in, you know, subconsciously; that's just stress we don't need. Right,
(20:10):
and you know, because of these AI tools, and we were going to talk about this before, on investigations, which I was going to say could also create bias, now I'll be in my head: did I say this right? Did I do this right? And how can I not be seen as AI? How can I be real, be real but not too real, where I am a threat? So then there's another aspect of that, which
(20:34):
now creates more. Now AI is going to make our lives even more of, you know, that kind of thing, anxiety, stress, and now we're in a workplace with that. And I believe, as you say, AI or Not: if that's part of what it's going to lead to, I'm on "AI not." You know, so, oh, I'm saying AI now, but "AI not" if that's
(20:59):
what it's going to be. I hear you, I understand, it's supposed to enhance our lives, but that's not AI's fault. That is people, and that is the configurations based on that, where you know, those thoughts, concepts, consciousness, ethics, as you say, and I've said this too, ethics
(21:20):
is something we've got to keep watching; we use that term too broadly. Ethics is a set of principles and values, unique from one group or another, and so how we measure against the rules of that person's or organization's ethics should be more considered. Are we including the ethics of a particular culture, group, age?
(21:40):
Are we including the ethics and the principles based in this group? And then I think we'll have more understanding, because the AI tool just basically said, well, we've been taught a lot: you've got to work harder, you've got to do this and that.
Pamela Isom (21:55):
You got to work harder, you got to do this and that. Exactly. It was a horrible experience. It's a horrifying experience, actually, and we just have to think about it. So my mom always used to tell me, you make sure that you don't stoop to the same level. So you be forthright with people. Be forthright. And if they had a question about it, they could have asked you the question, right, if they really had a question.
(22:15):
But honestly, that was neither here nor there, just a weak excuse, as far as I'm concerned, because they had no rules, they had no guidelines. And I don't think anything says that you can't do research before you speak, even though you weren't doing that. I don't think there is anything that says you can't do research
(22:36):
or refer to your notes in the middle of an interview. So that's just nonsense.
I want to make a note.
I'll tell you something.
So one of the things that we've done in my organization is we've created this ethical governance framework, and this ethical
(23:01):
governance framework brings together cybersecurity and AI and privacy principles, right? So we put them all together, and we've got these 14 pillars, and it's called the ethical governance framework, and at the very front of it is values. Where are the values? Because what I feel is that ethical values are not there, right? Ethical values aren't there, and so
(23:24):
being unethical seems to be taking priority for many.
So we lay it all out, and when I get some time, I'm going to share this with you, because maybe we can look at how we convey this even more, because it's a serious matter.
(23:46):
It's not just something to talk about; it's something to pay attention to. But I blend cybersecurity tenets and AI tenets and ethics all together and call it an ethical governance framework. So I'll let you take a look at it, and you can give me some feedback on it.
Victor George (24:02):
Oh, absolutely, I'd love to, because, as you said here too, with a lot of cybersecurity issues, and we're talking about GRC... I mean, CISOs didn't know this. They want to talk about GRC now, but GRC has always been the same. It's just that we didn't understand it; they didn't want to connect or tie the technical aspect to that broader picture of what ethics and governance looks like.
(24:24):
Now they're doing it. And so, no matter how technical we're getting with AI, we're not, like you're saying here, getting to the principles and ironing out and measuring what that ethics we're going to be using looks like. This will be incredibly costly to people such as ourselves who
(24:44):
are in other groups, or people who may not even have the proximity to share their truths.
I've had many people before, when I've interviewed, who spoke English as a second language, had a Southern dialect, had an urban dialect, had a very drawn Southern dialect, and I appreciate that; that's their uniqueness.
(25:05):
Okay, can you do the job?
Pamela Isom (25:07):
Can you connect? Are you kind? Are you like those principles? But you see, in that example that you just gave, let's say that there is a dialect. The AI models need to be able to conduct natural language processing, to process the various dialects, and to draw no
(25:29):
conclusions and recommendations based on that dialect.
and a half ago and one of thefirst things she said to me is
Pam, I think AI is racist.
And I said why do you say that?
And she said because the AI,she uses the voiceover.
(25:54):
So she asked AI a question andshe asked the question in
English, although she's from adifferent ethnic background, and
so she asked the question inEnglish because she has a
dialect.
It responds back in her nativelanguage.
(26:15):
So she's upset because she saysI want you to respond back to
me in English and it respondsback to her audio in her native
language, right?
So she said I think it's racist.
Is it racist?
And I was like no, it's not.
(26:35):
It's the way it was trained,it's the way they train the
algorithms, it's what, it's howthey train the, the models.
And I had I spent some timeexplaining it to her and I said
the thing for you to do is tosend feedback via the tools to
let them know that, hey, this isgoing on and the different
languages are not beingconsidered Right.
(26:57):
So so the different desires ofpeople with with different
dialects and accents are notbeing considered or is being
there, is drawing conclusionsand is drawing the wrong
conclusion.
If I tell the model, I want youto respond back to me in
English.
I don't care if I have adialect, that my accent is some
(27:20):
other culture.
I want it back in English.
Oh yeah, yeah, that's a problem, that's a training issue, right. But the thing is that we have to be mindful. So we talk about diversity of opinions, diversity of perspectives, multi-stakeholder feedback and inputs. That's what has to go into the models, things that we probably
(27:50):
wouldn't think about, when you're testing the models and you're proving them out, to ensure that they're going to deliver without introducing harm. So that's why I created the ethical governance framework.
Victor George (27:57):
Absolutely.
It's just that some of the models and AI tools that I've been testing out, where, you know, companies may reach out to me for their AI GRC tool that audits their cybersecurity processes, I've seen misses in how they reviewed the control design, and I'm like, okay, this doesn't create that psychological safety, that cultural awareness, that the tool
(28:21):
would have. So, even from a lens of cybersecurity GRC, a lot of the control designs, I've found, are inadequately mapped and created, and in where they can customize to the organization's needs. So organizations want diversity; you know, they want that in those tools, so they can have adequate protection and the
(28:45):
right governance in place to protect them from hacking and other threats involved.
Pamela Isom (28:51):
Right. If I think about your experiences and your background, you have a specialty in AI risk management in the anti-money laundering and fraud arena. Can you tell me more about that?
Victor George (29:05):
Yes. So in my experience I have worked on anti-corruption, meaning third-party risk management, anti-bribery, anti-money laundering within applications, the last five years with applications that are on crypto. When you think about cryptos: are these applications escalating the correct transactions
(29:27):
involved? You know, if there's a KYC issue or a transaction made from, you know, a high-risk country, is that being escalated to the system or tool, that this is a problem? You know, when you're onboarding a particular client with AML and there are risks involved, are they being correctly escalated, managed and remediated?
(29:49):
Are all those controls being remediated, covering and managing these AML risks that are involved? Because so much now, we know, with transactions going and
(30:14):
remittances happening through many unique, different means, from cash-out to wiring and all the cryptos, there's not adequate coverage to manage those and safeguard against those risks from an AML standpoint.
Pamela Isom (30:29):
So is your position in all of this to help organizations understand the vulnerabilities in the AI? Or tell me more about how you're working with organizations in this capacity.
Victor George (30:41):
So, without the AI part, I've been doing some data tests. I want to say that, because I do not think that AI is ready yet. Okay? I'm not ready. No, I'm like, nope, because it's going to create more problems, and we're going to overly depend on that AI tool to create a beta or this synthetic auditor who is going to be reviewing the
(31:05):
controls in place, and so I don't think the tools are quite there yet. Now, what I'm doing is, I work directly on KYC and anti-money laundering efforts, investigations, and now I'm working on the applications that review them: configuring them and opening up how the tools work, what information is being pulled in,
(31:26):
whether it's from transactions or from personal information data, how the data sets and the rules are being applied that escalate, that trigger an escalation result, a red flag. So I'm now working in that space, on the data rule sets, but AI is not quite ready yet.
(31:46):
So I work directly with organizations on how they fit, what their service offerings are, and how we can create the data sets that are going to correctly safeguard them on the AML and KYC side, because the government now requires it. You've got to have these safeguards in place.
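As an illustration of the kind of rule-driven escalation Victor describes, here is a minimal sketch in Python. The rule names, thresholds, and fields (for example, origin_country and HIGH_RISK_COUNTRIES) are hypothetical placeholders and not part of any tool discussed in this episode; the point is only that red flags come from explicitly configured rules applied to transaction data, which is why the quality of the rule mapping matters so much.

```python
# Minimal, hypothetical sketch of rule-based AML/KYC escalation.
# All field names, rules, and thresholds are illustrative only.

from dataclasses import dataclass

HIGH_RISK_COUNTRIES = {"XX", "YY"}   # placeholder country codes
LARGE_AMOUNT_THRESHOLD = 10_000      # placeholder reporting threshold

@dataclass
class Transaction:
    customer_id: str
    amount: float
    origin_country: str
    method: str  # e.g. "wire", "cash_out", "crypto"

def escalation_flags(txn: Transaction) -> list[str]:
    """Apply the configured rules and return any red flags they trigger."""
    flags = []
    if txn.origin_country in HIGH_RISK_COUNTRIES:
        flags.append("high_risk_country")
    if txn.amount >= LARGE_AMOUNT_THRESHOLD:
        flags.append("large_amount")
    if txn.method == "crypto" and txn.amount >= LARGE_AMOUNT_THRESHOLD / 2:
        flags.append("crypto_volume")
    return flags

# A flagged transaction gets escalated to a human investigator:
# the rules decide what is escalated; people decide why it matters.
txn = Transaction("c-001", 12_500.0, "XX", "wire")
flags = escalation_flags(txn)
if flags:
    print("escalate:", flags)
```

If the rule set is mapped poorly to the organization's actual risks, the escalations it produces will be poor as well, which echoes Victor's point about incorrectly mapped controls.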
Pamela Isom (32:04):
When it comes to anti-money laundering and fraud, are they using AI at all, or is it more that you don't recommend it?
Victor George (32:16):
I haven't seen it yet. I don't recommend it, not now. I have worked with some service providers on that, and in particular I think it can help with certain, you know, reviews: if you have such a large data set of tools that you're reviewing, it can help assist and give you
(32:37):
guidance on what to look for. But I'm still not comfortable yet solely relying on it, as we would, you know, on a technical tool or software. I'm not comfortable yet with it.
Pamela Isom (32:51):
Okay, I was reading... there's some documentation out there that says between two and five percent of global GDP, that's up to two trillion dollars, is laundered each year. That's a lot of money. That's a lot of money. And it says these cash flows cost financial institutions up
(33:13):
to hundreds of millions annually in anti-money laundering technologies and operations.
So, I think... I know that the technologies are emerging every day, and I've been involved in conversations with people.
(33:34):
You know, during the COVID crisis, there was quite a bit of fraud going on. It wasn't money laundering, but it was fraudulent activities, and they used technologies like AI to help, just because there was so much information that they had to sort through. And this government, well, it wasn't a government agency, it was an agency
(33:56):
doing work for the government, and they were using AI to help look for commonalities and trends and sort through the massive amount of fraud that was going on. So I can see some use cases, but I hear you. I hear you saying, be careful with that, because we don't want
(34:16):
to falsely accuse, but more so, you need it to be accurate. You need your responses to be accurate, and AI is not at the place yet where the responses will be as accurate as we would like for something as significant as risk management in anti-money laundering.
Victor George (34:35):
The part about AML, and I've worked in that, and when I have, I'll say this, it gives me a headache to work with AML and KYC. I'm protecting someone's accounts and all that. It's a headache because the rest of the controls that are in place, and how they're doing it, are incorrectly mapped.
(34:57):
That is the biggest issue. It's very rare, and AML analysts will tell you this, AML experts will find, that you have people who are coming from an audit background where, from an audit standpoint, you are reviewing things in scope and there's such a tight methodology in place. Most times, from an organization standpoint too, the people they're using for KYC and AML
(35:20):
are people who don't really understand it. I work alongside them, and this is where my connection piece comes in, and I'll ask someone who's an AML investigator or analyst, I'm like, hey, what are we supposed to do? What are you supposed to do here? And their answer is, I don't know, I'm just doing what they say. If this number comes up, I just escalate this here. So the why part, in a lot of these parts, they just don't
(35:42):
know. And so that's, to me, where I find the risk. So I think over time we overanalyze and we overconsider, and it becomes a place of connecting that, again, like I said, with the people, those frontline workers and mid-management and the big picture; they're just not correctly creating that program for their clients.
(36:05):
And the fact is that very few people understand audit in the way of the methodology, because it's a way of thinking; for me, I would say, in my own experience, maybe 10%.
Pamela Isom (36:20):
Yeah, I'm not sure I would want an AI agent as an auditor. So I hear you. I mean, that's what I do for a living: I audit AI solutions. And I'm not worried that AI is going to replace me at all, because there's no way an AI auditor can do the things that one would require of a human being.
(36:41):
So I understand, I understand where you're coming from. Are there any words of wisdom or experiences that you'd like to share, above and beyond what you've already gone over today, that we can take with us?
Victor George (36:59):
My grandma would say, you know, seek to understand and then to be understood. And, you know, another part for me is, I say no ethics, no accountability, and it's not N-O "no," it is K-N-O-W: know ethics, no accountability. I think most things will flow more fluidly, because most things are a misunderstanding.
Pamela Isom (37:19):
I believe your example today was a pretty good example of what happens when ethics are not considered, right? Because, yeah, that was just nuts.
Pamela Isom (37:37):
So I appreciate you being on the show, and I'm so glad that we've met. Absolutely. And yeah, thank you for being on the show.
Victor George:
Oh, thank you so much for having me. I'm glad I got to share with you today, and wonderful, with the listeners as well. So I appreciate you for doing this, for having me on.