January 20, 2025 44 mins

Mitigating AI Risks

Ryan Carrier is founder and executive director of ForHumanity, a non-profit focused on mitigating the risks associated with AI, autonomous, and algorithmic systems.

With 25 years of experience in financial services, Ryan discusses ForHumanity's mission to analyze and mitigate the downside risks of AI to benefit society.

The conversation includes insights on the foundation of ForHumanity, the role of independent AI audits, educational programs offered by the ForHumanity AI Education and Training Center, AI governance, and the development of audit certification schemes.

Ryan also highlights the importance of AI literacy, stakeholder management, and the future of AI governance and compliance.

Chapter Markers

00:00 Introduction to Ryan Carrier and ForHumanity

00:57 Ryan's Background and Journey to AI

02:10 Founding ForHumanity: Mission and Early Challenges

05:15 Developing Independent Audits for AI

08:02 ForHumanity's Role and Activities

17:26 Education Programs and Certifications

29:21 AI Literacy and Future of Independent Audits

42:06 Getting Involved with ForHumanity

About this podcast

A podcast for Financial Services leaders, where we discuss fairness and accuracy in the use of data, algorithms, and AI.

Hosted by Yusuf Moolla.
Produced by Risk Insights (riskinsights.com.au).


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Yusuf (00:01):
Today we have a special guest, Ryan Carrier. Ryan is the founder and executive director of ForHumanity, a non-profit dedicated to mitigating the downside risks of artificial intelligence, autonomous and algorithmic systems. ForHumanity's mission is to examine and analyze the downside risks associated with the ubiquitous advance of AI and automation, to engage in risk mitigation, and ensure the

(00:23):
optimal outcome for humanity. With over 25 years of experience in financial services, Ryan brings a unique perspective to AI risk management and governance that's directly relevant to financial services professionals. Welcome to the podcast, Ryan Carrier.

Ryan Carrier (00:40):
Thank you, Yusuf.
It's great to be here and talk to my former brethren. So, happy to do it.

Yusuf (00:45):
Excellent.
So, we've been working together for some time now, but for the benefit of listeners, can you tell us about ForHumanity, and how it came about?

Ryan Carrier (00:57):
Sure, and my background in finance is very much a part of where that came from. So, as you mentioned, starting back in the 90s, I worked for the World Bank, Standard and Poor's, and I actually ran a commodity trading desk for Macquarie for about five years. And then when the financial crisis came along, our division was cut by Macquarie.

(01:19):
And so I started my own hedge fund at that time, using a lot of that commodities knowledge and creating a quantitative trading strategy. And notably, applying artificial intelligence to the portfolio management of that hedge fund. And so now you get this process coming into the 2010s, from my perspective, of, you know, sort of quantitative strategies,

(01:41):
but also an awareness, understanding and usage of artificial intelligence. And so the hedge fund was something that I survived. And I think most people recognize what that means, but I say it to everybody so that they know, you know, I'm not sitting on a big pool of capital, right? I didn't, you know, have a successful hedge fund career,

(02:03):
make myself super wealthy, and I'm just kind of kicking back and giving back to society. I survived the hedge fund. So what that meant in 2016 was that I had to close my hedge fund. So I'm closing the hedge fund. As a lot of people might know, you know, it's not a full-time job at that point. So I've got some time on my hands. I know artificial intelligence, and anybody who was kind of

(02:26):
paying attention to AI at that time, we had problems with Facebook influencing U.S. elections. Microsoft, you know, released this chatbot that turned racist within 24 hours of release. And we were having a lot of problems with AI, or with social media, or the use of these tools and how they

(02:48):
were impacting society. And I don't mind telling your listeners or anybody I run into that I got scared. Sufficiently scared when thinking about the future for my boys. I have two boys. Back in 2016, they were four and six years old, so I was kind of projecting their future out and thinking about all of these impacts from technology, and I got scared enough that I started

(03:10):
a non-profit public charity with no funding and no plan. It was really just that mission statement that you read. And a lot of that is informed by finance, right? Specific terms like downside risk. Those aren't terms that are really used outside of finance, but they're really meaningful in finance in the sense that we recognize that reward comes with taking risk.

(03:33):
In finance, right? So what we want to do, or what we're going to focus on at ForHumanity, is really just look at those downside risks, those negative impacts, those detrimental impacts to society, and see if we can't mitigate them as best as possible. So that's why we examine and analyze those downside risks, and then we're going to engage in the maximum

(03:53):
amount of risk mitigations. With the theory being, if we mitigate as much risk as possible in these tools, then we maximize the benefit for humanity, and that's where the overly ambitious name of the organization comes from. But not having a plan, now I had to figure out what the plan was. So starting that mission, I wrote a lot of words back

(04:14):
in late 2016, early 2017 about the future of work, technological unemployment, rights and freedoms in the Fourth Industrial Revolution, data, should we own our own data? Is that part of the solution here? Transhumanism, which is a term your listeners might not be familiar with, but it really means taking machines

(04:34):
and crossing the very critical barrier of our skin. So putting machines into our body. And the reason I bring that up as an example is that's actually an enormous challenge to our species, as we begin to deal with augmented humans and un-augmented humans and how we'll deal with each other in society. And unfortunately,

(04:57):
I have to say, I don't think that's going to go very well. So in looking at all of those things, as your listeners will hear, these are enormous challenges, societal challenges, generational types of challenges. And so in thinking about those, I realized, well, wait, I need something that we can do today. What can we get started at today?

(05:18):
And I saw a lack of trust, a lack of transparency, a lack of what I would call an infrastructure of trust built into these tools. Really no risk management culture whatsoever, coming from finance where everything is risk management culture. And so what I realized or recognized is that

(05:39):
independent audit of financial accounts and reporting is an amazing way, through a 50-plus year track record, of building up an enormous amount of public trust in opaque corporate behavior, translating the numbers into 10-Qs and 10-Ks so that the public can possibly understand what is

(06:03):
happening with these companies, whether that happens through analysts or people writing reports or now podcasts and all sorts of other ways that people examine companies today. It starts often with understanding the numbers about these entities, and that leads to this infrastructure of trust of the goings-on of these companies. And so the idea is, well, we need that same

(06:25):
understanding about how AI, how algorithmic systems, how autonomous systems operate. Are they built in a compliance by design kind of manner? And so the theory is very simple: replicate financial audit. And the terminology that we use is independent audit of AI systems.

(06:46):
Now, you're not doing, you know, balance sheets and cash flow statements and, you know, things like that. Instead, what we're looking at is risks associated with ethics, bias, privacy, trust, and cybersecurity. So it's different disciplines, but the principle of the audit function, and what is being audited, is very similar.

(07:10):
And so we've been doing that for the last seven and a half years; I first wrote about that in 2017. Look, audits and technology have been around for decades, so I'm not pretending that that was anything new. But the idea of taking the financial audit and adapting and adopting it to artificial intelligence, I do think I was the first person in the world to make that sort of statement and claim, and that was in June of 2017.

(07:33):
And ForHumanity has been developing that ever since, and so we've come a long way from just me sort of screaming in the wind back then to 2,500 volunteers or more from 98 countries around the world. We've drafted audit rules for more than 7,000 effective risk controls,

(07:54):
treatments and mitigations. And we work with governments and regulators in many jurisdictions around the world. So hopefully that's some decent background.

Yusuf (08:01):
Yeah, thank you.
So, in terms of what ForHumanity now does, am I correct in saying that primary activities are around education and drafting audit criteria for certification purposes?

Ryan Carrier (08:17):
That's correct.
So I'd start by saying we aim to support that infrastructure of trust called independent audit of AI systems. Our role is to replicate, and these are acronyms that your listeners might know, but most of the rest of the world doesn't, which is FASB and IFRS. So Financial Accounting Standards Board, International

(08:39):
Financial Reporting Standards, the two independent bodies, who are industry bodies that established global accountancy rules, so GAAP and, of course, IFRS, and those are the standards by which 10-Q and 10-K audits are conducted. ForHumanity aims to play a similar role to FASB and IFRS

(09:00):
with one major enhancement. The primary certification body for individuals in the financial audit world is the AICPA, right? Accrediting people as CPAs. I don't see a reason that those two groups should be separate, FASB and AICPA, except that AICPA existed before FASB, if I remember my history correctly.

(09:21):
So they were already separate to begin with, whereas when ForHumanity started this process, we said, well, we can help to establish what the rules are in a consensus, crowdsourced, transparent process that we welcome all people to join. And all corporations to join. So we aim to establish the rules, submitting them to

(09:42):
governments and regulators where appropriate and where they're interested in endorsing them. And then the other role is to train individuals on those criteria, and then further, we facilitate the business of this world by licensing all of our intellectual property for auditors, pre-audit service providers, technology providers, and other kinds of teachers.

(10:04):
So that's the main role that ForHumanity aims to play. We do this as a non-profit public charity, just trying to facilitate and foster what is eventually going to be a world of maybe as frequent as annual independent audits on AI systems.

Yusuf (10:21):
As part of that, a big chunk of the effort that you would be involved in, with the sizable volunteer pool that you now have, is developing those audit certification schemes and the criteria that go within them. Can you explain to us what that process looks like?

Ryan Carrier (10:40):
Sure.
So I'm going to use GDPR as an example, and I'm not sure how familiar, you know, certain members of your audience may be. GDPR is the General Data Protection Regulation. It is the primary driver of data privacy and protection around personal data, based out of the EU.

(11:00):
It's kind of the first major law, maybe aside from certain cybersecurity laws, but kind of the first major law that really impacted the AI space. Although back then it was more about personal data than it was about AI, even though AI was starting to generate a need for this data through profiling and systems like that.

(11:21):
So in that particular case, that law allows for the creation and adoption by governments, national data authorities, of certification schemes. A certification scheme defines a scope, what can be certified under that scheme, and then the set of rules that are applicable and appropriate.

(11:42):
Now, there's no real rules governing how those certification schemes are built. However, there are standards. ISO provides some standards that many follow, but one of the key things that we would highlight is that standards work is often too general. It's not specific enough to allow a third-party, at-risk

(12:03):
auditor to conduct an audit. So often what is conducted is assurance, which is a similar kind of process, but it has less structure and formality to it. Assurance versus a full audit. What we find is that the audit creates the most robust assurance of compliance and conformity.

(12:26):
And so what we aim to do is take the law, like GDPR, or take the law, like the EU AI Act, or take the law, like the new privacy law adapted to AI in Australia, and what we aim to do is create binary, compliant or non-compliant rules. And what we mean by that is, we work in a team, we work as a group, and

(12:50):
we look at what the law says, and we translate that into, well, what criteria would prove that someone is able to uphold that aspect of the law. It could be a duty or an obligation, whatever it might be. So we use our words, we use language, to try to identify the specific detail that would allow someone to say, yes, I have

(13:13):
complied with that aspect or that requirement under the law. So, in the case of GDPR, one example that we have is that they have to provide the right for any data subject to come and access the data. And so when they access that data, they put in a request, and then they might look to correct it, or rectify it, that's

(13:35):
the technical terminology, or they might want to erase it, or they have other rights as well. Well, that process is only defined as a process in the law. Those are rights granted to the data subject. Well, what we do is we get into the details to say, well, in granting those rights, the law has also said the process needs to be things like transparent and accessible.

(13:58):
Okay. And so what we do is we actually define what that process looks like to meet the obligations under the law. And then we ensure that the rights are upheld through that process. And so what we're doing is getting into the details. And in that particular case of GDPR, it's 258 binary audit rules that are established, and then

(14:19):
we've submitted that to the European Data Protection Board, which is the body that oversees all of the 27 national data authorities. And they have the right to approve or uphold that certification scheme, or not. And we'll see, we're in the middle of that process and have been for the better part of 18 months.

Yusuf (14:39):
Okay, and those audit criteria will be a combination of standardized criteria across the ForHumanity certification schemes, based on ensuring infrastructure of trust generally, and then it would be items from the individual regulations that need to specifically be audited.

(14:59):
Is that right? So there's two sets of sources for the criteria. One being the standard ForHumanity criteria, what we expect from all AAA systems, and the other being what does this specific regulation require?

Ryan Carrier (15:12):
Yeah, and again, it's a great question, and it's a great area to be specific on. A lot of this is built, or structured, from that financial history that I have. In finance, we have robust governance, oversight, accountability. So, I might be in charge of doing something, but there's going to be people who validate my models, or there's going

(15:34):
to be people who oversee the risk that I might trade or use, or the risk budget. There might be compliance and attorneys who want to see my documentation, right? So everybody has their duty and their responsibility. So we start by establishing robust governance, oversight, accountability, and that starts at the level of what's known as top

(15:56):
management and oversight bodies. Top management and oversight bodies: CEO, chief risk officer, chief technology officer, and the board of directors. They have duties in regards to the responsible oversight, governance and accountability of AI systems. So they will have some duties in our audit criteria,

(16:18):
and those aren't specific. Well, sorry, sometimes those are specified by the law. So sometimes the law will say, look, you know, I'll give you an example. In GDPR, the person who's in charge of data protection is called the data protection officer. And the law actually specifically states that that DPO, data protection officer, must report to the

(16:38):
highest level of management. So that's an example of where they do specify a role for top management to play, which is a reporting structure to the DPO. Most times they do not do that, but sometimes they do, and we want to be prepared for that. So we do have, just as you kind of alluded, right, governance, oversight, accountability functions we need to build in, and then we have how do we comply with the specific law.

(17:02):
The other side of that coin would be a privacy policy dictated by GDPR, or dictated by the Privacy Act in Australia, and the idea that what goes into that privacy policy is specifically laid out in the regulation. At a minimum, you have to have, you know, your categories of data. Who are the recipients of data?

(17:22):
How can they contact the data protection officer? So on and so forth.

Yusuf (17:26):
While we're talking about that, can we talk about the specific education programs that you have, that aim to make better auditors and certify auditors? That starts with the Foundations program that is required across the board. Can you tell us a little bit about that?

Ryan Carrier (17:43):
Yes, so ForHumanity, 3 years ago, almost 3 years to the day, this is January 15th and it started on January 3rd of 2022, so, yeah, a little over 3 years. We established what was originally called ForHumanity University. University is not a protected term in the United States, but it is in many other jurisdictions. So I didn't realize that.

(18:04):
So we just called it ForHumanity University, trying to educate people. And so we've now changed it to the ForHumanity AI Education and Training Center. So that's the official name. It is a fully online set of courses that people can access through a platform called Moodle. There are 9 courses today. There will be more courses coming this, well, spring in the

(18:26):
Northern Hemisphere, and those courses range from as short as 3 weeks to as long as 9 weeks. They are 30 minutes per lecture, and they come with associated quizzes. The opening course, the course everyone should take first if they're interested in learning more about

(18:46):
this infrastructure of trust, is called Foundations of Independent Audit of AI Systems. It starts with something as very basic as understanding what an audit is. You know, what are the foundations of this theory of independent audit of AI as it relates to this 50-year track record in finance of independent audit of financial

(19:07):
accounts and reporting. So we spend a lot of time sort of teaching that background, but also teaching the lexicon, the language of audit. Not everybody's familiar with what audit looks like, right? Not everybody's familiar with the difference between an assessment, assurance, and an audit. What are the roles? What are the liabilities? Who's responsible for what?

(19:27):
Who does what to whom in this ecosystem? So we explain all that. It's a five-week course. All of our courses are free, I guess I should mention. So it doesn't cost anybody anything to just check it out, right? You can go to forhumanity.center, which is our website. You can register for a student account and you can check out the courses. They're YouTube lectures, roughly 25 minutes long, and

(19:51):
you could do two or three and go, wow, this is interesting, or wow, this is boring, I have no interest in this, right? But it's easy to check out, right? Low barrier to entry. Foundations is a five-week course, as I mentioned. It gets people started in the space, and then every Friday we offer a certification exam. So anyone who's passed, there's quizzes associated with each lecture. Anyone who pays attention to the lecture will pass the quizzes.

(20:14):
It's a verification of knowledge. So then every Friday people can take an exam to say, I have earned this knowledge. Anybody who passes the exam receives a certificate from us that they are certified in Foundations. That does have a cost associated with it. So the knowledge is free. The course is free. If you want to sit the certification exam, it starts at 100, but then there are discounts applicable.

(20:38):
So once someone has Foundations, then they can begin to get into the more specifics and the details. The next level up is kind of our equivalent to a CPA. So someone who's taken Foundations and who says, I want to get that GDPR knowledge. I'm involved in GDPR and I want to help my firm have compliance, or I recognize that there's going to be a growing need for

(20:59):
audits of GDPR compliance and I want to provide audit services, or I work for PwC or KPMG and my firm has asked me to get information on how I can learn how to do audits of AI systems, right? So they could go and they can then take our GDPR course, or our EU AI Act course, or any one of six courses built on the

(21:22):
back of laws and regulations that are being developed and passed all around the world. So those six courses: GDPR, EU AI Act, something called the Children's Code, which governs basically how kids interface with personal data online. So it governs the nature of that, and it's based on a UK law. There's the Digital Services Act, which

(21:43):
governs online platforms and online search engines. There's disability inclusion and accessibility, what organizations should do to meet a lot of their disability or their anti-discrimination or non-discrimination laws. And then the last one is based on a very specific law in the United States called the New York City Automated Employment

(22:04):
Decision Tool Bias Audit Law, which requires anybody who's providing automated employment decision tools to bias audit some of their work. It's a very, very small law, and it actually shrunk since it was first proposed. But we have a course for that as well. Then also, on top of that, we have two expert certificates,

(22:25):
one on risk management and one on algorithm ethics. Risk management is what it seems, which is how do you manage the risk, build processes, very similar to risk management in finance, around managing the risks, but they're multidisciplinary risks around the use, implementation,

(22:45):
delivery of AI, algorithmic, and autonomous systems. Algorithm ethics, however, is something that your listeners likely would not be familiar with, which is the idea that these tools are socio-technical tools. And when I say socio-technical, what I mean is, like, when you use a calculator, you punch in the numbers and the

(23:07):
machine produces the output, but nowhere in the process was the human involved in the process of that calculation, right? It was a tool that spits out a result. Well, with artificial intelligence, a lot of times humans are in the equation. Their personal data is drawn in for recommendation engines, for example, or for credit loan applications, lots of

(23:28):
ways, for biometrics, right? The human is part of the equation of the assessment process. So, the result of those tools is that there are many, many instances of ethical choice built into the design, development, deployment, and management,

(23:49):
monitoring, even decommissioning of these tools. And as a result of that, we see all these instances of ethical choice, which we call algorithm ethics. And we recognize that there weren't people in the world who were trained to do this work. And so we've begun to do that training. That course has been around for two-plus years now, and

(24:10):
we'll grow and expand into a full certificate for ethics officers trained to engage in the managing of these instances of ethical choice on behalf of their organizations. So the sum of all of those programs, which is 9, is our body of coursework in the AI Education and Training Center. And people can sign up for any of those, all of

(24:32):
them, as they might like.

Yusuf (24:34):
That Foundations course that we started with, and, say, Algorithm Ethics: while the Foundations course is called Foundations of Independent Audit of AI Systems, there's a lot else that goes into it beyond audit, to be able to audit. So it's not just about audit concepts, and it goes into five core pillars, being bias, privacy, ethics,

(24:59):
trust, and cybersecurity, plus the audit process. And so, whether you're interested in actually auditing or not, that course could almost be foundations of AI governance, governance of AI systems, or control of AI systems, if you like. It is really applicable to a broader audience than just

(25:20):
people that want to do audits. And then when you go into things like Algorithm Ethics, that will be for auditors, yes, but also for those people that want to be involved in ensuring the ethics of algorithms in a deep way. And so, the courses may have audit in their names, but they're not necessarily restricted to people that have a desire to conduct audits.

Ryan Carrier (25:45):
That's correct.
I can describe that in another way as well, right? People ask me regularly, who should take these? And they also ask me, well, what does the business of auditing AI look like? And you'll see why it's important to make this distinction right now. There is no business for auditing AI today. Why? Because most organizations aren't ready for

(26:05):
their audits yet. They have not built themselves to be compliant by design. They've not put in place the processes, the procedures, the infrastructure to create a compliance by design environment like we have, through COSO, with financial audits. So really what we have is a pre-audit world, but also

(26:30):
when you have a pre-audit world, you have pre-auditors, advisors, consultants, people who are knowledgeable, lawyers, right, plugging in. But also you have people inside the companies going, well, I want to know what our duties are. I want to understand what my responsibilities are. And they have just as much merit in taking these courses as somebody who wants to become an auditor.

(26:52):
And so you're absolutely right. It isn't just for auditors. It is for anyone who wants to know and understand implementable, auditable criteria for achieving compliance by design solutions. And in addition to that, it could be people who are interested in teaching this as well.

(27:13):
I often joke with groups who come to me and are interested in teaching. I'm like, well, my English is great, or maybe some people think it's great. Yeah, I do claim to be an English to English to English to English translator, so I think I have skill in such a thing, but my Portuguese and my Spanish and my French and my Italian and many other things are terrible, and I will never be able to teach them.

(27:34):
Therefore, I need teachers. We need teachers. ForHumanity needs teachers who will eventually teach all of these courses in many, many different languages for people, and translate our work into others. So we want to encourage other people, and we encourage everyone around the world, to commercialize all of our work through our licensing program.

(27:55):
You want to be an advisor? It's a higher risk for you to say, well, I'm telling you what my advice is. Whereas you might say, I'm telling you what ForHumanity, this non-profit public charity of 2,500 people from 98 countries with 7,000 risk mitigations already drafted, says. Here's what they say, and I can advise you on

(28:17):
how to implement that. It is a different sale, right? It is a different approach on how to provide advice to these sophisticated organizations, in a field that is just beginning to grow. So, you know, there's a lot of ways that the knowledge and information of what it means to be compliant with laws, regulations, best practices

(28:41):
and standards, there's a lot of people who can benefit from that knowledge.

Yusuf (28:47):
Okay, excellent.
And I'll let that English to English to English thing go, because I know your English is American English, just because you spent a few years in Sydney and were driving on the correct side of the road rather than the right side of the road at that point.

Ryan Carrier (29:00):
I do know the difference between a
footpath and a sidewalk.
And a bin and a garbage can.
So I have a decent start.

Yusuf (29:08):
That's good.
In terms of financial services leaders, and you know, we're talking about people that are interested in ensuring that they understand what needs to be done and do it, and that's important, what sort of skills should leaders be developing themselves, or helping their teams develop, to stay ahead,

(29:30):
well, at least abreast, but ideally ahead, of AI governance expectations, today and coming?

Ryan Carrier (29:38):
I have a kind of a rigorous list.
It's a little bit too long for me to recite, so I'm going to generalize them, but let's assume that your question was focusing on, again, top management and oversight bodies. What are their duties, in terms of, you know, we, in our organization, we're deploying AI? So, number 1: governance, oversight, accountability.

(30:00):
Number 2: proper resource allocation. It is not sufficient to talk and not do, in terms of budget, time, and infrastructure. Right? Putting in place the right resources to properly manage these risks. You have to have data integrity. You have to have good data. Garbage in, garbage out has great meaning for

(30:24):
artificial intelligence. But not only do you have to have good data, so system integrity, but you also have to have the robust technical infrastructure to support that. You must have risk management. You must have quality management in place. You must think about all stakeholders, not just your organization.

(30:44):
It was interesting when we first were getting involved with NIST, as NIST was going from their risk management framework, which they had had for years, and cybersecurity risk and things like that, and they were expanding into AI. We were one of the leading voices basically saying, look, the biggest risks here are to people impacted by these tools,

(31:05):
where bias exists. So you need a 360-degree perspective of stakeholders, whereas NIST previously only thought of risk at the organizational level. So defining your stakeholders, figuring out what your duty to those stakeholders looks like, ensuring that you have diverse input and multi-stakeholder feedback, and

(31:26):
taking care of vulnerable populations and how they may be impacted by these tools. We also require human oversight. So humans should always own, or be responsible and accountable for, these tools, depending on how involved they are in the process. That's human oversight, and that can be defined on a case-by-case basis.

(31:47):
There should be transparency. There should be supply chain management. There should be training and education. And then finally, there should be processes for decommissioning. There's a list of 20 for what we call top management and oversight bodies. I can get into more detail for anybody who's interested. I think we even have some of our PowerPoints on the website that walk people through this, but those are the kind

(32:07):
of things. It's not just about strategy and the benefits of the tool, and here's what the cost, meaning the financial cost, is, and can we implement it? There are significant and meaningful risks to brand, to ethics, harms to individuals, by not thinking about a robust approach that starts with governance,

(32:30):
oversight, accountability. Does that answer the question, Yusuf?

Yusuf (32:33):
Yes, thank you.
To extend that a little bit, so we've got top management and oversight bodies and expectations that exist of them. And then the EU AI Act, which is the first that is broad across high-risk AI systems. There may be others here and there, but that's probably the largest, longest one that everybody's talking about. And Article 4 of that Act talks about AI literacy.

(32:55):
And so there's been a significant effort within ForHumanity to help enable AI literacy expectations to be met. What does that look like?

Ryan Carrier (33:06):
Sure, so, let me mention just a couple of quick things about the EU AI Act. Many of the provisions don't come into enforcement until August of 2026, which is a little bit more than a 2-year runway from when the law was enacted. However, one of the provisions applies to all artificial intelligence, and not just high-risk AI, which most of

(33:28):
the law is about high-risk AI, but this applies to all artificial intelligence. It's called Article 4, and it's called AI literacy. Providers and deployers. Those are the two main descriptions about what an organization is, of AI. So providers and deployers

Yusuf (33:46):
Providers and deployers are basically people that build the stuff and people that use the stuff.

Ryan Carrier (33:51):
That's correct.
Yeah.
Good.
Good translation.
So providers and deployers of AI systems must engage in the training of AI literacy. Now, the rule under Article 4 says you must train your employees, the people who work directly on the AI system, and even your leaders, based on their expertise, based on the context

(34:14):
of the system; you must train them on what AI literacy means. Elsewhere in the law, they affirm that that AI literacy also has to include training the people who use the system. So as a result of that, what ForHumanity has done in support of AI literacy is we basically just finished

(34:37):
defining five personas. So the first persona: moms and pops, sort of a retail audience, the people impacted by the system, whose AI literacy will be very low. The second are all the employees of the organization. There is no distinction in what the law calls for, that, you know, certain staff don't have to be trained. It basically says all your employees.

(34:59):
So we treat them like moms and pops. We treat them like retail, effectively, because it could be, and I don't mean this pejoratively, it could be the cafeteria worker and the janitor, technically speaking. So those people are being trained at a very basic level. That's personas one and two. Persona three are the people who work with the AI system.

(35:22):
So now we're going to train more on the context. We're going to train more on the risk controls, treatments, and mitigations, processes of governance, oversight, and accountability for that tool. The fourth persona is top management and oversight bodies. Now, if your organization only has one tool, then what persona four looks like is going to be very similar to persona three.

(35:43):
But if you're Google, where you might have a thousand AIs, the training that goes on for top management and oversight bodies is very different than the training of an employee who works on a single AI. So we allow for that difference, to basically say, you need to understand what's the main infrastructure that is put in place to govern and operate these risks.

(36:06):
A little bit like what we talked about before. The fifth, or the highest level, are what we call AI leaders. These are the groups that are responsible for owning how these tools are designed, developed, and deployed. And here, what we find is we don't need to focus on benefits of these tools. They're already sold on what the benefits are, often

(36:28):
so much so that they've actually neglected the risks. And so really what we're focused on with them is what are the risks, harms, but most importantly, have you identified all of your stakeholders? Have you thought about who is impacted by your tool? And then do you have a process for assessing what is your duty?

(36:49):
Are there vulnerable populations in those stakeholders, and what risk controls, treatments, and mitigations do we need to be putting in place to mitigate common risks, common problems, either associated directly with the tool or in general, that may not have been dealt with in the tool itself? In all of these five cases, what we're doing is we're establishing the learning objectives, and we're making

(37:11):
those learning objectives available to groups who might want to create the training programs on behalf of a company, or two companies, or seven companies, or whatever it might be, because it all has to be context-driven. So ForHumanity could never do it. We simply want to support hopefully hundreds and thousands of people who will help advance AI literacy. We, however, will play our role, which is to advance

(37:35):
generic AI literacy and make that available freely for that retail kind of audience. And we will give that away. We're just beginning that process. We're seeking funding to help us do that. And what we want to do is we want to augment what all the corporations are required to do. We want to give them our resource to say, and by the

(37:56):
way, if your users, if your employees, want to learn more, that's not your duty to train them, but ForHumanity is more than happy to give you our resources to do that. So that's the role we think we need to play in AI literacy.

Yusuf (38:09):
That's great.
And so that'll hopefully be quite wide-reaching in terms of lifting the bar around what people know and understand about these systems and how they operate. How do you see the role of independent audits evolving over the next couple of years? I know there's been quite a bit happening in the last few, and then a few regulations that have come out that have

(38:30):
started to accelerate the need for independent audits. Where do you see that going in the next couple of years?

Ryan Carrier (38:36):
Well, in the very long run, I see it replicating or looking very similar to independent audit of financial accounts and reporting. So a very similar ecosystem to financial audit. But that's a process that's going to have to grow over a decade. And we actually see the recognition of that maturity in the EU, as an example. When they first

(38:57):
put out the law for the EU AI Act, they only called for a conformity assessment. So they asked the European standards body, and that's who's working on this now, to create standards, and then they required that their notified bodies, a lot of the testing and evaluation bodies in Europe, would be able to do a conformity assessment based on those standards. That was their

(39:19):
only choice for verification. Why? Because there's no ecosystem for this verification otherwise. And there is a robust, product-centric ecosystem for these notified bodies and these conformity assessments. So it made a lot of sense to treat this as, you know, kind of a product liability type of an approach. And so they're doing that.

(39:41):
It made no sense to call for an audit. Why? Because even by 2026, there's just not going to be enough people out there who could conduct these audits, enough firms with enough groups trained. It's estimated that there's 8 million AIs that would have to go through this process. Just no possible way, right? So I see it growing. I see the ecosystem evolving.

(40:03):
I see procurement, and the process of deployers wanting to buy from providers, I see that influencing the need and demand and the value of independent audit. There is nothing more valuable for a provider of a tool, if they're one of five providers, and they show up to

(40:23):
a procurement contest, an RFP, let's say, and four of them show up with nothing, and the fifth one shows up with an independent, third-party, validated audit, with a reputable set of audit rules built in, conducted by reputable auditors who are trained and certified, meaning they have the ability to say, we are compliant

(40:44):
with the EU AI Act. We are compliant with the Privacy Act in Australia. Whatever it might be, they're able to deliver that information. Whereas with the others, you know, you're taking their word for it. That is an enormous competitive advantage in that RFP process. And so I see deployers, I see the procurement process, I see voluntary compliance actually generating a

(41:07):
lot of the early demand. If we think back to financial audit in 1972, '73, when GAAP was created, it was voluntary adoption by corporations that allowed the US SEC in '75 and '76 to pass a law that said you will follow GAAP if you're going to be publicly traded. It was because everybody was already following GAAP.

(41:29):
It was kind of a no-brainer, right? That was through voluntary adoption. We see the same sort of process taking place. And in addition, we're about to launch the equivalent of regulatory sandboxes. We call it an AI policy accelerator, where we will begin to get voluntary adoption, for the reasons I already mentioned, from organizations bringing actual, you know,

(41:50):
artificial intelligence into the sandbox, into the accelerator, to begin to prove their compliance with the audit rules. And that is, I'm literally putting ink to that contract this week. So we're just starting that process.

Yusuf (42:05):
Fantastic.
As we wrap up here, where can listeners find out more about ForHumanity? Where would you point people to? How can people get involved? How can people get in touch with you if they need to? What does that all look like?

Ryan Carrier (42:17):
Yep, LinkedIn is a great place to find me.
If you type in Ryan Carrier and ForHumanity, or executive director, I pop up pretty easily. I accept most requests and will be glad to do so on the back of this as well. Our website is pretty informative. So that's forhumanity.center, C-E-N-T-E-R, the American version

(42:38):
of center, C-E-N-T-E-R. So that's the address of the website. That's also where you can register for our community, which is a Slack-based community, or you can register for a student account for those educational courses that we talked about. Slack is where we do most of our work. Everyone is welcome in Slack. There is no commitment beyond what I'm about to express, which

(43:01):
is, you'll find a small form called Get Involved at the very top of the page, and that asks you to provide your name and email address. So we demand your email address, and we demand that you agree to the Code of Conduct. The Code of Conduct governs how we behave inside the Slack community. And so once you've provided us those two things, that

(43:21):
is 100 percent of what I would ever demand of anyone. And then you receive access, full transparency, to our Slack community. And that's the point, is to give you access to all of these tools. My whole life, Yusuf, is spent trying to take all these tools that we've built and are continuing to develop, and get them into the hands of people who can commercialize this work.

(43:45):
That is the main thing that we're trying to do. We love volunteers that come and say, Ryan, you're doing a good thing, ForHumanity is doing a good thing, how do I help? Love that, right? But we also know that when people come with their commercial interests involved, we're going to get their attention. We're going to keep their focus. And so we love both; we encourage people to come and commercialize their work.

(44:05):
Register through that Get Involved link. You'll receive an invitation to Slack from our Slack community. And then you'll be off and running, whether it's through the courses or through Slack. Reach out to me on LinkedIn. These are all the best ways to get connected.

Yusuf (44:19):
Excellent.
Thank you.
Ryan, thank you so much for taking time to talk to us today, and we probably will see you again sometime soon.

Ryan Carrier (44:27):
My pleasure.
Thank you.