Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
This article was published in October 2024 and it's
titled Algorithmic Integrity.
Don't wait for legislation.
So the background to this is that we've been seeing for
a while various laws being enacted or developed. And
there's obviously lots of discussion going on about
this in various jurisdictions.
(00:22):
And there's nothing wrong with that.
I mean, we need legislation, better laws.
There are lots of newer technologies and newer
risks that mean we need to have strong
legislation to enable better fairness, accuracy, et cetera,
better integrity overall.
However, waiting for legislation isn't always the best approach.
(00:45):
And some of these laws are a bit narrow, some of them can be
quickly outdated, et cetera.
And so I just wanted to talk a little bit about that and
what the alternative might be.
So we'll go into a bit of detail around why legislation
isn't the thing that we need to be waiting for and
then what else we can do.
So here we go.
(01:07):
Legislation isn't the silver bullet for algorithmic
integrity, neither are standards or best practice frameworks.
I know that many people will disagree with this
for various reasons.
Whole industries are being developed around them with lots
of money and effort thrown in.
Some organizations won't be moved without legislation.
(01:30):
Others just want to achieve a standards-based certification.
Now, are they useful?
Sure.
They help provide clarity and can reduce ambiguity,
and of course once a law is passed, we must comply.
However, existing legislation may already apply.
(01:53):
New algorithm-focused laws can be too narrow
or quickly outdated.
Standards can be confusing and may not cover what we need.
Best practice, inverted commas, frameworks help, but
they're not always the best.
And there are several, sothey can't all be best.
(02:14):
In short, they are helpful, but we need to know what
we're getting, what they cover, don't cover, etc.
Let's explore legislation in more detail and leave standards
and frameworks for future articles, future episodes.
Generic Compliance Exercises. Laws play a crucial role
(02:35):
in setting expectations.
But if we're only aiming to meet legal requirements, we're
missing the point entirely.
Consider this.
Would you entrust your company's financial future to someone who
only follows a rigid, one-size-fits-all investment strategy?
Probably not.
(02:55):
You probably want a portfolio manager who understands market
dynamics and can read trends.
They understand your corporate objectives and the
broader economic landscape.
The result is a nuanced, tailored investment decision.
The same principle applies to our algorithms.
We aren't content with merely following generic standards or
(03:16):
scraping by on legal requirements.
Instead, we focus on building systems that are fair,
accountable, and transparent in our specific context.
Just as a skilled portfolio manager adapts strategies to
the company's unique needs, we tailor our algorithmic integrity
practices to address our context, risk, and objectives,
(03:39):
and the people we serve.
Let's talk about existing legislation.
Some existing laws are already relevant to
algorithmic integrity.
For instance, anti-discrimination laws can
apply to algorithmic decision-making, while
data protection regulations already govern many aspects
of AI systems' data handling.
(04:00):
Fairness is already captured in human rights laws.
While these laws may not be specific to algorithms, the
broad concept of fairness isn't specific to algorithms either.
The EU AI Act was passed recently.
GDPR laws already existed by then.
Waiting for the EU AI Act could have meant that privacy
(04:23):
obligations were not fulfilled.
Granted, existing laws may not always be easy to
interpret in the context of new algorithmic systems.
So the new laws can help, but existing laws apply
even when the new ones are not ready, or don't
cover your specific system.
(04:44):
What about contractual obligations and
customer expectations?
Contracts with customers and third parties
already set requirements.
These may not be captured in legislation, but we
have to abide by them.
And we want to abide by them.
Customers expect that we will treat them fairly and
manage their data carefully.
(05:06):
There are already some laws for both of these
in most jurisdictions.
Regardless of what the laws say, we want to treat our
customers fairly and keep their data secure and private.
New legislation and scope.
Newer algorithm-focused laws often suffer from
being too narrow in scope.
(05:28):
Consider the EU AI Act again.
It's the first, or one of the first, of its kind.
It has generated significant activity.
It will, no doubt, lift integrity.
But what does that mean for your systems?
Is your system covered by the definition of AI?
(05:50):
Is your system covered by the risk level?
The Act has a tiered risk approach.
Many systems, those deemed low or minimal
risk, may not be covered.
Then we have Local Law 144 in New York City.
It focuses on bias in automated employment decision tools.
(06:12):
Some say that the law is a watered-down version
of the original bill.
It may not meet the original intent.
It doesn't cover all protected categories; it's
limited to race and gender.
It focuses only on certain aspects of the hiring process.
It allows for considerable discretion by employers
(06:34):
in determining whether their system is in scope.
Again, before it
was enacted, anti-discrimination laws existed.
So you could be, in inverted commas, compliant with this law.
But not be compliant more broadly.
Another interesting case is Colorado's ECDIS
(06:55):
Regulation 3 CCR 702-10.
It focuses specifically on the use of External Consumer
Data and Information Sources, ECDIS, and related algorithms.
It is certainly useful.
It makes it clear that discriminatory ECDIS
(07:15):
must not be used.
This is really important.
Insurers have been using these, often without
justification, sometimes without even knowing that
they're doing something wrong.
The models and data are sold to insurers by apparently
reputable organizations.
(07:36):
Everybody else is using it, so the assumption is
that they're okay to use.
This legislation makes it clearer.
But anti-discrimination laws already existed, so
discriminatory ECDIS, again, External Consumer Data and
Information Sources, should not have been used anyway.
(07:57):
All of this doesn't mean the laws are not useful,
or that they're unnecessary, but they are
each limited in scope, so compliance with them can create
a false sense of integrity if you're complying
with them alone.
New legislation versus objectives.
(08:18):
New legislation takes time to develop.
Technology is advancing rapidly, as are the use cases for tech.
The laws can be outdated by the time they are enacted.
But the underlying objectives don't change that frequently,
so keeping an eye on the broader goal rather than the
(08:38):
specific legislation may be a better long-term approach.
By focusing on broader ethical principles, such as fairness,
transparency and accountability, we can create more robust
and adaptable algorithmic integrity practices that remain
relevant even as technology and legislation evolve.
(09:02):
Proactive approaches to algorithmic integrity.
Instead of relying solely on legislation, standards
or frameworks, we want to focus on building systems
that have genuine integrity.
Rather than waiting for legislation, consider
the following five.
1. Conducting regular risk assessments and impact assessments.
(09:24):
2. Implementing diverse and inclusive design practices.
3. Establishing internal governance structures for algorithm selection, development and deployment.
4. Engaging with stakeholders to understand and address concerns.
(09:45):
5. Investing in ongoing education and training.
Let's commit to making informed, ethical decisions,
even when, or especially when, no law or framework
explicitly tells us to.
Integrity isn't about blindly following rules.
It's about doing the right thing, even when
(10:06):
no one's watching.
By embracing a proactive, principle-based approach to
algorithmic integrity, we can build systems that not
only comply with laws, but stand the test of time and
maintain customer trust.
That's the end of the article.
Thanks for listening.