Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
This article was published at the end of October 2024.
It's titled Risk-Focused Principles for Change Control in Algorithmic Systems.
The background to this article is that there are several change control procedures, from traditional practices around financial audits and audits of information systems.
(00:22):
change in algorithmic systems is quite different, and so we do need to reconsider what we want to focus on when we are designing, implementing, and auditing change control practices as they relate to algorithmic systems.
And so this article explains that difference and then proposes a
(00:46):
risk-focused, principles-based approach to change control, to ensure that the set of controls you use properly addresses the risk and is both effective and efficient.
So here we go.
With algorithmic systems, a single uncontrolled change can trigger a cascade of unintended consequences,
(01:09):
potentially compromising fairness, accountability, and public trust.
So managing changes is important; that's a given.
But with algorithmic systems, change control goes beyond traditional practices.
While the base concepts may be similar, the specific risks differ.
Importantly, if you use the wrong framework, you could be
(01:31):
including controls that you don't need, excluding controls that you do need, and not really addressing the risk.
Your change control process may then tick the boxes, but be both ineffective and inefficient.
This article outlines a potential solution: a risk-focused, principles-based approach to change control for algorithmic systems.
(01:54):
An existing guideline.
There are well established methodologies and guides for change control.
We'd ignore them at our peril.
We can learn from them.
For example, one source is from financial auditing.
Often cited in establishing algorithm audit methods, the
(02:14):
ISA 315 guideline for general IT controls includes four key elements of change management.
Number one, change management process: ensure changes are properly planned, tested and implemented.
Number two, segregation of duties for change migration: to prevent unauthorized changes from being implemented.
(02:34):
Number three, system development, acquisition or implementation: to ensure new systems are properly designed and tested before deployment.
And then number four, data conversion: maintain data integrity during system changes or upgrades.
Traditional IT change control versus algorithmic systems change control.
(02:55):
The typical objective of IT change control is to minimize fraud and error.
This holds true for algorithmic systems, but it's not just about keeping systems running smoothly and accurately.
We also need to establish trust, fairness, and confidentiality.
This means that when we audit change control for algorithmic systems,
(03:18):
we maintain a level of focus on error and fraud, but also pay attention to other risks.
The ISA guideline focuses on financial statement risk.
It is mainly about fraud and error.
It's a solid base, but it won't fully address the unique risks and challenges posed by algorithmic systems.
For example, complexity: many AI and machine learning
(03:40):
models are complex, making it difficult to predict the full impact of changes using traditional testing methods.
Data dependency: changes in input data can significantly alter algorithmic outcomes without any code modifications, a scenario not always directly addressed in traditional IT change management.
(04:02):
Ethical implications: financial statement audits rarely consider the ethical implications of changes, which are crucial for AI systems making high-stakes decisions.
Confidentiality: the data must be kept private, and often the algorithms contain IP that must not be leaked.
So, we need to expand and/or adapt to cover the unique
(04:25):
needs of algorithmic systems.
Key aspects of change management for algorithmic integrity.
Drawing on the ISA 315 guideline and expanding to encompass algorithmic system risks, here is what the four key controls could translate to.
Number one, robust design and testing processes.
(04:47):
Before any change is implemented, it must undergo rigorous design and testing, including impact analysis: how the change might affect the algorithm's accuracy, fairness, and alignment with business objectives.
Comprehensive testing: using diverse datasets to ensure the change performs as expected across various scenarios, etc.
And then peer review: the proposed changes are reviewed by other data
(05:08):
scientists or AI experts to catch potential issues.
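To make that concrete, here is a minimal sketch in Python of what a pre-deployment impact check like this could look like. The metric names, thresholds, and the EvalResult structure are illustrative assumptions, not something prescribed by the article.

```python
# Minimal sketch of a pre-deployment impact check for a model change.
# The metrics, thresholds, and structure are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class EvalResult:
    accuracy: float          # overall predictive accuracy on a hold-out set
    fairness_gap: float      # e.g. difference in approval rates across groups

def approve_change(current: EvalResult, candidate: EvalResult,
                   max_accuracy_drop: float = 0.01,
                   max_fairness_gap: float = 0.05) -> bool:
    """Return True only if the candidate does not degrade accuracy beyond
    tolerance and stays within the fairness threshold."""
    accuracy_ok = candidate.accuracy >= current.accuracy - max_accuracy_drop
    fairness_ok = candidate.fairness_gap <= max_fairness_gap
    return accuracy_ok and fairness_ok

# Example: results from evaluation on a diverse hold-out dataset.
current = EvalResult(accuracy=0.91, fairness_gap=0.03)
candidate = EvalResult(accuracy=0.92, fairness_gap=0.08)
print(approve_change(current, candidate))  # False: fairness gap too large
```

In practice the peer review would sit on top of a check like this, rather than the check replacing human sign-off.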
The second control would be controlled migration to production.
The process of moving changes from development to production environments must be tightly controlled, including the following.
Staging environments, so an environment that mimics production for final checks before go live.
(05:30):
Rollback plans, clear procedures for reverting changes if unexpected issues arise.
Segregation of duties, so clear separation between those who develop changes and those who implement them in production, limiting access to make changes to production algorithms.
And then approvals, multi-step approvals for changes involving both technical and business stakeholders.
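As a rough illustration of that second control, here is a minimal sketch of a promotion gate that enforces segregation of duties, a staging check, and multi-step approvals. The field names and roles are hypothetical.

```python
# Minimal sketch of a production promotion gate: segregation of duties,
# staging checks, and multi-step approvals. Names and roles are illustrative.

def can_promote(change: dict) -> bool:
    """Allow promotion only if the deployer is not the developer, staging
    checks passed, and both a technical and a business approval exist."""
    separated = change["deployed_by"] != change["developed_by"]
    approvals = {a["role"] for a in change["approvals"]}
    approved = {"technical", "business"}.issubset(approvals)
    tested_in_staging = change["staging_checks_passed"]
    return separated and approved and tested_in_staging

change = {
    "developed_by": "data_scientist_1",
    "deployed_by": "release_engineer_1",
    "staging_checks_passed": True,
    "approvals": [
        {"role": "technical", "by": "lead_ml_engineer"},
        {"role": "business", "by": "product_owner"},
    ],
}
print(can_promote(change))  # True: duties separated, both approvals present
```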
(05:51):
The third control, documentation and auditability.
So every change must be meticulously documented, including change logs, so keep detailed records of what was changed, why and by whom.
Version control, to track changes over time.
And audit trails, so all actions are logged and traceable.
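For illustration, a change log entry like the one described could be captured in a simple structure along these lines; the field names are assumptions, not a prescribed schema.

```python
# Minimal sketch of a structured change-log entry supporting auditability.
# Field names are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChangeLogEntry:
    change_id: str            # ties the entry to version control, e.g. a commit hash
    description: str          # what was changed
    rationale: str            # why it was changed
    author: str               # who made the change
    approved_by: list[str]    # who signed it off
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

entry = ChangeLogEntry(
    change_id="a1b2c3d",
    description="Retrained credit model on Q3 data",
    rationale="Performance drift detected in monitoring",
    author="data_scientist_1",
    approved_by=["lead_ml_engineer", "product_owner"],
)
print(entry)
```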
And the fourth one is data integrity.
So we ensure changes in input data continue to support the business
(06:14):
objectives, including data quality, so the data continues to be correct and fit for use.
Data approval, a formal process for approving new or revised data sources for use in the system.
And then data lineage and provenance, documentation to explain and track data flows, including transformations from sources to targets.
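As a rough sketch of the data quality side of that fourth control, the checks below flag missing columns, duplicate rows, and high null rates before a new or revised data source is accepted; the specific checks and thresholds are illustrative assumptions.

```python
# Minimal sketch of basic data-quality checks run before an approved data
# source is used. The checks, thresholds, and column names are illustrative.

import pandas as pd

def data_quality_checks(df: pd.DataFrame, required_columns: list[str]) -> list[str]:
    """Return a list of issues; an empty list means the checks passed."""
    issues = []
    missing = [c for c in required_columns if c not in df.columns]
    if missing:
        issues.append(f"Missing columns: {missing}")
    if df.duplicated().any():
        issues.append("Duplicate rows found")
    null_rates = df.isna().mean()
    high_nulls = null_rates[null_rates > 0.05]
    if not high_nulls.empty:
        issues.append(f"Columns with >5% missing values: {list(high_nulls.index)}")
    return issues

df = pd.DataFrame({"income": [50_000, None, 72_000], "age": [34, 41, 29]})
print(data_quality_checks(df, required_columns=["income", "age", "postcode"]))
```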
This could be a good starting point.
(06:37):
But not all systems are the same.
Tailoring the change control approach.
The approach to controlling change will vary, depending on the nature of the system, the level of complexity, whether the system is developed in-house or is purchased, and the risk management focus.
Here are some examples.
They are hypothetical but based on real world observations.
(07:00):
And just a note, when I talk about risk focus, it's not exhaustive.
It will be based on a risk assessment and is purely indicative.
Example A, credit scoring systems.
In this case, we're thinking rules based, in-house developed.
So the risk focus may be transparency and regulatory compliance.
And then in the article, there's a set of potential key change controls under that.
(07:23):
Example B, insurance pricing models, so this could be machine learning and purchased.
The risk focus here may be performance, fairness, and model interpretability.
Example C, AI powered insurance claims processing, in-house developed, and since it said AI powered it's machine learning based.
The risk focus may be efficiency, accuracy
(07:45):
and fraud detection.
And then example D, credit limit assignment system.
Here we may have a hybrid rules and ML type system, in-house developed with third party components.
The risk focus here, fairness, accuracy and explainability.
Again, risk focus is not exhaustive, it's
(08:06):
just purely indicative.
So in short, not all systems are the same, as depicted in the simple hypothetical examples I just gave there.
So rather than a standard set of controls, it may be better to start with a risk assessment, then use a set of principles to guide control selection and design.
(08:28):
Let's talk about that.
Risk assessment and principles.
Truly effective change management for algorithmic systems must align with broader principles of algorithm integrity.
By overlaying the traditional change control guidelines on the 10 key aspects of algorithm integrity we've previously discussed, we can craft a set of guiding principles.
(08:49):
This approach allows us to adapt specific change controls based on risk assessments, with the principles then steering the implementation details.
So let's start with the overarching focus on risk assessment.
The risk assessment process should identify potential risks across all areas, so including things like performance, fairness, transparency, security; prioritize risks based
(09:13):
on their potential impact and likelihood; guide the allocation of resources and focus areas; and be regularly updated to reflect evolving threats, technologies, and regulatory landscapes.
That sets the foundation.
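To illustrate the prioritization step, here is a minimal sketch that scores hypothetical risks by impact times likelihood and ranks them; the 1-5 scales and the example risks are assumptions, not from the article.

```python
# Minimal sketch of prioritizing risks by impact and likelihood.
# The scales and example risks are illustrative assumptions.

risks = [
    {"area": "fairness",     "description": "Bias introduced by new feature", "impact": 5, "likelihood": 3},
    {"area": "performance",  "description": "Accuracy drift after retrain",   "impact": 4, "likelihood": 4},
    {"area": "security",     "description": "Unauthorized model access",      "impact": 5, "likelihood": 2},
    {"area": "transparency", "description": "Unexplainable decisions",        "impact": 3, "likelihood": 3},
]

# Score each risk and sort so the highest-priority risks get controls and
# resources first; in practice this would be revisited regularly.
for risk in risks:
    risk["score"] = risk["impact"] * risk["likelihood"]

for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"{risk['score']:>2}  {risk['area']:<12}  {risk['description']}")
```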
Principles.
With our risk based foundation in place, we
(09:33):
can use these principles to develop our controls.
The first one, changes enhance the algorithm's accuracy and robustness.
And the risk link here is performance degradation and errors.
Principle two, changes enhance or maintain the algorithm's alignment with intended objectives.
(09:53):
Risk link here is misalignment with business goals, inadvertent use of irrelevant data.
The third principle, changes do not introduce bias and adhere to ethical standards.
Risk links, discrimination, ethical violations and erosion of public trust.
The fourth item, changes maintain or improve the ability
(10:14):
to explain the algorithm's decisions and ensure clear accountability for changes.
Risk links, opacity in decision making processes, lack of responsibility for modifications.
The fifth principle, changes follow secure development practices and maintain or enhance privacy protections.
(10:36):
The risk link here, security breaches, unauthorized access, and privacy compromises.
Principle 6.
Changes maintain adherence to laws and contractual obligations.
Risk link here obviously is regulatory non-compliance or contractual non-compliance.
So we now have an approach that is flexible and targeted.
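Purely as an illustration of how those principles and risk links could be put to work, here is a small sketch that maps each principle to its risk links and selects the principles most relevant to a system's assessed risks; the structure and names are assumptions, not from the article.

```python
# Minimal sketch of the principle-to-risk mapping just described, used to
# pick which principles matter most for a given system's assessed risks.
# The structure and names are illustrative assumptions.

PRINCIPLES = {
    "accuracy_robustness":            {"performance degradation", "errors"},
    "objective_alignment":            {"misalignment with business goals", "irrelevant data"},
    "fairness_ethics":                {"discrimination", "ethical violations", "erosion of public trust"},
    "explainability_accountability":  {"opacity", "lack of responsibility for modifications"},
    "security_privacy":               {"security breaches", "unauthorized access", "privacy compromises"},
    "compliance":                     {"regulatory non-compliance", "contractual non-compliance"},
}

def relevant_principles(assessed_risks: set[str]) -> list[str]:
    """Return principles whose risk links overlap with the assessed risks."""
    return [name for name, links in PRINCIPLES.items() if links & assessed_risks]

# Example: a credit scoring system whose assessment flagged these risks.
print(relevant_principles({"discrimination", "regulatory non-compliance"}))
# ['fairness_ethics', 'compliance']
```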
(10:57):
Embracing change, preserving integrity.
With algorithmic systems, change is inevitable.
We need to manage these changes in a way that preserves and enhances the integrity of our systems.
Change must be deliberate, controlled, and aligned with our values and objectives.
Systems vary in nature and complexity, etc.
(11:19):
So the specific risks and controls will be different across systems.
This means that a principles based approach, underpinned by a risk assessment, will likely be better than a checklist approach.
Ultimately, if you follow an approach like this, you will have better control, more effective, more efficient, a better use of your time and resources.
(11:42):
That's the end of that article.
Bit of a long one.
Thanks for listening.