
December 12, 2024 · 4 mins

A version of this article, spoken by a human.

Ongoing education helps everyone understand their role in responsibly developing and using algorithmic systems.

Regulators and standard-setting bodies emphasise the need for AI literacy across all organisational levels.



About this podcast

A podcast for Financial Services leaders, where we discuss fairness and accuracy in the use of data, algorithms, and AI.

Hosted by Yusuf Moolla.
Produced by Risk Insights (riskinsights.com.au).


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
This article was published in December 2024.
It's titled Algorithm Integrity, Training and Awareness.
There's literacy coming up pretty much everywhere, and this is really just a short article outlining things that I've seen around education and training, and the need for that.
So here we go.
Many algorithm-related laws and guidelines refer

(00:22):
to training to improve integrity and responsible use.
There is a good reason for this.
If everyone involved knows what to look out for, we have a better chance of achieving integrity in our algorithmic systems.
Boards providing oversight, senior management providing direction, second line risk and compliance providing guidance or review, managers taking care of operations,

(00:46):
data scientists developing the systems, and all staff using AI.
If everyone becomes aware and gets involved, the job becomes easier.
Without awareness, algorithm integrity becomes difficult to achieve.
If the data scientist is not aware, they may throw in data that results in bias.
If senior managers are not aware, they may not

(01:07):
commission assessments.
In short, we need ongoing training and awareness across our organizations.
ForHumanity recognizes this and has a specific project focusing on AI literacy, establishing learning objectives and developing AI literacy teaching modules.
It's a non-profit public charity dedicated to mitigating risk

(01:29):
from AI, algorithmic and autonomous systems.
To get involved with the project, you can join the growing community; there's a link in the article, and I'll put one in the show notes, to where you can join.
And for more in-depth learning on AI risk topics, ForHumanity also provides free courses.
There's a link there as well.
Here are a few examples of guides and laws that emphasize

(01:52):
the need for AI-related training and education to ensure responsible development and use.
There are five of them.
First is the International Association of Insurance Supervisors, who are developing a guidance paper on the supervision of AI, in insurance of course.

(02:12):
The November 2024 draft recommends regular competency-based training for boards so that they can effectively scrutinize AI system deployment.
It also recommends effective training cascading throughout the insurer, so that staff are aware of and understand their role

(02:32):
in addressing AI risks.
Second, DNB, De Nederlandsche Bank, included skills in 2019 as one of the six general principles for the use of AI in the financial sector.
Thirdly, ASIC, the Australian Securities and Investments Commission, produced a report in October 2024 following

(02:55):
a review of how 23 financial services organizations were using or planning to use AI.
The report included this, and I quote:
"Directors and officers should be aware of the use of AI within their companies, the extent to which they rely on AI-generated information, their duties, and the reasonably

(03:17):
foreseeable associated risks."
Fourth, NIST, the National Institute of Standards and Technology, produced the AI Risk Management Framework in 2023.
It includes a specific subcategory that outlines the need for AI risk management training for personnel and partners.

(03:38):
And then fifth, the EU AI Act, the European Union Artificial Intelligence Act, includes a specific expectation about AI literacy.
We will explore this topic further in future articles.
Thanks for listening.