
December 21, 2025 · 4 mins

A human-narrated version of this article.

TL;DR (TL;DL?)

  • Technical stakeholders need detailed explanations.
  • Non-technical stakeholders need plain language.
  • Visuals, layering, literacy, and feedback are among the techniques we can use.

To subscribe to the weekly articles: https://riskinsights.com.au/blog#subscribe

About this podcast

A podcast for Financial Services leaders, where we discuss fairness and accuracy in the use of data, algorithms, and AI.

Hosted by Yusuf Moolla.
Produced by Risk Insights (riskinsights.com.au).


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:13):
This is the final article in the explainability series. So this article's titled Algorithmic System Integrity: Explainability, Part Six, and this is about interpretability. So the TL;DR is: technical stakeholders need detailed explanations. Non-technical stakeholders need plain language.

(00:35):
Visuals, layering, literacy, and feedback are among the techniques we can use. So this is, as I said, the final article in the series that started a few episodes ago. To recap the challenge: we need explanations that are both accurate and understandable. Non-technical people typically need plain language

(00:56):
explanations.
This is not always easy to achieve when the starting point is a complex mathematical concept. Converting to plain language can mean that important context is lost. Technical people often need technical explanations, but there are nuances, like the need to translate system field names to more meaningful names.

(01:18):
Some complexity needs to be retained for certain purposes and removed for others. So as solutions to this interpretability challenge, we have four potential approaches, and here they are. The first is visuals for non-technical stakeholders. Visuals can help simplify complex concepts. For example, explaining AI decision-making processes

(01:41):
through flow charts or diagrams.
If done well, they can help to reduce the loss of context that often comes with plain-language translation. For technical stakeholders, visuals can help with understanding complex flows or interactions.
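As a concrete illustration, here's a minimal sketch of the flow-chart idea in Python, assuming the graphviz package is available; the decision steps and labels are hypothetical, not from the article.

```python
# A minimal sketch of a decision flow chart, assuming the Python
# graphviz package. The loan-decision steps are illustrative only.
from graphviz import Digraph

flow = Digraph(comment="Loan decision flow (illustrative)")

# Each node is a plain-language step a non-technical stakeholder can follow.
flow.node("A", "Application received")
flow.node("B", "Model scores credit risk")
flow.node("C", "Score above threshold?")
flow.node("D", "Approve")
flow.node("E", "Refer to human reviewer")

flow.edge("A", "B")
flow.edge("B", "C")
flow.edge("C", "D", label="yes")
flow.edge("C", "E", label="no")

# Writes the DOT source plus decision_flow.png
# (rendering requires the Graphviz binaries to be installed).
flow.render("decision_flow", format="png")
```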
The second potential solution to this interpretability problem is

(02:03):
layering.
Layered explanations can work for different levels of technical expertise. We could, for example, have high-level summaries for non-technical stakeholders and keep building on this until we get to the detailed technical explanations for technical personnel. Or we could start with the details and pare back until we

(02:26):
have the layer or version that suits non-technical stakeholders.
Importantly, we still need each layer to be accurate and consistent with the other layers. We need to avoid contradictions, because this can erode trust. It's a real problem. Explainability is, in part anyway, about enhancing trust.
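To make the layering idea tangible, here's a minimal sketch in Python. The LayeredExplanation structure, the audience names, and the consistency check are all hypothetical, not from the article; the point is that each layer says the same thing at a different level of detail, and layers that drift from the shared key facts get flagged.

```python
# Hypothetical layered-explanation structure with a simple
# cross-layer consistency check.
from dataclasses import dataclass, field

@dataclass
class LayeredExplanation:
    decision_id: str
    # Layers ordered from least to most technical.
    layers: dict[str, str] = field(default_factory=dict)
    # Facts every layer must mention, however it phrases the rest.
    key_facts: list[str] = field(default_factory=list)

    def for_audience(self, audience: str) -> str:
        return self.layers[audience]

    def inconsistent_layers(self) -> list[str]:
        # Flag layers that omit any of the shared key facts.
        return [
            audience
            for audience, text in self.layers.items()
            if not all(fact.lower() in text.lower() for fact in self.key_facts)
        ]

explanation = LayeredExplanation(
    decision_id="LOAN-2025-0042",
    layers={
        "executive": "Declined: income too low for the requested amount.",
        "analyst": "Declined: debt-to-income ratio of 0.58 exceeded the 0.45 policy threshold.",
        "developer": "Declined: dti=0.58 > DTI_MAX=0.45; dominant feature debt_to_income.",
    },
    key_facts=["declined"],
)

print(explanation.for_audience("executive"))
print("Layers missing key facts:", explanation.inconsistent_layers())
```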

(02:48):
The third potential solution is literacy: training and education programs for stakeholders, to improve the understanding of AI systems. This can include a range of levels of education. For example, workshops on AI basics or algorithm basics for non-technical stakeholders, advanced technical training for

(03:10):
developers, and then various others in between. There's an article linked in this one that explains tailoring literacy efforts to different roles, and it includes five distinct personas. The fourth possible solution is feedback.

(03:30):
Feedback mechanisms can be used to gauge the effectiveness of explanations, and this can help identify where explanations are unclear or ambiguous.
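Here's a minimal sketch of what such a feedback mechanism could look like in Python. The record_feedback and unclear_explanations helpers, the rating scale, and the threshold are all hypothetical; the idea is simply to surface which explanations readers found unclear.

```python
# Hypothetical feedback mechanism: collect clarity ratings per
# explanation and flag the ones that need rework.
from collections import defaultdict
from statistics import mean

# explanation_id -> list of clarity ratings (1 = unclear, 5 = very clear)
ratings: dict[str, list[int]] = defaultdict(list)

def record_feedback(explanation_id: str, clarity: int) -> None:
    if not 1 <= clarity <= 5:
        raise ValueError("clarity must be between 1 and 5")
    ratings[explanation_id].append(clarity)

def unclear_explanations(threshold: float = 3.0) -> list[str]:
    # Flag explanations whose average clarity falls below the threshold.
    return [eid for eid, scores in ratings.items() if mean(scores) < threshold]

record_feedback("LOAN-2025-0042/executive", 4)
record_feedback("LOAN-2025-0042/developer", 2)
record_feedback("LOAN-2025-0042/developer", 3)

print("Needs rework:", unclear_explanations())
```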
A note on literacy.
The third item, literacy, is the topic of a past article, as we've said, and is likely to be the topic of future articles.

(03:51):
Even among highly capable people, it is not sufficiently appreciated, and it might be one of the most important topics in algorithmic system integrity. That's the end of this article and the series.
Thanks for listening.