Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:13):
This is the final article in the explainability series.
So this article's titled Algorithmic System Integrity
Explainability Part Six, and this is about interpretability.
So the TLDR is technical stakeholders need detailed
explanations.
Non-technical stakeholders need plain language.
(00:35):
Visuals, layering, literacy, and feedback are among the
techniques we can use.
So this is, as I said, the final article in the series that
started a few episodes ago.
To recap the challenge, we need explanations that are both
accurate and understandable.
Non-technical people typically need plain language
(00:56):
explanations.
This is not always easy to achieve when the starting point
is a complex mathematical concept.
Converting to plain language can mean that important context is
lost.
Technical people often need technical explanations, but there
are nuances, like the need to translate system field names to
more meaningful names.
(01:18):
Some complexity needs to be retained for certain purposes
and removed for others.
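To make the field-name translation point concrete, here's a minimal sketch in Python. The field names, labels, and contribution values are hypothetical, not taken from any particular system.

```python
# Minimal sketch: translating internal system field names into
# more meaningful labels before they appear in an explanation.
# All field names and labels below are hypothetical examples.

FIELD_LABELS = {
    "cust_tenure_mths": "Customer tenure (months)",
    "dti_ratio": "Debt-to-income ratio",
    "n_missed_pmt_12m": "Missed payments in the last 12 months",
}

def translate_fields(raw_explanation: dict) -> dict:
    """Replace raw field names with readable labels, keeping the values."""
    return {
        FIELD_LABELS.get(field, field): value
        for field, value in raw_explanation.items()
    }

# Example: a raw feature-contribution explanation from a model.
raw = {"dti_ratio": 0.42, "cust_tenure_mths": -0.18, "n_missed_pmt_12m": 0.31}
print(translate_fields(raw))
```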
So as solutions to this interpretability challenge, we
have four potential approaches, and here they are.
The first is visuals for non-technical stakeholders.
Visuals can help simplify complex concepts.
For example, explaining AI decision-making processes
(01:41):
through flow charts or diagrams.
If done well, they can help to reduce the loss of context
that often comes with plain language translation.
For technical stakeholders, visuals can help with
understanding complex flows or interactions.
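To make the visuals idea concrete, here's a minimal sketch in Python using matplotlib. The features and contribution values are hypothetical, and this bar chart is just one possible kind of visual explanation a non-technical stakeholder could read without the underlying maths.

```python
# Minimal sketch: a simple visual explanation of a single decision.
# The feature labels and contribution values are hypothetical.
import matplotlib.pyplot as plt

labels = [
    "Debt-to-income ratio",
    "Missed payments (12 months)",
    "Customer tenure",
]
contributions = [0.42, 0.31, -0.18]  # positive pushes towards "decline"

fig, ax = plt.subplots(figsize=(6, 3))
colors = ["tab:red" if c > 0 else "tab:green" for c in contributions]
ax.barh(labels, contributions, color=colors)
ax.axvline(0, color="black", linewidth=0.8)
ax.set_xlabel("Contribution to the decision")
ax.set_title("Why this application was declined (illustrative)")
fig.tight_layout()
plt.show()
```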
The second potential solution to this interpretability problem is
(02:03):
layering.
Layered explanations can work for different levels of
technical expertise.
We could, for example, have high-level summaries for
non-technical stakeholders and keep building on this until we
get to the detailed technical explanations for technical
personnel.
Or we could start with the details and pare back until we
(02:26):
have the layer or version that suits non-technical
stakeholders.
Importantly, we still need each layer to be accurate and
consistent with the other layers.
We need to avoid contradictions because this can erode trust.
It's a real problem.
Explainability is, in part anyway, about enhancing trust.
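As a minimal sketch of layering, assuming a Python setting and entirely hypothetical explanation text, the same decision could carry several layers and be rendered for a chosen audience:

```python
# Minimal sketch: one decision, several layers of explanation.
# The decision, layer text, and audience names are hypothetical.
from dataclasses import dataclass

@dataclass
class LayeredExplanation:
    summary: str       # plain-language layer for non-technical stakeholders
    intermediate: str  # adds the key factors and their direction
    technical: str     # detailed layer for technical personnel

    def for_audience(self, audience: str) -> str:
        """Return the layer that suits the audience; default to the summary."""
        return getattr(self, audience, self.summary)

explanation = LayeredExplanation(
    summary="The application was declined mainly because of recent missed payments.",
    intermediate=("Missed payments and a high debt-to-income ratio pushed the score "
                  "below the approval threshold; long tenure partly offset this."),
    technical=("score=0.38 < threshold=0.50; top contributions: "
               "dti_ratio=+0.42, n_missed_pmt_12m=+0.31, cust_tenure_mths=-0.18"),
)

print(explanation.for_audience("summary"))
print(explanation.for_audience("technical"))
```

The consistency point still applies here: each layer describes the same underlying decision, just at a different level of detail.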
(02:48):
The third potential solution is literacy training and education
programs for stakeholders, to improve the understanding of
AI systems.
This can include a range of levels of education.
For example, workshops on AI basics or algorithm basics for
non-technical stakeholders, advanced technical training for
(03:10):
developers, and then various others in between.
There's an article linked in this article that explains
tailoring literacy efforts to different roles, and
that includes five distinct personas.
The fourth possible solution is feedback.
(03:30):
Feedback mechanisms can be used to gauge the effectiveness of
explanations, and this can help identify where explanations are
unclear or ambiguous.
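As a minimal sketch of a feedback mechanism, assuming a Python setting, a hypothetical 1-to-5 clarity scale, and made-up explanation ids, ratings could be collected and unclear explanations flagged for review:

```python
# Minimal sketch: collecting clarity ratings on explanations and
# flagging the ones that read as unclear. All data is hypothetical.
from collections import defaultdict
from statistics import mean

# rating scale assumed here: 1 (very unclear) to 5 (very clear)
ratings: dict[str, list[int]] = defaultdict(list)

def record_feedback(explanation_id: str, clarity: int) -> None:
    """Store a stakeholder's clarity rating for one explanation."""
    ratings[explanation_id].append(clarity)

def unclear_explanations(threshold: float = 3.0) -> list[str]:
    """Return explanation ids whose average clarity falls below the threshold."""
    return [
        explanation_id
        for explanation_id, scores in ratings.items()
        if mean(scores) < threshold
    ]

record_feedback("decision-123-summary", 4)
record_feedback("decision-123-technical", 2)
record_feedback("decision-123-technical", 3)
print(unclear_explanations())  # ['decision-123-technical']
```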
A note on literacy.
The third item, literacy, is the topic of a past article, as
we've said, and is likely to be the topic of future articles.
(03:51):
Even among highly capable people, it is not sufficiently
appreciated, and it might be one of the most important topics in
algorithmic system integrity.
That's the end of this article and the series.
Thanks for listening.