All Episodes

September 5, 2019 · 31 mins
“Explainability” is a big buzzword in AI right now. AI decision-making is beginning to change the world, and explainability is the ability of an AI model to explain the reasons behind its decisions. The challenge for AI is that, unlike previous technologies, how and why the models work isn’t always obvious, and that has big implications for trust, engagement and adoption. Nicole Rigillo breaks down the definition of explainability and other key ideas, including interpretability and trust. Cynthia Rudin talks about her work on explainable models, improving the parole-calculating models used in some U.S. jurisdictions and assessing seizure risk in medical patients. Benjamin Thelonious Fels says humans learn by observation, and that any explainability techniques need to take human nature into account.

Guests

Nicole Rigillo, Berggruen Research Fellow at Element AI
Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University
Benjamin Thelonious Fels, founder of AI healthcare startup macro-eyes

Show Notes

01:11 - Facebook Chief AI Scientist Yann LeCun says rigorous testing can provide explainability
01:58 - Berggruen Institute, Transformation of the Human Program
05:34 - Judging Machines: Philosophical Aspects of Deep Learning - Arno Schubbach
06:31 - Do People Trust Algorithms More Than Companies Realize? - Harvard Business Review
08:25 - Introducing Activation Atlases - OpenAI
10:52 - Learning certifiably optimal rule lists for categorical data (CORELS) - YouTube
11:00 - CORELS: Learning Certifiably Optimal RulE ListS
11:45 - Stop Gambling with Black Box and Explainable Models on High-Stakes Decisions
16:52 - Transparent Machine Learning Models for Predicting Seizures in ICU Patients - Informs Magazine Podcast
19:49 - The Last Mile: Challenges of Deployment - StartupFest Talk
24:41 - Developing predictive supply chains using machine learning for improved immunization coverage - macro-eyes with UNICEF and the Bill and Melinda Gates Foundation

Further Reading

A missing ingredient for mass adoption of AI: trust - Element AI
Breaking down AI’s trustability challenges - Element AI
The Why of Explainable AI - Element AI

Follow Us

Element AI Twitter
Element AI Facebook
Element AI Instagram
Alex Shee’s Twitter
Alex Shee’s LinkedIn
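The explainable models Rudin discusses, such as the CORELS rule lists linked in the show notes, are short ordered sequences of if-then rules that a human can read in their entirety, unlike a black-box model. A minimal sketch of how a learned rule list makes a prediction, using hypothetical rules and feature names invented here purely for illustration (not the actual rules CORELS learns, and not the models used in any jurisdiction):

```python
# Illustrative only: a hand-written rule list in the spirit of CORELS.
# The conditions, thresholds, and feature names below are hypothetical.

RULES = [
    # (condition, prediction) pairs, evaluated top to bottom.
    (lambda p: p["priors"] > 3, "high risk"),
    (lambda p: p["age"] < 25 and p["priors"] > 1, "high risk"),
    (lambda p: True, "low risk"),  # default rule: fires if nothing above matched
]

def predict(person):
    """Return the prediction of the first rule whose condition matches."""
    for condition, label in RULES:
        if condition(person):
            return label

print(predict({"age": 30, "priors": 0}))  # low risk
print(predict({"age": 22, "priors": 2}))  # high risk
```

Because the whole model is the list itself, the "explanation" for any decision is simply the rule that fired, which is the sense in which such models are interpretable by construction rather than explained after the fact.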
