
Audio note: this article contains 31 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

Lewis Smith*, Sen Rajamanoharan*, Arthur Conmy, Callum McDougall, Janos Kramar, Tom Lieberum, Rohin Shah, Neel Nanda

* = equal contribution

The following piece is a list of snippets about research from the GDM mechanistic interpretability team, which we didn’t consider a good fit for turning into a paper, but which we thought the community might benefit from seeing in this less formal form. These are largely things that we found in the process of a project investigating whether sparse autoencoders (SAEs) were useful for downstream tasks, notably out-of-distribution probing.

TL;DR

  • To validate whether SAEs were a worthwhile technique, we explored whether they were useful for the downstream task of out-of-distribution (OOD) generalisation when detecting harmful intent in user prompts (a minimal sketch of this probing setup follows this list)
  • [...]
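
As a concrete illustration of the probing setup the TL;DR refers to, here is a minimal sketch in the spirit of the post, not the authors' code: the activations, labels, and dataset sizes below are random stand-ins, whereas the real experiments probe cached model activations on harmful-intent datasets.

    # Minimal sketch of OOD probing: train a linear probe on
    # in-distribution activations, then test it out of distribution.
    # All data below are random stand-ins for cached model activations.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # (n_examples, d_model) activations with binary harmful-intent labels.
    X_train = rng.normal(size=(1000, 256))
    y_train = rng.integers(0, 2, size=1000)
    X_ood = rng.normal(size=(200, 256))
    y_ood = rng.integers(0, 2, size=200)

    # Fit the probe in-distribution and measure OOD transfer.
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("OOD accuracy:", probe.score(X_ood, y_ood))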
---

Outline:

(01:08) TL;DR

(02:38) Introduction

(02:41) Motivation

(06:09) Our Task

(08:35) Conclusions and Strategic Updates

(13:59) Comparing different ways to train Chat SAEs

(18:30) Using SAEs for OOD Probing

(20:21) Technical Setup

(20:24) Datasets

(24:16) Probing

(26:48) Results

(30:36) Related Work and Discussion

(34:01) Is it surprising that SAEs didn't work?

(39:54) Dataset debugging with SAEs

(42:02) Autointerp and high frequency latents

(44:16) Removing High Frequency Latents from JumpReLU SAEs

(45:04) Method

(45:07) Motivation

(47:29) Modifying the sparsity penalty

(48:48) How we evaluated interpretability

(50:36) Results

(51:18) Reconstruction loss at fixed sparsity

(52:10) Frequency histograms

(52:52) Latent interpretability

(54:23) Conclusions

(56:43) Appendix

The original text contained 7 footnotes which were omitted from this narration.

---

First published:
March 26th, 2025

Source:
https://www.lesswrong.com/posts/4uXCAJNuPKtKBsi28/sae-progress-update-2-draft

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Latent firing frequency histograms for Gated (top), JumpReLU and TopK SAEs. Unlike Gated SAEs, which use an L1 penalty that penalises large latent activations, JumpReLU (middle) and TopK (bottom) SAEs exhibit high-frequency latents: latents that fire on 10% or more of tokens (i.e. that lie to the right of the dotted vertical line).
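
For readers following along without the image: the firing frequencies behind these histograms can be computed directly from SAE latent activations. A small sketch on synthetic data (the activations here are random stand-ins):

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for SAE latent activations over a token sample:
    # shape (n_tokens, n_latents), sparse after thresholding.
    acts = np.maximum(rng.normal(size=(10_000, 1024)) - 2.0, 0.0)

    # Fraction of tokens on which each latent fires.
    firing_freq = (acts > 0).mean(axis=0)

    # High-frequency latents fire on 10% or more of tokens,
    # i.e. lie to the right of the dotted line in the histogram.
    print("high-frequency latents:", int((firing_freq >= 0.1).sum()))

    # Histogram over log10 frequency, as plotted in the figure.
    counts, edges = np.histogram(np.log10(firing_freq + 1e-10), bins=50)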
Reconstruction loss vs L0 for the various SAE architectures and loss functions used in our experiment. The quadratic-frequency penalty (QF loss) gives slightly worse reconstruction loss at any given sparsity than standard JumpReLU SAEs (L0 loss), but still compares favourably with Gated and TopK SAEs.
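
For context on the caption's terminology: standard JumpReLU SAEs are trained with an expected-L0 sparsity penalty on the latent activations $f(x)$. This excerpt doesn't spell out the quadratic-frequency penalty, so the second formula below is only our guess from the name: squaring each latent's firing rate $p_i$ penalises high-frequency latents disproportionately.

$$\mathcal{L}_{\text{L0}} = \mathbb{E}_x\!\left[\lVert x - \hat{x} \rVert_2^2\right] + \lambda \sum_i p_i, \qquad \mathcal{L}_{\text{QF}} \approx \mathbb{E}_x\!\left[\lVert x - \hat{x} \rVert_2^2\right] + \lambda \sum_i p_i^2,$$

where $p_i = \Pr_x[f_i(x) > 0]$ is the fraction of tokens on which latent $i$ fires, so that $\sum_i p_i$ equals the expected per-token L0, $\mathbb{E}_x \lVert f(x) \rVert_0$.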
