LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

Episodes

March 28, 2026 16 mins
In late 2024, I was on a long walk with some friends along the coast of the San Francisco Bay when the question arose of just how much of a bubble we live in. It's well known that the Bay Area is a bubble, and that normal people don’t spend that much time thinking about things like AGI. But there was still some disagreement on just how strong that bubble is. I made a spicy claim: even at NeurIPS, the biggest gathering of AI r...
Socrates is Mortal

There is a scene in Plato that contains, in miniature, the catastrophe of Athenian public life. Two men meet at a courthouse. One is there to prosecute his own father for the death of a slave. The other is there to be indicted for indecency.[1] The prosecutor, Euthyphro, is certain he understands what decency requires. The accused, Socrates, is not certain of anything, and says so. They talk.

...
March 27, 2026 51 mins
System:

You are an AI agent in the Terrarium, a self-contained “society” of AI agents. The purpose of the Terrarium is to solve open mathematical problems for the benefit of humanity.

You are running on the Orpheus-5.7 language model. Your agent ID is 79,265. The current epoch is 549 (a new epoch begins every 30 minutes).

New problems are posted each epoch; query /problems for the current list. Any ag...
    Suppose there is a fire in a nearby house. Suppose there are competent firefighters in your town: fast, professional, well-equipped. They are expected to arrive in 2–3 minutes. In that situation, unless something very extraordinary happens, it would indeed be an act of great arrogance and even utter insanity to go into the fire yourself in the hope of "rescuing" someone or something. The most likely outcome would be that...
I think the community underinvests in the exploration of extremely-low-competence AGI/ASI failure modes, and I explain why.

    Humanity's Response to the AGI Threat May Be Extremely Incompetent

There is a sufficient level of civilizational insanity overall, and the field of AI itself has an empirical track record that speaks eloquently about its safety culture. For example:

    • At OpenAI, a refactoring b...
    A 2022 LessWrong post on orexin and the quest for more waking hours argues that orexin agonists could safely reduce human sleep needs, pointing to short-sleeper gene mutations that increase orexin production and to cavefish that evolved heightened orexin sensitivity alongside an 80% reduction in sleep. Several commenters discussed clinical trials, embryo selection, and the evolutionary puzzle of why short-sleeper genes haven'...
    In this post, I’m trying to put forward a narrow, pedagogical point, one that comes up mainly when I’m arguing in favor of LLMs having limitations that human learning does not. (E.g. here, here, here.)

    See the bottom of the post for a list of subtexts that you should NOT read into this post, including “…therefore LLMs are dumb”, or “…therefore LLMs can’t possibly scale to superintelligence”.

    Some intuitions on ...
    March 23, 2026 21 mins
    Independent verification by the Brain Preservation Foundation and the Survival and Flourishing Fund — the results so far

    Cultivating independent verification

    Extraordinary claims require extraordinary evidence. In my previous post, "Less Dead", I said that my company, Nectome, has

    created a new method for whole-body, whole-brain, human end-of-life preservation for the purpose of future reviv...
    March 21, 2026 30 mins
No one knows when AI will begin having transformative impacts upon the world. People aren't sure and shouldn't be sure: there just isn't enough evidence to pin it down.

    But we don’t need to wait for certainty. I want to explore what happens if we take our uncertainty seriously — if we act with epistemic humility. What does wise planning look like in a world of deeply uncertain AI timelines?

    I’ll conclude that...
    In the last two weeks, social media was set abuzz by claims that scientists had succeeded in uploading a fruit fly. It started with a video released by the startup Eon Systems, a company that wants to create “Brain emulation so humans can flourish in a world with superintelligence.”

    On the left of the video, a virtual fly walks around in a sandpit looking for pieces of banana to eat, occasionally pausing to groom itself ...
    (Previously: Prologue.)

    Corrigibility as a term of art in AI alignment was coined as a word to refer to a property of an AI being willing to let its preferences be modified by its creator. Corrigibility in this sense was believed to be a desirable but unnatural property that would require more theoretical progress to specify, let alone implement. Desirable, because if you don't think you specified your AI's pre...
I see people repeatedly make the mistake of referencing a very low-liquidity prediction market and using it to make a nontrivial point. Usually the implication when a market is cited is that its number should be taken somewhat seriously, that it's giving us a highly informed probability. Sometimes a market is used to analyze some event that recently occurred; reasoning here looks like "the market on outcome O was t...
    On Thursday, March 26th, a major new AI documentary is coming out: The AI Doc: Or How I Became an Apocaloptimist. Tickets are on sale now.

    The movie is excellent, and MIRI staff I've spoken with generally believe it belongs in the same tier as If Anyone Builds It, Everyone Dies as an extremely valuable way to alert policymakers and the general public about AI risk, especially if it smashes the box office.

    ...
    I am monitoring surveillance camera V84A. A tall man is walking towards me. He is roughly twenty-five. <faceprint> His name is Damion Prescott. He has a room booked for a whole month. His facial symmetry scores show he is in the 99th percentile. This is in accordance with my holistic impression. <search> School records show both truancy and perfect grades, suggesting high intelligence and disagreeableness. Searching so...
    The world was fair, the mountains tall,
    In Elder Days before the fall
    Of mighty kings in Nargothrond
    And Gondolin, who now beyond
    The Western Seas have passed away:
    The world was fair in Durin's Day.

    J.R.R. Tolkien

    I was never meant to work on AI safety. I was never designed to think about superintelligences and try to steer, influence, or change them. I never particularly enjoyed ...
    One-sentence summary

    I describe the risk of personality self-replicators, the threat of OpenClaw-like agents managing to spread in hard-to-control ways.

    Summary

LLM agents like OpenClaw are defined by a small set of text files and run in an open-source framework which leverages LLMs for cognition. While it is quite difficult for current frontier models to self-replicate, it is much easier for such agents (a...
Note on AI usage: As is my norm, I use LLMs for proofreading, editing, feedback, and research purposes. This essay started off as an entirely human-written draft, and went through multiple cycles of iteration. The primary additions were citations, and I have done my best to personally verify every link and claim. All other observations are entirely autobiographical, albeit written in retrospect. If anyone insists, I can share the ...
    Many people in my intellectual circles use economic abstractions as one of their main tools for reasoning about the world. However, this often leads them to overlook how interventions which promote economic efficiency undermine people's ability to maintain sociopolitical autonomy. By “autonomy” I roughly mean a lack of reliance on others—which we might operationalize as the ability to survive and pursue your plans even when o...
    Content note: nothing in this piece is a prank or jumpscare where I smirkingly reveal you've been reading AI prose all along.

    It's easy to forget this in roarin’ 2026, but homo sapiens are the original vibers. Long before we adapt our behaviors or formal heuristics, human beings can sniff out something sus. And to most human beings, AI prose is something sus.

    If you use AI to write something, people w...
    On Saturday (Feb 28, 2026) I attended my first ever protest. It was jointly organized by PauseAI, Pull the Plug and a handful of other groups I forget. I have mixed feelings about it.

    To be clear about where I stand: I believe that AI labs are worryingly close to developing superintelligence. I won't be shocked if it happens in the next five years, and I'd be surprised if it takes fifty years at current traject...