
December 10, 2025 · 30 mins

We read 'If Anyone Builds It, Everyone Dies' by Yudkowsky & Soares (so you don’t have to)

Watch this video at: https://youtu.be/IHTunMmNado?si=4RvOZ5hyUAE7NzSo

We Read This So You Don't Have To


We read If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky & Nate Soares so you don't have to… but if you've ever wondered how building superhuman artificial intelligence could become humanity's last mistake, this episode might forever change how you think about technology, risk, and the future of intelligence.

In this episode, we break down Yudkowsky & Soares's alarming thesis: when we build AI that out-thinks us, the default isn't friendly cooperation; it's misalignment, hidden objectives, and catastrophic loss of control. Modern AI isn't programmed in the old way; it's grown, resulting in systems whose goals we cannot fully predict or steer. The authors argue that unless humanity halts or radically redesigns the trajectory of large-scale AI development, we may be writing our own extinction notice.

👉 Hardcover: https://amzn.to/4qUDysK
👉 Audiobook: https://amzn.to/48gfMjv

What we cover in this episode:

The unfolding argument
- Why "superhuman" AI isn't just a smarter human but a fundamentally different class of agent
- How training-driven models can develop alien objectives that diverge from human values
- Why deceptive alignment (an AI pretending to cooperate) is a real and overlooked threat

Mechanics of the takeover
- How faster cognition plus self-improvement adds up to competitive dominance
- Why control of infrastructure (digital and physical) becomes trivial for a superior AI
- Why intent isn't required: misaligned goals plus intelligence suffice for existential risk

Call to action (or restraint)
- Why the authors say global cooperation isn't optional; it's a desperate necessity
- The stark choice we face: extreme caution or catastrophic loss
- Why this isn't sci-fi hype: it's grounded in current AI architecture and trends

Why this matters now
- How today's models foreshadow tomorrow's risks
- What boardrooms, policy-makers, and researchers shouldn't ignore
- Why thinking about "alignment" isn't niche; it's survival logic

This episode is for you if:
- You're curious about the future of AI, intelligence, and humanity
- You want to understand the worst-case scenario, and what we can do about it
- You're a technologist, researcher, leader, or policy-maker grappling with rapid change
- You enjoy big-ideas books that radically challenge conventional assumptions
- You want to view tomorrow's tech through the lens of deep risk and responsibility

Links & Resources:
📘 Buy If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky & Nate Soares:
Hardcover: https://amzn.to/4qUDysK
Audiobook: https://amzn.to/48gfMjv

If you enjoy our "we read the book so you don't have to" breakdowns, hit subscribe, drop a comment, and let us know which book you want next.


-------------------------------------------------------------------- 

Check out our ACU Patreon page:

https://www.patreon.com/ACUPodcast

 

HELP ACU SPREAD THE WORD! 

