Future of Life Institute Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work consists of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

Episodes

May 26, 2023 102 mins
Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should accept from emerging technologies. You can read more about Roman's work at http://cecs.louisville.edu/ry/ Timestamps: 00:00 Objections to AI safety 15:06 Will robots make AI risks salient? 27:51 Was early AI safety research useful? 37:28 Impossibility results for AI 47:25 How much...
Nathan Labenz joins the podcast to discuss the economic effects of AI on growth, productivity, and employment. We also talk about whether AI might have catastrophic effects on the world. You can read more about Nathan's work at https://www.cognitiverevolution.ai Timestamps: 00:00 Economic transformation from AI 11:15 Productivity increases from technology 17:44 AI effects on employment 28:43 Life without jobs 38:42 Losing co...
Nathan Labenz joins the podcast to discuss the cognitive revolution, his experience red teaming GPT-4, and the potential near-term dangers of AI. You can read more about Nathan's work at https://www.cognitiverevolution.ai Timestamps: 00:00 The cognitive revolution 07:47 Red teaming GPT-4 24:00 Coming to believe in transformative AI 30:14 Is AI depth or breadth most impressive? 42:52 Potential near-term dangers from AI Soc...
Maryanna Saenko joins the podcast to discuss how venture capital works, how to fund innovation, and what the fields of investing and philanthropy could learn from each other. You can read more about Maryanna's work at https://future.ventures Timestamps: 00:00 How does venture capital work? 09:01 Failure and success for startups 13:22 Is overconfidence necessary? 19:20 Repeat entrepreneurs 24:38 Long-term investing 30:36 Fee...
Connor Leahy joins the podcast to discuss the state of AI. Which labs are in front? Which alignment solutions might work? How will the public react to more capable AI? You can read more about Connor's work at https://conjecture.dev Timestamps: 00:00 Landscape of AI research labs 10:13 Is AGI a useful term? 13:31 AI predictions 17:56 Reinforcement learning from human feedback 29:53 Mechanistic interpretability 33:37 Yudko...
Connor Leahy joins the podcast to discuss GPT-4, magic, cognitive emulation, demand for human-like AI, and aligning superintelligence. You can read more about Connor's work at https://conjecture.dev Timestamps: 00:00 GPT-4 16:35 "Magic" in machine learning 27:43 Cognitive emulations 38:00 Machine learning vs explainability 48:00 Human data = human AI? 1:00:07 Analogies for cognitive emulations 1:26:03 Demand for human-like ...
April 6, 2023 50 mins
Lennart Heim joins the podcast to discuss options for governing the compute used by AI labs and potential problems with this approach to AI safety. You can read more about Lennart's work here: https://heim.xyz/about/ Timestamps: 00:00 Introduction 00:37 AI risk 03:33 Why focus on compute? 11:27 Monitoring compute 20:30 Restricting compute 26:54 Subsidising compute 34:00 Compute as a bottleneck 38:41 US and China 42:14 Unin...
Lennart Heim joins the podcast to discuss how we can forecast AI progress by researching AI hardware. You can read more about Lennart's work here: https://heim.xyz/about/ Timestamps: 00:00 Introduction 01:00 The AI triad 06:26 Modern chip production 15:54 Forecasting AI with compute 27:18 Running out of data? 32:37 Three eras of AI training 37:58 Next chip paradigm 44:21 AI takeoff speeds Social Media Links: ➡️ WEBSITE: ...
Liv Boeree joins the podcast to discuss poker, GPT-4, human-AI interaction, whether this is the most important century, and building a dataset of human wisdom. You can read more about Liv's work here: https://livboeree.com Timestamps: 00:00 Introduction 00:36 AI in Poker 09:35 Game-playing AI 13:45 GPT-4 and generative AI 26:41 Human-AI interaction 32:05 AI arms race risks 39:32 Most important century? 42:36 Diminishing ret...
Liv Boeree joins the podcast to discuss Moloch, beauty filters, game theory, institutional change, and artificial intelligence. You can read more about Liv's work here: https://livboeree.com Timestamps: 00:00 Introduction 01:57 What is Moloch? 04:13 Beauty filters 10:06 Science citations 15:18 Resisting Moloch 20:51 New institutions 26:02 Moloch and WinWin 28:41 Changing systems 33:37 Artificial intelligence 39:14 AI ac...
Tobias Baumann joins the podcast to discuss suffering risks, space colonization, and cooperative artificial intelligence. You can read more about Tobias' work here: https://centerforreducingsuffering.org. Timestamps: 00:00 Suffering risks 02:50 Space colonization 10:12 Moral circle expansion 19:14 Cooperative artificial intelligence 36:19 Influencing governments 39:34 Can we reduce suffering? Social Media Links: ➡️ WEBSI...
Tobias Baumann joins the podcast to discuss suffering risks, artificial sentience, and the problem of knowing which actions reduce suffering in the long-term future. You can read more about Tobias' work here: https://centerforreducingsuffering.org. Timestamps: 00:00 Introduction 00:52 What are suffering risks? 05:40 Artificial sentience 17:18 Is reducing suffering hopelessly difficult? 26:06 Can we know how to reduce sufferi...
Neel Nanda joins the podcast for a lightning round on mathematics, technological progress, aging, living up to our values, and generative AI. You can find his blog here: https://www.neelnanda.io Timestamps: 00:00 Introduction 00:55 How useful is advanced mathematics? 02:24 Will AI replace mathematicians? 03:28 What are the key drivers of tech progress? 04:13 What scientific discovery would disrupt Neel's worldview? 05:59 How...
Neel Nanda joins the podcast to talk about mechanistic interpretability and how it can make AI safer. Neel is an independent AI safety researcher. You can find his blog here: https://www.neelnanda.io Timestamps: 00:00 Introduction 00:46 How early is the field of mechanistic interpretability? 03:12 Why should we care about mechanistic interpretability? 06:38 What are some successes in mechanistic interpretability? 16:29 How promi...
Neel Nanda joins the podcast to explain how we can understand neural networks using mechanistic interpretability. Neel is an independent AI safety researcher. You can find his blog here: https://www.neelnanda.io Timestamps: 00:00 Who is Neel? 04:41 How did Neel choose to work on AI safety? 12:57 What does an AI safety researcher do? 15:53 How analogous are digital neural networks to brains? 21:34 Are neural networks like alie...
Connor Leahy from Conjecture joins the podcast for a lightning round on a variety of topics ranging from aliens to education. Learn more about Connor's work at https://conjecture.dev Social Media Links: ➡️ WEBSITE: https://futureoflife.org ➡️ TWITTER: https://twitter.com/FLIxrisk ➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/ ➡️ META: https://www.facebook.com/futureoflifeinstitute ➡️ LINKEDIN: https://www.linkedin....
Connor Leahy from Conjecture joins the podcast to discuss AI safety, the fragility of the world, slowing down AI development, regulating AI, and the optimal funding model for AI safety research. Learn more about Connor's work at https://conjecture.dev Timestamps: 00:00 Introduction 00:47 What is the best way to understand AI safety? 09:50 Why is the world relatively stable? 15:18 Is the main worry human misuse of AI? 22:47 Ca...
Connor Leahy from Conjecture joins the podcast to discuss AI progress, chimps, memes, and markets. Learn more about Connor's work at https://conjecture.dev Timestamps: 00:00 Introduction 01:00 Defining artificial general intelligence 04:52 What makes humans more powerful than chimps? 17:23 Would AIs have to be social to be intelligent? 20:29 Importing humanity's memes into AIs 23:07 How do we measure progress in AI? 42:39 ...
January 12, 2023 36 mins
On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about regulating AI drug discovery. Timestamps: 00:00 Introduction 00:31 Ethical guidelines and regulation of AI drug discovery 06:11 How do we balance innovation and safety in AI drug discovery? 13:12 Keeping dangerous chemical data safe 21:16 Sean’s personal story of voicing concerns about AI drug discovery 32:06 How Sean will continue working on ...
On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about the dangers of AI drug discovery. They talk about how Sean discovered an extremely toxic chemical (VX) by reversing an AI drug discovery algorithm. Timestamps: 00:00 Introduction 00:46 Sean’s professional journey 03:45 Can computational models replace animal models? 07:24 The risks of AI drug discovery 12:48 Should scientists disclose dangerous...
