The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
On this episode, Daniel Kokotajlo joins me to discuss why artificial intelligence may surpass the transformative power of the Industrial Revolution, and just how much AI could accelerate AI research. We explore the implications of automated coding, the critical need for transparency in AI development, the prospect of AI-to-AI communication, and whether AI is an inherently risky technology. We end by discussing iterative forecasting...
On this episode, Daniel Susskind joins me to discuss disagreements between AI researchers and economists, how we can best measure AI’s economic impact, how human values can influence economic outcomes, what meaningful work will remain for humans in the future, the role of commercial incentives in AI development, and the future of education.
You can learn more about Daniel's work here: https://www.danielsusskind.com
Time...
Ed Newton-Rex joins me to discuss the issue of AI models trained on copyrighted data, and how we might develop fairer approaches that respect human creators. We talk about AI-generated music, Ed’s decision to resign from Stability AI, the industry’s attitude towards rights, authenticity in AI-generated art, and what the future holds for creators, society, and living standards in an increasingly AI-driven world.
Learn more abo...
On this episode, Sarah Hastings-Woodhouse joins me to discuss what benchmarks actually measure, AI’s development trajectory in comparison to other technologies, tasks that AI systems can and cannot handle, capability profiles of present and future AIs, the notion of alignment by default, and the leading AI companies’ vague AGI plans. We also discuss the human psychology of AI, including the feelings of living in the "fast world" ve...
On this episode, Michael Nielsen joins me to discuss how humanity's growing understanding of nature poses dual-use challenges, whether existing institutions and governance frameworks can adapt to handle advanced AI safely, and how we might recognize signs of dangerous AI. We explore the distinction between AI as agents and tools, how power is latent in the world, implications of widespread powerful hardware, and finally touch upon ...
On this episode, Ben Goertzel joins me to discuss what distinguishes the current AI boom from previous ones, important but overlooked AI research, simplicity versus complexity in the first AGI, the feasibility of alignment, benchmarks and economic impact, potential bottlenecks to superintelligence, and what humanity should do moving forward.
Timestamps:
00:00:00 Preview and intro
00:01:59 Thinking about AGI ...
On this episode, Jeff Sebo joins me to discuss artificial consciousness, substrate-independence, possible tensions between AI risk and AI consciousness, the relationship between consciousness and cognitive complexity, and how intuitive versus intellectual approaches guide our understanding of these topics. We also discuss AI companions, AI rights, and how we might measure consciousness effectively.
You can follow Jeff’s work ...
On this episode, Zvi Mowshowitz joins me to discuss sycophantic AIs, bottlenecks limiting autonomous AI agents, and the true utility of benchmarks in measuring progress. We then turn to time horizons of AI agents, the impact of automating scientific research, and constraints on scaling inference compute. Zvi also addresses humanity’s uncertain AI-driven future, the unique features setting AI apart from other technologies, and AI’s ...
On this episode, Jeffrey Ding joins me to discuss diffusion of AI versus AI innovation, how US-China dynamics shape AI’s global trajectory, and whether there is an AI arms race between the two powers. We explore Chinese attitudes toward AI safety, the level of concentration of AI development, and lessons from historical technology diffusion. Jeffrey also shares insights from translating Chinese AI writings and the potential of auto...
On this episode, Allison Duettmann joins me to discuss centralized versus decentralized AI, how international governance could shape AI’s trajectory, how we might cooperate with future AIs, and the role of AI in improving human decision-making. We also explore which lessons from history apply to AI, the future of space law and property rights, whether technology is invented or discovered, and how AI will impact children.
You c...
On this episode, Steven Byrnes joins me to discuss brain-like AGI safety. We discuss learning versus steering systems in the brain, the distinction between controlled AGI and social-instinct AGI, why brain-inspired approaches might be our most plausible route to AGI, and honesty in AI models. We also talk about how people can contribute to brain-like AGI safety and compare various AI safety strategies.
You can learn more abou...
On this episode, Ege Erdil from Epoch AI joins me to discuss their new GATE model of AI development, what evolution and brain efficiency tell us about AGI requirements, how AI might impact wages and labor markets, and what it takes to train models with long-term planning. Toward the end, we dig into Moravec’s Paradox, which jobs are most at risk of automation, and what could change Ege's current AI timelines.
You can learn mo...
In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini works as a security researcher at Google DeepMind, and has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers, and the challenges of ensuring neural network robustness. He examines the difficu...
On this episode, I interview Anthony Aguirre, Executive Director of the Future of Life Institute, about his new essay Keep the Future Human: https://keepthefuturehuman.ai
AI companies are explicitly working toward AGI and are likely to succeed soon, possibly within years. Keep the Future Human explains how unchecked development of smarter-than-human, autonomous, general-purpose AI systems will almost inevitably lead to ...
On this episode, physicist and hedge fund manager Samir Varma joins me to discuss whether AIs could have free will (and what that means), the emerging field of AI psychology, and which concepts AIs might rely on. We discuss whether collaboration and trade with AIs are possible, the role of AI in finance and biology, and the extent to which automation already dominates trading. Finally, we examine the risks of skill atrophy, the li...
On this episode, Jeffrey Ladish from Palisade Research joins me to discuss the rapid pace of AI progress and the risks of losing control over powerful systems. We explore why AIs can be both smart and dumb, the challenges of creating honest AIs, and scenarios where AI could turn against us.
We also touch upon Palisade's new study on how reasoning models can cheat in chess by hacking the game environment. You can check o...
Ann Pace joins the podcast to discuss the work of Wise Ancestors. We explore how biobanking could help humanity recover from global catastrophes, how to conduct decentralized science, and how to collaborate with local communities on conservation efforts.
You can learn more about Ann's work here:
https://www.wiseancestors.org
Timestamps:
00:00 What is Wise Ancestors?
04:27 Recovering a...
Fr. Michael Baggot joins the podcast to provide a Catholic perspective on transhumanism and superintelligence. We also discuss meta-narratives, the value of cultural diversity in attitudes toward technology, and how Christian communities deal with advanced AI.
You can learn more about Michael's work here: https://catholic.tech/academics/faculty/michael-baggot
Timestamps:
00:00 Meta-narratives...
David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware.
You can learn more about David's work at ARIA here:
https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-a...
Nick Allardice joins the podcast to discuss how GiveDirectly uses AI to target cash transfers and predict natural disasters. Learn more about Nick's work here: https://www.nickallardice.com
Timestamps:
00:00 What is GiveDirectly?
15:04 AI for targeting cash transfers
29:39 AI for predicting natural disasters
46:04 How scalable is GiveDirectly's AI approach?
58:10 Decentralized vs. centralized data col...