The Into AI Safety podcast aims to make it easier for everyone, regardless of background, to get meaningfully involved with the conversations surrounding the rules and regulations which should govern the research, development, deployment, and use of the technologies encompassed by the term "artificial intelligence" or "AI". For better-formatted show notes, additional resources, and more, go to https://into-ai-safety.github.io. For even more content and community engagement, head over to my Patreon at https://www.patreon.com/IntoAISafety
The almost-Dr. Igor Krawczuk joins me for what is the equivalent of four of my previous episodes. We get into all the classics: eugenics, capitalism, philosophical toads... Need I say more?
If you're interested in connecting with Igor, head on over to his website, or check out placeholder for thesis (it isn't published yet).
Because the full show notes have a whopping 115 additional links, I'll highlight some that I think are p...
As always, the best things come in 3s: dimensions, musketeers, pyramids, and... 3 installments of my interview with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT.
As you may have ascertained from the previous two segments of the interview, Dr. Park cofounded StakeOut.AI along with Harry Luk and one other cofounder whose name has been removed due to requirements of her curr...
Join me for round 2 with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. Dr. Park co-founded StakeOut.AI, a non-profit focused on making AI go well for humans, along with Harry Luk and one other individual whose name has been removed due to the requirements of her current position.
In addition to the normal links, I wanted to include the links to the petitions that Dr. ...
UPDATE: Contrary to what I say in this episode, I won't be removing any already-published episodes from the podcast RSS feed.
After getting some advice and reflecting more on my own personal goals, I have decided to shift the direction of the podcast towards accessible content regarding "AI" instead of the show's original focus. I will still be releasing what I am calling research ride-along content to my Patreon, ...
Dr. Peter Park is an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. In conjunction with Harry Luk and one other cofounder, he founded StakeOut.AI, a non-profit focused on making AI go well for humans.
00:54 - Intro
03:15 - Dr. Park, x-risk, and AGI
08:55 - StakeOut.AI
12:05 - Governance scorecard
19:34 - Hollywood webinar
22:02 - Regulations.gov comments
23:48 - Open letter...
Take a trip with me through the paper Large Language Models: A Survey, published on February 9th, 2024. All figures and tables mentioned throughout the episode can be found on the Into AI Safety podcast website.
00:36 - Intro and authors
01:50 - My takes and paper structure
04:40 - Getting to LLMs
07:27 - Defining LLMs & emergence
12:12 - Overview of PLMs
15:00 - How LLMs are built
18:52 - Limitation...
Esben reviews an application that I would soon submit for Open Philanthropy's Career Transition Funding opportunity. Although I didn't end up receiving the funding, I do think this episode can be a valuable resource, both for others and for myself, when applying for funding in the future.
Head over to Apart Research's website to check out their work, or the Alignment Jam website for information on upcoming hackathons.
Before I begin with the paper-distillation-based minisodes, I figured we would go over best practices for reading research papers. I go through the anatomy of a typical paper and offer some generally applicable advice.
00:56 - Anatomy of a paper
02:38 - Most common advice
05:24 - Reading sparsity and path
07:30 - Notes and motivation
Links to all articles/papers which are mentioned throughout the episode can be found belo...
Join our hackathon group for the second episode in the Evals November 2023 Hackathon subseries. In this episode, we solidify our goals for the hackathon after some preliminary experimentation and ideation.
Check out Stellaric's website, or follow them on Twitter.
01:53 - Meeting starts
05:05 - Pitch: extension of locked models
23:23 - Pitch: retroactive holdout datasets
34:04 - Preliminary results
37:44 - Next st...
I provide my thoughts and recommendations regarding personal professional portfolios.
00:35 - Intro to portfolios
01:42 - Modern portfolios
02:27 - What to include
04:38 - Importance of visual
05:50 - The "About" page
06:25 - Tools
08:12 - Future of "Minisodes"
Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance.
Darryl and I discuss his background, how he became interested in machine learning, and a project we are currently working on that investigates penalizing polysemanticity during the training of neural networks.
Check out a diagram of the decoder task used for our research!
01:46 - Interview begins
02:14 - Supernovae classification
08:58 - Penalizing polysemanticity
20:58 - Our "toy model"
30:06 - Task descrip...
A summary of, and reflections on, the path I have taken to get this podcast started, including some resource recommendations for others who want to do something similar.
Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance.
This episode kicks off our first subseries, which will consist of recordings taken during my team's meetings for the Alignment Jam Evals Hackathon in November of 2023. Our team won first place, so you'll be listening in on a process that, at the end of the day, turned out pretty well.
Check out Apart Research, the group that runs the Alignment Jam hackathons.
Links to all articles/papers which are mentioned thr...
In this minisode I give some tips for staying up-to-date in the ever-changing landscape of AI. I would like to point out that I am constantly iterating on these strategies, tools, and sources, so it is likely that I will make an update episode in the future.
Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance.
Alice Rigg, a mechanistic interpretability researcher from Ottawa, Canada, joins me to discuss their path and the application process for research/mentorship programs.
Join the Mech Interp Discord server and attend reading groups at 11:00am on Wednesdays (Mountain Time)!
Check out Alice's website.
Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance.
We're back after a month-long hiatus with a podcast refactor and advice on the application process for research/mentorship programs.
Check out the About page on the Into AI Safety website for a summary of the logistics updates.
Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance.
This episode is a brief overview of the major takeaways I had from attending EAG Boston 2023, and an update on my plans for the podcast moving forward.
TL;DL
In this episode I discuss my initial research proposal for the 2024 Winter AI Safety Camp with Remmelt Ellen, one of the individuals who help facilitate the program.
The proposal is titled The Effect of Machine Learning on Bioengineered Pandemic Risk. A doc-capsule of the proposal at the time of this recording can be found at this link.
Links to all articles/papers which are mentioned throughout the episode can be found bel...
Welcome to the Into AI Safety podcast! In this episode I provide reasoning for why I am starting this podcast, what I am trying to accomplish with it, and a little bit of background on how I got here.
Please email all inquiries and suggestions to intoaisafety@gmail.com.