Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Kayla (00:00):
Longtermism is the view that positively influencing the long-term future is a key moral priority of our time. It's about taking seriously the sheer scale of the future and how high the stakes might be in shaping it. It means thinking about the challenges we might face in our lifetimes that could impact civilization's whole trajectory, and taking action to benefit not just the present generation, but all generations to come. Okay, we are back with Cult or Just Weird, while we make our way through the TESCREAL bundle, an acronym referring to the prominent futurist ideologies currently defining fucking Silicon Valley, of all things.
Chris (00:47):
That must be the fastest we've ever started talking about the actual topic.
Kayla (00:53):
Oh, I haven't started the topic.
Chris (00:54):
It was instantly.
Kayla (00:55):
This is just for people who maybe haven't listened to previous episodes and are just tuning in, just catching them up to speed on what we're talking about. Essentially, what we're talking about is how this is all a cope for our innate and deeply human fear of death and whether all this stuff is a cult or if it's just weird.
Chris (01:13):
Yeah, I'm still impressed, though. I don't know. Cause it's usually. Usually we start with, how was your day today? Did you have a good day? Did I have a good day, too?
Kayla (01:21):
We'll get to that. We'll get to that right here. Cause we're doing our introductions.
Chris (01:24):
Oh, right. Okay.
Kayla (01:25):
I'm Kayla. I'm a television writer. Fear of death enthusiast, probably a lot of other things. Thanks for listening to Cult or Just Weird. Who are you?
Chris (01:33):
I'm Chris. I make games. I do podcasts. I sometimes look at data.
Kayla (01:37):
If you're listening to the show, you are currently supporting the show, and we really appreciate that. If you'd like to support us further, you can go to patreon.com/cultorjustweird. And if you'd like to talk more about any of the show's topics, you can find us on Discord, linked in the show notes. Speaking of our Patreon, we actually have two new patrons to shout out this week.
Chris (01:56):
Yes.
Kayla (01:57):
So thank you so much to Karen and Jim for joining our Patreon. We hope you enjoy the outtakes and the polls and some of the other stuff we've got going on over there.
Chris (02:08):
Our outtakes are free.
Kayla (02:10):
The outtakes are free.
Chris (02:12):
But hey, you know what? That makes the top of the funnel really wide, because everybody listening right now can just go on over to our Patreon, listen to outtakes.
Kayla (02:19):
You can hear our cats. You can hear us burping. Motorcycles, a lot of motorcycles. It's a fun time. Swear words, definitely swears, which we do not do on the show.
Chris (02:29):
Fuck no.
Kayla (02:30):
That was really good.
Chris (02:31):
Thanks. That was. Yeah. Classic. Classic. I have one more bit of business, actually.
Kayla (02:36):
Business? Us?
Chris (02:37):
We have transcripts now.
Kayla (02:40):
Ooh, podcast transcripts.
Chris (02:42):
Finally. I know, I was like, oh, it only took us six seasons, but we do. So if you are listening to this and are unable to hear, then go on over to our website. Actually, the transcripts should be available wherever the podcast is available, but I know for sure they're also on our website, where the episodes live, and you can read episodes instead of listening to them.
Kayla (03:04):
Or at the same time, if you are a person like me who has to have the subtitles on while you watch television.
Chris (03:09):
That's right. It actually technically is a subtitle file.
Kayla (03:13):
Cool.
Chris (03:14):
Which I thought would make a difference on YouTube, but YouTube already subtitled it.
Kayla (03:18):
YouTube does already subtitle it. Okay, well, go check out our transcripts. Enjoy. We hope it makes the show more accessible to more people. Are you ready to jump into today's topic?
Chris (03:28):
I'm already ready already.
Kayla (03:30):
So last week. I think you made that joke last week, actually.
Chris (03:32):
Did I? Okay, well then I'm not gonna do it again.
Kayla (03:34):
Well, no, we're keeping it.
Chris (03:35):
I have to cut it. Please.
Kayla (03:36):
Last week we talked about the C in TESCREAL: cosmism. We've gone a little bit out of order on the acronym so far.
Chris (03:43):
Oh, we've been way out of order.
Kayla (03:44):
But now we're finally tackling the last two letters, the EA and the L: effective altruism and longtermism.
Chris (03:52):
Okay, I have a problem with the EA. Every other letter in TESCREAL I know is just one thing. And EA, for some reason, gets two letters in TESCREAL. Come on.
Kayla (04:00):
I mean, it is two words. Everything else is just one word. I guess we've touched on the EA and the L a little bit as we've gone through these last 18 episodes. Obviously you talked about it with Dr. Émile Torres in the TESCREAL episodes. A lot of this stuff came up in the rationalism episodes. We've kicked the tires, so to speak. So now it's time for us to look under the hood and really get to know what these letters stand for.
Chris (04:27):
Part of my understanding, actually, of why Dr. Torres and Dr. Gebru created the TESCREAL acronym in the first place was because it's impossible to talk about one thing without at least touching on another. So I think it kind of makes sense that we've already sort of bumped into pretty much everything that we're gonna be talking about today. You can't.
Kayla (04:49):
It's like wading through a pool full of corpses. I don't know why. That was my.
Chris (04:52):
Wow. Is that your go to? I was gonna say it's like a cork board with yarn, but I guess corpses is good, too.
Kayla (04:59):
I guess, like, you know why that was.
Chris (05:03):
Dude, you are morbid.
Kayla (05:04):
Cause, like, if you're wading through a pool of corpses, you'd, like, keep bumping into them.
Chris (05:08):
Oh, okay. Yeah, I guess in your mind, that would be the thing that you'd think of first.
Kayla (05:13):
I'm sorry, everyone.
Chris (05:16):
No, you're not.
Kayla (05:17):
So first, let's talk about. I just have death on the brain because this is the death season, even though we're talking about AI. First: effective altruism.
Chris (05:27):
Yes.
Kayla (05:28):
A lot of our listeners might already know a little bit about EA, even outside of our podcast, because of the whole Sam Bankman-Fried FTX fiasco that unfolded in 2022, which we will get deeper into. But the short version is that Sam Bankman-Fried, known widely as SBF, was a cryptocurrency entrepreneur. He founded a cryptocurrency exchange called FTX, made a shit ton of money, and then got arrested and jailed for, like, a bunch of fraud-related crimes. And I think generally, investors, like, lost a bunch of money. But before he got in trouble, SBF was a big effective altruism guy, donated to a number of EA causes before his downfall. And so it was, like, kind of a big deal in the news at the time.
(06:09):
And everybody, a lot of the news was talking about his EA connections, and that kind of helped bring EA into the mainstream.
Chris (06:16):
So can you help me clarify? Because I think I had this notion, but I'd never really, like, explicitly clarified it. So FTX, which is Sam Bankman-Fried's cryptocurrency exchange, that didn't in and of itself have anything to do with effective altruism, but he himself, as a person, was a big advocate for EA. And then that's what made EA. So, like, when FTX fell through and Sam Bankman-Fried turned out to be a giant fraud, that's the thing that tarnished the EA image, even though FTX wasn't itself about EA, right?
Kayla (06:53):
As far as I know, and we'll probably talk more about Sam Bankman-Fried on the next episode rather than this episode, so hold anything we say here with a little bit of a grain of salt. As far as I know, FTX was just a cryptocurrency exchange. So I don't think it was about EA, but he himself was like, he made a shit ton of money. He was an extraordinarily wealthy person and.
Chris (07:15):
Was a big, like, did he make the money?
Kayla (07:18):
EA? Well, the money was there, and it was in his name.
Chris (07:21):
He acquired money.
Kayla (07:22):
Money came to be. And he, as a Silicon Valley guy, was, like, a powerful enough figure that he was, like, getting people into EA.
Chris (07:33):
Got it.
Kayla (07:33):
And spreading the word about EA, kind of thing.
Chris (07:35):
Okay.
Kayla (07:36):
As far as I know. And again, we'll talk more about it.
Chris (07:38):
No, that makes sense. A little bit later. I was like, when that first. When the news first broke on all this stuff, I was just a little confused. Cause I was like, is he in charge of some EA organization, or is it just. So it sounds like it was mainly his own personal charisma that was driving that.
Kayla (07:53):
Yeah, he was just a TESCREAList.
Chris (07:55):
Right. Okay.
Kayla (07:56):
But effective altruism has a deeper history than just SBF. It's actually been around as a concept for over a decade. So let's go back to the beginning. Over a decade doesn't sound like that long.
Chris (08:07):
No, dude, these days, ten years. It is ten years. And not even just these days, but in the thing we're talking about, ten years is forever.
Kayla (08:16):
It's more than ten years.
Chris (08:18):
Jeez.
Kayla (08:19):
I think some of the earliest stuff we're talking about is, like, 2000.
Chris (08:22):
Wow.
Kayla (08:23):
And that's, like, ancient.
Chris (08:24):
That is super ancient. That's back when Eliezer Yudkowsky was predicting the end of the world in 2008.
Kayla (08:30):
In 2011, before the world ended, an organization called Giving What We Can and an organization called 80,000 Hours decided to merge into a joint effort. Giving What We Can had been founded at Oxford University just two years prior, headed up by philosopher Toby Ord, his wife and physician.
Chris (08:46):
Pondering my ord.
Kayla (08:48):
Pondering my ord. His wife and physician-in-training, Bernadette Young, and philosopher William MacAskill. I'm pausing here because I don't know how much I want to say about William MacAskill in this episode or save it for the next episode. I have so many thoughts and feelings about William MacAskill.
Chris (09:05):
You're bringing up the usual suspects here.
Kayla (09:07):
These are the usual suspects of TESCREAL, and specifically of the EA and the L. Members of Giving What We Can pledged to give 10% of their income or more to, quote unquote, effective charities, which at the time were largely focused on alleviating global poverty. 80,000 Hours was a nonprofit focused on researching what careers are the most, quote unquote, effective in terms of positive social impact. Like, 80,000 hours refers to the average amount of time a person will spend in their career.
Chris (09:34):
Oh, you just poked a neuron. I feel like I remember 80,000 hours now.
Kayla (09:40):
There you go. I do remember that philosopher William MacAskill was also one of its founders. And, like, this guy was, like, okay, how many years ago was 2011? What's 37 minus 13? 24? Yeah, this guy's, like, 24 at the time.
Chris (09:57):
I hate math. Don't make me do math.
Kayla (09:59):
When the two organizations merged, the members voted on a new name, and the Center for Effective Altruism was born. The convergence and kind of, like, introduction of the phrase effective altruism to describe the kind of ethical approaches taken by some philosophers at the time coincided with a couple other things that would eventually kind of fall under either the EA umbrella or at least the wider TESCREAL umbrella. Okay, we're talking charity assessment organizations. I'm gonna, like, hopefully trigger some more neurons for you. GiveWell and Open Philanthropy, which were founded in 2007 and 2017, respectively.
Chris (10:36):
I remember both of those.
Kayla (10:37):
We're, of course, talking LessWrong, the rationalist discussion forum, founded in 2009.
Chris (10:41):
I am trying to forget that one.
Kayla (10:42):
We're talking the Singularity Institute, founded to study the. I think it has a different name now, but at the time, it was the Singularity Institute, and it was founded to study the safety of artificial intelligence.
Chris (10:52):
In 2000. SIAI. Yeah, so that was Eliezer's thing.
Kayla (10:56):
I think it's called something else.
Chris (10:57):
And now it's MIRI.
Kayla (10:58):
MIRI. Thank you.
Chris (10:59):
Machine Intelligence Research Institute.
Kayla (11:01):
And we're also talking about the now-defunct Future of Humanity Institute, founded to study things like existential risk for humanity in 2005.
Chris (11:09):
And that was the Nick Bostrom joint.
Kayla (11:11):
Bostrom joint, which is in Oxford, I think. I may leave that to you to talk about in future episodes, because there's also a lot to say about Nick Bostrom. There's so much left to talk about here.
Chris (11:23):
Too many things.
Kayla (11:24):
Everybody is so scared of dying.
Chris (11:27):
And so am I, by the way. The fall of the Future of Humanity. Wait, what was it? No, not Future Humanity. What was it called? Oh, it was called Future of Humanity. Oh, that's why we named our episodes that. That was only a few months ago. It was, like, April, as of publishing here.
Kayla (11:41):
Yeah, it was April 2024, I believe. More loosely related, there were also followers of this moral philosopher named Peter Singer who also gravitated toward these circles. And Peter Singer, I think, started his publishing in the seventies. So this stuff's been around for a while. All these groups and the people who either belonged to them, believed in them, promoted them, or followed them kind of all got munged together in the mid-aughts and obviously beyond. In 2013, philanthropists hosted the first annual Effective Altruism Global conference, which has taken place every year since. But what exactly is effective altruism? We'll go back to that age-old question: what would you say you do here? William MacAskill, who we've talked about multiple times already, is one of the main architects behind the movement, and he defines EA like this in his essay introducing effective altruism.
(12:36):
Effective altruism is the project of using evidence and reason to figure out how to benefit others as much as possible and taking action on that basis. End quote.
Chris (12:45):
See, again, the first, like, when you first dip your toes into this stuff.
Kayla (12:50):
I think it's noble.
Chris (12:52):
Yeah. I'm like, that sounds great.
Kayla (12:55):
I have to say, I don't have a lot of. I went into this with a real bad attitude, and I came out of it with not a real bad attitude. I kind of turned around on it. I think that maybe next episode, I'm gonna have a bad attitude again.
Chris (13:08):
That's how it goes here, man.
Kayla (13:09):
This episode's kind of like background, and next episode's kind of gonna be more like the poking of the holes.
Chris (13:15):
Yeah, that's how we do things here. That's what we did with. Remember the Hare Krishna episode? The first one was like, wow, that's so neat. They do awesome singing, and the place was cool, and it's, like, cheap, good food. And then the next one was like, murders.
Kayla (13:26):
Yeah, that is a trope on our show. William MacAskill's pinned tweet on Twitter goes a step further: Effective altruism is not a package of particular views. It's about using evidence and careful reasoning to try to do more good. What science is to the pursuit of truth, EA is, or at least aspires to be, to the pursuit of good. End quote.
Chris (13:49):
That's. Man, I like that Easter egg.
Kayla (13:54):
For our listeners who may be into this stuff, I think that quote tweet was in reply to a Steven Pinker tweet about the pitfalls of EA. I'm not gonna talk about Steven Pinker right now, but just an Easter egg for anybody who might be listening and has any opinions about Steven Pinker. Largely, effective altruists work to select the most effective charities to donate to and the most effective careers to dedicate their lives to, either by making the most money so that they can donate more, which is known as, quote unquote, earning to give, or by choosing careers that are focused on the greater good. And as we've learned, this is not really a niche movement. It's fairly widespread across academia and has launched a number of institutes, research centers, advisory organizations, and charities.
(14:38):
It's estimated by EA-critical scholars that EA-based charities have donated at least several hundred million dollars, probably over a billion dollars at this point, to their chosen causes. There's a lot of money here.
Chris (14:51):
I see. Now I'm kind of like, wondering, how are they calculating what is the most good?
Kayla (14:57):
That's why there are research centers and institutes and stuff, is that they have people whose work is to calculate and figure it out and decide and recommend it.
Chris (15:07):
Sounds like utilitarianism, the movement. Like, that's what the whole thing kind of sounds like.
Kayla (15:11):
It is. There are differences that we'll get to, but there are similarities as well.
Chris (15:17):
Right.
Kayla (15:19):
What are some of those chosen causes, by the way? What are EAers donating their money to? The Human Fund? Well, yes, no. They actually, they've got some very specific things. First, before we get into the actual causes, I wanted to note that EA considers something that they call, quote unquote, cause prioritization. So, like, unlike other nonprofits who focus on a single issue, so, like, Susan G. Komen, we all know that's specifically for breast cancer, effective altruists believe the most money should be given to the cause that will do the most good. So there's not, like, there's not a human fund. There's not a, like, we are effective altruism, donate to us, and we'll make the most money for effective altruism. They're like, we're gonna work to figure out where the money needs to go, rather than picking a specific thing.
(16:05):
They also do not subscribe to local ideals of philanthropy. So, like, helping your local community versus helping a community halfway across the world. Like, a lot of nonprofits are very, like, you know, donate to this nonprofit because it helps, like, people in your city, versus donate to EA causes because they help the most people, even if.
Chris (16:26):
It's regardless of where.
Kayla (16:27):
Yeah, right.
Chris (16:28):
Okay.
Kayla (16:29):
Effective. Like I mentioned, effective altruists have organizations specifically for researching and analyzing cause prioritization.
Chris (16:37):
Okay.
Kayla (16:38):
That's the whole thing.
Chris (16:39):
Now, just noting here that I'm skeptical of such activities.
Kayla (16:46):
I might un-skeptic you.
Chris (16:47):
Okay. I have a degree of skepticism going into it.
Kayla (16:50):
I think that you should. And I also think that I went into this being like, you guys don't do anything. And then I went, oh, my God, these guys do quite a bit, actually.
Chris (16:59):
Yeah. I'm not denying that they do a lot of work. I'm sure they do a lot of work. But you know what? I'll let you get to that.
Kayla (17:05):
Well, hold your thoughts. In general, though, to go to the specific causes: EA currently focuses on, as we mentioned, alleviation of global poverty, tropical diseases such as malaria and deworming initiatives, human deworming, and animal welfare. Like, this is a big one. A lot of especially early effective altruists focused on this. And interestingly, a number of EA critics are also animal welfare people, like animal ethics philosophers. Recently there was a book that came out that was, I forget exactly the title, I think I'm linking it in the show notes because I referenced these academics. But there was recently a book of essays that came out criticizing EA, and the three academics were, among other areas of study, animal ethics philosophers.
Chris (17:53):
That's interesting. It surprises me a little bit, because I remember Émile saying in one of our, one part of our interview that, I hate to quote this because I don't remember who he was quoting, but it might have been MacAskill or might have been from somebody in the book that he wrote, and that's why I don't know if it's an EA or EA-ist or a longtermist, but he quoted somebody as saying basically, like, if certain species go extinct, that's fine, because they're not sentient or sapient like we are, so they don't. That would be like a net positive.
Kayla (18:27):
I think that there's some. I think that they have an interesting set of ethics around animals, because it does seem like EAers are very clear that, like, animals are not humans, animals are not sentient. And it also seems like they still can ascribe suffering to animals and say that animals suffer, and so it's better to not cause the suffering of the animals even though they're not sentient. Like, a lot of EA people are vegan and vegetarian. Like, MacAskill, I think, is a vegetarian.
Chris (18:54):
Oh, really?
Kayla (18:55):
Yes. And this is a result specifically of their EA beliefs.
Chris (18:59):
Right. Okay.
Kayla (19:00):
And last on the list of causes: the long-term future and existential risk. They want to make sure we don't do catastrophic shit now that makes life a disaster for potential future humankind.
Chris (19:11):
Okay. Yep. There's the x risk thing.
Kayla (19:14):
First three: relatively mainstream, normal causes. The last one is where we start to tip over into, like, that weirder side of the TESCREAL, as we've already covered. That's where we get into AI risk. How do we save trillions of future humans, even if that means worsening the suffering of billions of current humans? That kind of stuff, right?
Chris (19:33):
That's the l, right?
Kayla (19:35):
In short, longtermism. Yeah, but we're not there yet. We're still talking about effective altruism. I want to talk about how effective effective altruism really is.
Chris (19:45):
Oh, effective. Effective altruism.
Kayla (19:47):
Altruism, which, like, is kind of a difficult thing to measure, because it's such a big thing. And it's already hard to be like, if I donate a million dollars, how much help is this doing?
Chris (19:57):
That's hard to measure. Who affects the effectors?
Kayla (20:00):
But luckily for us, Scott Alexander, a rationalist blogger you may remember from our episodes on LessWrong, has an essay titled "In Continued Defense of Effective Altruism" that does do the work of giving us some hard numbers.
Chris (20:13):
Yeah, he has a bunch of, like, famous, I guess, if you want to say, posts on LessWrong. And he also created Slate Star Codex, which is, like, where part of the rationalist diaspora on the Internet went.
Kayla (20:26):
Now, these numbers were dug up by him, and I do believe that he's done the work to verify this stuff. But I only verified. I verified one of the claims personally, because I'm bad at mathematics, and it checked out. So he claims, and this is the one that I verified, he claims that effective altruism has prevented around 200,000 deaths from malaria, citing a number from the Against Malaria Foundation, or AMF. Okay, so GiveWell, the EA charity assessor we mentioned earlier, identifies the Against Malaria Foundation as one of their top recommendations. Scott Alexander says that GiveWell funds about $90 million of AMF's $100 million revenue. So to quote from Alexander's essay: GiveWell estimates that Malaria Consortium can prevent one death for $5,000, and EA has donated about $100 million per year for several years. So 20,000 lives per year times some number of years.
(21:24):
I have rounded these two sources combined off to 200,000. Side note for me: like, yeah, I saw anywhere between, like, 150,000 to 185,000 to 200,000.
Chris (21:32):
Okay.
Kayla (21:33):
As a sanity check, the malaria death toll declined from about 1 million to 600,000 between 2000 and 2015, mostly because of bed net programs like these, meaning EA-funded donations in their biggest year were responsible for about 10% of the yearly decline. End quote.
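A quick back-of-envelope sketch of the arithmetic being quoted here, using the figures as quoted above ($5,000 per death averted, roughly $100 million per year); the ten-year span is an assumption for illustration, and none of the numbers are independently verified:

```python
# Back-of-envelope version of the malaria math quoted above.
# Figures are taken from the quote (not independently verified):
#   ~$5,000 of bed-net funding per death averted (GiveWell-style estimate)
#   ~$100 million per year in EA-directed donations
#   ~10 years of sustained giving (assumed round number)

cost_per_death_averted = 5_000   # dollars per life saved
annual_donations = 100_000_000   # dollars per year
years_of_giving = 10             # assumption for illustration

deaths_averted_per_year = annual_donations / cost_per_death_averted
total_deaths_averted = deaths_averted_per_year * years_of_giving

print(f"{deaths_averted_per_year:,.0f} deaths averted per year")                 # 20,000
print(f"{total_deaths_averted:,.0f} deaths averted over {years_of_giving} years")  # 200,000
```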
Chris (21:50):
Okay, that sounds good. I know I've heard, like elsewhere, that malaria nets are like a thing, and that's like, you know, an effective thing.
Kayla (22:01):
I remember that being like a big Bill Gates thing, like malaria has been talked about by people with a lot of money that they're looking to donate for a long time. And clearly the deaths have gone down globally and that's a good thing.
Chris (22:14):
Good job. I agree.
Kayla (22:16):
Scott Alexander also has this: effective altruism has treated 25 million cases of chronic parasite infection. These are the numbers that I have not verified.
Chris (22:26):
Okay.
Kayla (22:27):
Given 5 million people access to clean drinking water. Supported clinical trials for a currently approved malaria vaccine and a malaria vaccine also on track for approval. Supported additional research into vaccines for syphilis, malaria, some other things that I don't know, hepatitis C, hepatitis E. Supported teams giving developmental economics advice in Ethiopia, India, Rwanda. Convinced farms to switch 400 million chickens from caged to cage-free. That's where some of the animal ethics stuff comes in. Freed 500,000 pigs from tiny crates where they weren't able to move around, and gotten 3,000 companies, including Pepsi, Kellogg's, CVS, and Whole Foods, to commit to selling low-cruelty meat. Those are all. If we can trace those efforts back to either EA donors or EA charity assessors, that's not small shit. That's big shit.
Chris (23:18):
Big if true.
Kayla (23:19):
Big if true. My next sentence is: now, these are big claims. If you're like me, you might be going, okay, like, are all these things actually effective altruism? Are we just, like, calling some efforts EA because it's easier to, like, absorb something than, like, actually do something? Like, there's a malaria foundation out there that's doing all the work and EA is taking the credit for it?
Chris (23:39):
Yeah, I'm, like, and again, like, on that note, I'm also, like, unclear. Like, there's clearly, there's. GiveWell is an EA-specific organization, but isn't EA more like a movement? So if I work for XYZ charity that's doing the malaria nets, that isn't GiveWell. What did you call it, the name of it? Against Malaria. If I'm working for Against Malaria and I self-identify as an EA, is that being counted?
Kayla (24:07):
Well, I think what Scott Alexander was counting there was the fact that GiveWell is responsible for 90% of the Against Malaria Foundation's funding, and GiveWell is EA, specifically, to him. And I agree that counts as, like, a quote unquote EA effort.
Chris (24:22):
Totally. Yeah. Yeah. Okay.
Kayla (24:24):
He also says this, quote: I'm counting it, and this is of everything he's evaluating here, I'm counting it as an EA accomplishment if EA either provided the funding or did the work. Further explanations in the footnotes. And this is a very well-footnoted essay. Okay. I'm also slightly, this is called TESCREAL, Scott, I'm also slightly conflating EA, rationalism, and AI doomerism rather than doing the hard work of teasing them apart.
Chris (24:51):
See, you can't do it. If only you had the acronym.
Kayla (24:54):
Side note, Alexander does have a section on EA's impact on AI. That's where the AI doomerism comes in. But we're skipping that for now because, again, the hard work of teasing them apart is hard. And for organizational purposes, discussions of AI, to me, fit better in the framework of what we're discussing next, which is longtermism. Why are we hewing longtermism so closely to effective altruism? Why am I doing two letters at once? Again, it's because longtermism essentially grew out of EA. There's a reason why it's the last letter in the TESCREAL bundle and why it follows EA. It's because it's literally a subset or a subculture of effective altruism.
Chris (25:30):
If you take just those, it's eel.
Kayla (25:32):
It's eel. I'm viewing the L as kind of like the final boss of TESCREAL.
Chris (25:38):
Yeah, yeah.
Kayla (25:39):
I'm saying that now, and something worse is going to come along. Not that longtermism is necessarily bad. It's not necessarily bad. And actually, I will say there is another final boss that may or may not come up in the show.
Chris (25:49):
Oh, is this like a secret boss?
Kayla (25:51):
I think there's a hidden boss.
Chris (25:53):
Hidden boss. Cool.
Kayla (25:54):
There's something. I'll just say it here. There's something called effective accelerationism. That's like a movement that's currently taking shape.
Chris (26:01):
Well, now it's not a secret boss anymore.
Kayla (26:03):
And that's the secret boss.
Chris (26:04):
Okay, is this like one of those bosses that is optional, but if you fight it's harder?
Kayla (26:11):
Yes, sure.
Chris (26:12):
Ruby Weapon-ization.
Kayla (26:14):
Effective altruism is one thing. I'm just trying to explain what it is. Effective altruism is like, maybe we shouldn't let AI kill everyone and we should have some safety regulations. And effective accelerationism says, fuck you, no, the only way we can save the world and the future of humanity is if we go pedal to the metal, no regulations on AI, get wrecked. But they're not in the TESCREAL bundle yet.
Chris (26:40):
Mm. They're sort of, like, orbiting around it. By the way, speaking of letters, like, do you know how hard it is for somebody in the video game industry to rework their brain around EA meaning effective altruism and not Electronic Arts?
Kayla (26:56):
I know. Me too. One important thing to know about EA, the movement, not Electronic Arts, is that it's primarily a quote unquote elite movement, meaning that it originated in high-status educational institutions and appeals directly to the very wealthy. Obviously, it's all about, like, give a lot of your money, earn to give, make a lot of money so you can give it. And it has therefore become.
Chris (27:18):
Alleviate your guilt.
Kayla (27:19):
Yeah. It's therefore become very pervasive in Silicon Valley culture. And that's where the longtermist subculture incubated and hatched. To define longtermism more deeply, we'll go back to MacAskill again, quote: longtermism is the view that positively influencing the long-term future is a key moral priority of our time. It's about taking seriously the sheer scale of the future and how high the stakes might be in shaping it. It means thinking about the challenges we might face in our lifetimes that could impact civilization's whole trajectory, and taking action to benefit not just the present generation, but all generations to come.
Chris (27:53):
Okay. Like, again, like with every other letter on the intro bit, I'm sort of on board.
Kayla (28:01):
Yeah. It's the argument for climate change.
Chris (28:03):
Right, right. There's just a lot of broadness and assumptions there about when you say long term future, how long? What do you mean?
Kayla (28:13):
Who. Who is a good question. In his recent book, What We Owe the Future, MacAskill breaks it down further. And then Wikipedia pulled a great quote, so I didn't have to do the hard work of going and checking the book out from the library.
Chris (28:25):
Thanks, Jimmy Wales.
Kayla (28:26):
Wikipedia describes the book as such: his argument has three parts. First, future people count morally as much as the people alive today.
Chris (28:35):
All right, now I'm off.
Kayla (28:36):
Second, the future is immense because humanity may survive for a very long time. And third, the future could be very good or very bad, and our actions could make the difference. End quote.
Chris (28:46):
Okay. Yeah. Two and three seem alright. I don't know about the valuing the future humans just as much as existing humans.
Kayla (28:56):
I got a problem with that one.
Chris (28:58):
That is like mad speculative.
Kayla (28:59):
I got a problem with that one. Yeah, I'm gonna not talk about my problems with that one yet. I'm gonna hold off.
Chris (29:05):
You're just gonna say it. You're just gonna tease it.
Kayla (29:09):
I just. This episode again, is more for like information and background. And the next episode is the color episode where I get to go like, I think that this is dumb.
Chris (29:16):
Oh, that's my favorite part.
Kayla (29:17):
I know. If you'll remember from previous episodes, this boils down to, quote, bringing more happy people into existence is good, all other things being equal. Longtermists are generally focused on existential risks and preventing the destruction of humanity. Which is a good thing.
Chris (29:31):
It's a good thing. I can't disagree with that. As broadly as it's stated.
Kayla (29:34):
I'm back around on longtermism after this episode. There's problems, there's problems. But also, fearing climate change and wanting to fix it, that is a.
Chris (29:44):
Longtermist issue, if that's what. For the longtermists that care about that kind of thing, I agree with you.
Kayla (29:50):
A lot of them do. A lot of them do. Okay, existential risk. I keep bringing up climate change, but this can also cover nuclear war, pandemics, global totalitarianism, and then, of course, the weirder stuff like nanotechnology and the grey goose stuff, and artificial intelligence. AI, AGI, that stuff.
Chris (30:10):
Grey goose is good.
Kayla (30:12):
Grey goo. Grey goo. The nanobots just turn everything into grey goo, not into vodka. Yeah. Longtermists seek to reduce these risks so that we can improve the number and quality of future lives over long time scales. They also believe that human. The reason why this is, like, important to them now is they believe that humanity is currently at a critical inflection point where what we do now determines the ultimate future of humanity, which has.
Chris (30:36):
Never been true before.
Kayla (30:38):
It's. I'm. I don't think they're totally right, but I also don't think they're totally wrong.
Chris (30:43):
Yeah.
Kayla (30:44):
If you look, especially, again, climate change. If you look at climate change and we hear all the time, like, if we don't get our emissions down, then it's gonna be ruining the world forever.
Chris (30:50):
My only joke there was, at all points in time, humanity is affecting what comes after us.
Kayla (30:57):
Yes, you're right.
Chris (30:59):
But, but we're extra special. You're totally right.
Kayla (31:02):
Yeah, I think we're extra special. I think that. I think that. I can't argue with the climate change thing. We are extra special in that.
Chris (31:09):
Yes. And also, it's not. Climate change isn't the first environmental catastrophe that we've had to contend with.
Kayla (31:15):
Oh, really?
Chris (31:16):
Yeah.
Kayla (31:18):
You sound like a climate change denier.
Chris (31:21):
No, I'm not saying it's. It's not the first man made environmental.
Kayla (31:25):
We all know.
Chris (31:26):
Just don't be upset that you're. You're taking the L here. You're doing the L episode.
Kayla (31:30):
There absolutely is no L here for.
Chris (31:32):
Me to take. All kinds of L's. It's raining L's.
Kayla (31:35):
But again, we go back to the question, what would you say you do here? And then we go back to Scott Alexander's article on the effectiveness of these movements. And I'm going to now focus on the AI section, because, again, that's such a big subset for longtermists. So, quoting from Scott Alexander's article, things that they have done include: founded the field of AI safety and incubated it from nothing up until the point where many people are talking about this, endorsing it. We've got Sam Altman, which, oh boy, do we need to talk about that next episode. We've got Bill Gates, we've got big names, and even, I think, the US government, all talking about AI safety, right?
Chris (32:16):
We have enough of a notion of it that Andreessen Horowitz can just steamroll right over.
Kayla (32:22):
He's an e/acc guy.
Chris (32:23):
I know.
Kayla (32:24):
Another thing is, EA helped convince OpenAI to dedicate 20% of company resources to a team working on aligning future superintelligences. They've gotten major AI companies, including OpenAI, to work with ARC Evals and evaluate their models for dangerous behavior before releasing them. They became so influential in AI-related legislation that Politico accuses effective altruists of having, quote, taken over Washington, and, quote, largely dominating the UK's efforts to regulate advanced AI.
Chris (32:53):
Ooh, that's some language.
Kayla (32:56):
They helped the British government create its Frontier AI Taskforce. And I like this assertion from Scott Alexander: won the PR war. A recent poll shows that 70% of US voters believe that mitigating extinction risk from AI should be a, quote, global priority.
Chris (33:13):
Wonder where that poll came from.
Kayla (33:15):
I believe that quote comes from the Artificial Intelligence Policy Institute, or AIPI.
Chris (33:21):
Okay, so they did some polling.
Kayla (33:22):
Did some polling. It was conducted by YouGov.
Chris (33:26):
It was conducted by the T-101.
Kayla (33:28):
It was definitely conducted by.
Chris (33:30):
It came door to door. Hello. Are you afraid of my metal body?
Kayla (33:39):
And it's the ones that say no you really got to watch out for. A couple non-AI but still longtermist-related wins: helped organize the Secure DNA consortium, which helps DNA synthesis companies figure out what their customers are requesting and avoid accidentally selling bioweapons to terrorists.
Chris (33:57):
That's good.
Kayla (33:58):
Yeah. That's also, like, a thing that people buy on the dark web. I watched this show on Netflix that I told you about. Remember the roommate from hell or whatever that show was called?
Chris (34:08):
Oh, yeah.
Kayla (34:09):
And one of the people had a roommate that was constantly trying to poison and kill her. And she ordered. She didn't order staph infection. She ordered a worse, unsurvivable version of staph infection off of the dark web.
Chris (34:23):
Jesus Christ.
Kayla (34:23):
And, like, luckily the FBI found it or something.
Chris (34:27):
Don't do that. Don't do that, don't.
Kayla (34:29):
They also provided a significant fraction of all funding for DC groups trying to lower the risk of nuclear war.
Chris (34:34):
Okay, that's a good one.
Kayla (34:36):
They donated tens of millions of dollars to pandemic preparedness causes years before COVID and positively influenced some countries' COVID policies.
Chris (34:44):
Okay.
Kayla (34:45):
And again, these are claims from Scott Alexander. You know, take everything with a little bit of a grain of salt, but these are EA and longtermist causes and things that they're talking about, thinking about, saying we should donate our time, attention, and money to.
Chris (34:58):
All right, keeping your Scott Alexander hat on, what do you think he would say to Eliezer Yudkowsky's thing where he's like, it's okay if we get into a global thermonuclear war, if it prevents AI catastrophe?
Kayla (35:11):
I don't get the sense that Scott Alexander would think that was a good idea, but I don't know. I get the sense, and I'm not. I haven't read the Sequences, but Scott Alexander seems, maybe I won't say more measured, but definitely seems less singularly focused. Eliezer Yudkowsky is very focused on AI threat. And I think that Scott Alexander's focus is a little wider.
Chris (35:37):
A little.
Kayla (35:37):
Okay, a little broader. The key argument for longtermism is basically this, quoting from a Vox article, quote: future people matter morally just as much as the people alive today. There may well be more people alive in the future than there are at the present or have ever been in the past, and we can positively affect future people's lives.
Chris (35:56):
I'm, again, exactly like I was before, down with all of that, except for I don't know where they're getting the future hypothetical people are as important as.
Kayla (36:06):
I don't either. I don't either. But, like, imagine if you lived 500 years from now and you lived in a world where nuclear, global nuclear war happened 500 years prior, and now you are. Your life fucking sucks. Would you have some anger at your ancestors? Would you think that they had morally owed you better?
Chris (36:33):
And this is hypothetical, so. This doesn't need to be hypothetical, because we already do live 500 years after other humans, and we also live 100 years after other humans. I don't particularly care for a lot of actions of my ancestors, and some of them do impact me and my fellow citizens to this day. So I think sometimes the answer to that is yes. I wish there were some effective altruists in the 1800s that had ended slavery sooner. Right. That would have been nice, right. Or if they were around when redlining was a thing and had managed to have that not be. That would be nice. By the same token, I don't know. You go back far enough, and there have been world wars. Certainly there's been world wars in this past century, but even before that, there's wars that consumed all of Europe.
(37:26):
I'm not saying that's a good thing. I'm just saying that once you get far enough in the future, it's kind of like, I don't know. I don't know if that would have been better off a different way. I don't even know if I would exist.
Kayla (37:39):
But I think that's why these guys talk about x risk, because x risk is different than what previous peoples have been capable of.
Chris (37:48):
Sure. That's why they're concerned with the utter erasure of humankind. And I get that. God, now I'm, like, arguing in their favor because I'm saying, like, even more.
Kayla (37:59):
I don't think it's super wrong to argue in their favor. I think we'll get into some of the problems in the next episode. The problem comes from fucking people. It's always, people fuck shit up. Like, we are not perfect. And even if you take a perfect ideology, which this is not, it's gonna go in some weird ways. And it has gone in some weird ways, and it continues to go in some weird ways.
Chris (38:20):
Right.
Kayla (38:21):
And I think that issue of future people matter morally as much as the people today has gotten really warped in some of these guys' brains to mean future people matter more.
Chris (38:30):
Right.
Kayla (38:31):
And we must do things to save those future people. Fuck everyone alive today. They can suffer and die. It's the future people that matter. And that's a problem.
Chris (38:39):
That dog ends up wagging that tail with the like. Therefore, all the stuff I'm doing as a billionaire is already good. Oh, God.
Kayla (38:48):
I think that's my biggest problem with this stuff, is that these guys that are talking about it are all rich. And I don't care what they have.
Chris (38:55):
There's zero diversity. It's like they're all.
Kayla (38:57):
It's all rich white people. This is a very, very white movement.
Chris (39:01):
Yeah.
Kayla (39:02):
And there's just. There's far too much wealth here for me to, like, be comfortable with these guys talking to each other and planning stuff for my life and my children's lives and my great grandchildren's lives and.
Chris (39:12):
Your great, great.
Kayla (39:14):
And some of these people, you would be shocked. I'm sure you're shocked. Terrible records on, like, how they talk about disabled people and how they talk about. You don't say, yeah, it's not great. It's not great. But that's for a future episode.
Chris (39:30):
Yeah. I just. I don't know. I do like your question, though. I do like your question of, like, if you lived 500 years from now, because I'm thinking of, like, how much do I give a shit about what they were doing in the year 1600.
Kayla (39:43):
Right.
Chris (39:43):
You know? Like, I don't know. I don't know. I do, and I don't. I don't know.
Kayla (39:48):
Like I said, doing this episode kind of brought me back around on some of these ideologies, and then. And then I scurried away. And then they brought me back, and then I scurried away. It's like you doing the LessWrong episodes. Like, these movements have contributed to some pretty inarguably good things. Malaria. Great.
Chris (40:07):
Yeah, malaria is awesome. I'm glad they contributed to it.
Kayla (40:10):
There's a lot of really bad things here, and it's. It's no fun to just talk about the good stuff. So next time on Cult or Just Weird, we are going to get into the W part of our acronym, the Weird. What the hell is going on with EA and L that's had it in the headlines over the last year? And where is it going now?
Chris (40:29):
And the J part of our acronym, Juicy.
Kayla (40:31):
Juicy. Cult or Juicy Weird. This is Kayla, this is Chris, and.
Chris (40:36):
This has been the long-term Cult or Just Weird.