Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome to Stuff You Should Know, a production of iHeartRadio. Hey, and welcome to the podcast. I'm Josh Clark, and there's Charles W. "Chuck" Bryant, and Jerry's here, so that appropriately makes this Stuff You Should Know. That's right.
Before we get going, we want to give a little
(00:23):
special plug to our good friend John Hodgman and our buddy David Rees because they got a season two of their awesome animated show Dicktown. Yeah, as in private detective dick. Yeah. I mean, it's cool enough to get one season of the show, but if you've gotten a second season and they're tossing you on FX these days, you've made it. So their show has finally
(00:45):
made it, and it's well deserved too, because it's a pretty awesome cartoon. It is, it's very funny. It is actually live now. It premiered on March third at ten p.m. on FXX. You can watch it on Hulu, and the whole jam here is that John and I watched the whole first season; they're short episodes. The whole first season was less than two
(01:07):
hours long, which really like makes a great case. We're
just streaming the whole thing and laughing a lot in
one night. But it's, uh, it's about two detectives, John Hunchman (John Hodgman) and David Purefoy (David Rees), and, uh, Hodgman is a private detective. He was a former boy detective, like an Encyclopedia Brown type, and Dave was his,
(01:31):
uh, sort of his Bugs Meany, his nemesis in high school, and now is his buddy and his sort of muscle and his driver, and they solve cases together. And, uh, season two I think is even bigger and weirder, and it's sort of Scooby Doo. It's just a lot of fun, a really, really fun show. Yeah, the first season they did nothing but solve children's mysteries, um,
(01:54):
and they were humiliated by that. So they've kind of expanded now to be grown-ups. They've resolved to be grown-ups, and they're solving grown-up mysteries for grown-ups now, which is really something else.
So yeah, like you said, you can stream the whole
first season on Hulu, and you can catch this second
season on FXX. I wasn't aware of the
(02:15):
extra X. I don't take back what I originally said. It's still big time, but FXX, that's right. Uh, and it is rated PG-13, so if you're thirteen and up you should enjoy it. It's got a few swear words, adult themes here and there, but it's great. It's a lot of fun. Happy for Hodgman and Rees. Happy, happy Hodgman. Happy, happy Rees. And I just like saying
(02:40):
Dicktown. Sure, it's a great name for a great show.
It is. Should we talk about effective altruism? Yeah, I
was gonna say, we're talking about that today, and this one kind of, I don't know if you noticed a similarity, but this one really kind of ties into that Short Stuff that we, uh, we released before the end of the year about charitable giving. Did you notice
(03:01):
I did? Although in that episode we were like, yeah, you know, find a charity that speaks to you, and maybe something that's local, or if you have animals, or if you had, you know, a family member with cancer. And this basically says don't do any of that, right? Uh,
the only way you should give is by just kind
of coldly calculating what would help a human the most
(03:25):
on planet Earth. Yes, so effective altruism is one of
those movements. It's a pretty new movement. I think it
really started in earnest around two thousand ten UM. And
it's one of those movements that like elicits passion one
way or another. It's a very polarizing idea if you
(03:47):
just take it at its bare bones, which people love
to do. And the reason why people love to take
it at its bare bones, at its extremes, is because it is at heart a philosophical movement. It's rooted in utilitarianism, and utilitarianism is even more polarizing than effective altruism is, and has been for centuries. And I think
(04:09):
if everybody would just move past the most extreme parts
of it and just kind of took effective altruism at its most middle ground, where most of it
seems to have accumulated and settled and where most of
the work is being done, it would be really difficult
to disagree with the ideas behind it. It's when you
trot out Peter Singer and some of his most extreme views,
(04:32):
or when you say, oh, it's all Silicon Valley billionaires,
you know, um, when you just look at it like that, that's when people get all riled up and they're like, I hate effective altruism. If you
really just kind of take it in a much more
level headed way, it's actually pretty sensible and pretty great
because at the end of the day, you're saving people's
(04:52):
lives and you're figuring out how to save the most
lives possible. Yeah. I think anything that has some of its roots in philosophical movements and tech bros, it's a hard sell for a lot of people. Uh, but
let's talk about a few things that it is, which
is the idea that, uh, there's a lot of good
(05:16):
that can be done with money, and if you can
provide for yourself and your own basic needs, um, you
should be probably giving to charity. Uh. You can take
a cold hard look at your finances by literal, strict financial calculations. If you make, if you were a
(05:37):
person without kids making forty thousand dollars a year, you are in the ninety-seven point four percentile on planet Earth as far as your wealth goes. And you might think, well, I make forty thousand dollars a year, but then I have taxes, and I really feel like people with a lot of money should give to charities; I really don't have enough to spare. The idea is that, no, you
(06:00):
have some to spare. You can give a little bit, uh, like ten percent of your money, and still be in the top ninety-six percentile, and you can literally save human lives on planet Earth. That's the big thing that they're
trying to get across here, that like, the money that
you're giving is saving lives that otherwise would be crippled
(06:21):
with disease or just not around, like they would die
if you didn't give this money. And because you are giving this money, those people are now living what are called quality-adjusted life years, where they're living an additional healthy year or more because of that intervention that you gave your money for. And yes, it's
(06:44):
based on the premise that basically everyone living in the United States is rich compared to entire swaths of the rest of the world, and that basically anyone living in the United States can afford to give ten percent of their income and forego some clothes or some cars or something like that to help other people literally survive.
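(A rough sketch of the arithmetic being described here, in Python. The forty-thousand-dollar income and the 97.4th / 96th percentile figures are the ones quoted in the episode; the cost-per-life-year figure is a made-up placeholder, not a vetted charity estimate.)

    # Back-of-envelope version of the "give ten percent" pitch described above.
    # Percentile figures are the ones quoted in the episode; cost_per_qaly is a
    # hypothetical placeholder, not a real charity evaluation.
    income = 40_000                    # annual income (USD), single, no kids
    donation = 0.10 * income           # the ten-percent pledge

    cost_per_qaly = 100                # assumed dollars per quality-adjusted life year
    qalys_funded = donation / cost_per_qaly

    print(f"Annual donation: ${donation:,.0f}")               # $4,000
    print(f"Healthy life-years funded: ~{qalys_funded:.0f}")  # ~40 at this assumed rate
    print("Per the episode: ~97.4th percentile before giving, still ~96th after")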
(07:09):
And so right off the bat, we've reached levels of discomfort for the average person, especially the average American, that are really tough to deal with. And so that's the first challenge that effective altruists have to deal with: kind of tamping down that overwhelming sense of guilt and responsibility and shame at not doing that, that
(07:29):
immediately kind of crops up in people when
they hear about this. Yeah, So I think maybe let's
talk a little bit about the history and some of
the main organizations that are tackling this and maybe through
that what some of the founders describe as the core commitments.
Like you said, it took hold in about two thousand
(07:51):
and ten UH, and there's a group of organizations under
what is now an umbrella organization called the Center for
Effective Altruism, CEA. And, uh, it started off with philosophers Toby Ord and Will MacAskill founding a group called Giving What We Can, uh, self-defined as an international community of people committed to giving more and giving more effectively.
(08:14):
A couple of years later, MacAskill and a man named Benjamin Todd founded something called 80,000 Hours, the idea being that you might devote eighty thousand hours to a career, so when choosing a career, be very thoughtful about the
impact that career has for both good and evil. We'll
get way more into all this, uh, and then um,
(08:35):
there's others, you know, sort of, not fringes and weird groups, but just on the outskirts, called The Life You Can Save and then Animal Charity Evaluators, which we'll get
into how animals figure in. Um, but let's talk a
little bit. I guess about Will mccaskell and what he
sees as the core what he calls the core commitments
(08:56):
of EA. So yeah, Will MacAskill, he's, uh, he's out of Oxford, and so is Toby Ord. And I first came across this, Chuck, when I was researching
the End of the World podcast, and like, I deeply
admire Toby Ord on like a personal level. He actually
walks the walk, and his whole family does, like they
donate a significant portion of their family income to charity
(09:17):
and like forego all sorts of stuff, and like he's
literally trying to save the world. So, um, and since then I'm like really kind of open to the ideas that come out of that guy's mouth. Um, and
you mentioned the End of the World with Josh Clark
available wherever you can find your podcasts. Yes, your wonderful, heady, highly produced ten-part series. Thank you very much. That
(09:38):
was nice of you. Where you tackle the existential risks of the universe? Yes. Okay, I just want to make sure it's the right one. One and the same. And I was not doing that to set you up for a plug. I was doing it, like, kind of full disclosure that I'm probably a little less than objective on this one. Yeah,
but you know, that's a great show and it's still
(09:59):
out there just because it is, you know, a few
years old now, it's very evergreen, I think, at least in these times. Yeah, the world hasn't ended yet, so it's still evergreen. Exactly, good point. So, um, but I mentioned that in part as kind of full disclosure, um, that I think Toby Ord is just one of the greatest people walking the earth right now. But also
(10:20):
Will MacAskill, who, I don't know, seems to be in lockstep with Toby too, and so he's kind of one of the founders of this movement. And he said that there are, um, four tenets; he wrote a two thousand eighteen paper. And so there's basically four tenets that form the core of effective altruism. One
is maximizing the good, which we can all pretty much
(10:42):
get on board with, like you want to make as
much good as possible for as many people as possible.
The second is aligning your ideas, your contributions, with science, using, like, evidence-based, um, uh, well, evidence to decide where you're going to
(11:03):
put your donations, to use that to guide you rather than your heart. It's a big one, so it's a tough one for people to swallow. Another one's welfarism, whereby maximizing the good, you're improving the welfare
of others. That's the definition of good in that sense
of maximizing the good. And then last one is impartiality.
(11:24):
That's hard for people to swallow. That's harder, I think, for people to swallow than science alignment, um, because
what you're saying, then, Chuck, is that every single person
out there in the world equally deserves um your charitable contribution. Yeah.
And that's a big one because I'm trying to find
the number here of how much Americans give abroad. Where
(11:50):
is that? Okay, here we go. Out of the, um, what is it, four hundred and seventy-one billion dollars that Americans donate? Yeah, I think, yeah. Twenty-five point nine billion of that went outside of America to international affairs. So it's a lot of money,
(12:11):
but it's not a lot of money in the total pot. The idea for EA is to sort of shatter your way of thinking about, you know, trying to help the people in your city, or the people in your state or your country, and to look at every human life as having equal value. Yes,
and not even human life, but every life. Yeah, they
(12:33):
include animals too, like you mentioned before, and we'll get
into a little more um. But the key is that
if every single person living on earth is equally important, and you're trying to maximize the help you can do, then from a strict EA perspective, you're wasting your money if you're donating that money,
(12:56):
if you're an American, if you're donating it in America,
because just by virtue of the value of a dollar,
one dollar can do exponentially more good in other, like, developing, poverty-stricken areas of the world than it can here in the United States. So that right there sets it up for critics of EA, like,
(13:17):
to point out that, well, wait a minute, wait a minute,
are you saying that we shouldn't donate locally here at home,
That we shouldn't save the animals and the animal shelter,
That we shouldn't donate to your local food pantry, That
you shouldn't donate to your church. And if you really
back effective altruists into a corner, they would say, look, just speaking of maximizing your impact, and everybody around the
(13:37):
world is equally important. No, you shouldn't be doing any
of those things, and you certainly shouldn't be donating any
money to your local museum or symphony or something like that. Yeah,
And they say that with their head down, and they're kind of drawing on the floor with their foot. They're saying like, yeah, that's kind of what we're saying. That's right, yes. And that's really tough for people to swallow.
(14:00):
There's, like, it's just this huge jagged pill that they're asking people to swallow. But if you can step back from it, what they're ultimately saying is, look, man, if you want to do the most good with your charitable donations, here's how to do it. Do you want to feel good about it, or really do the good? Exactly.
And that's what they're doing. That's the whole basis of
(14:22):
effective altruism. They're saying, like, all of your charitable giving is for you. You're doing it for yourself,
that's why you give. This takes that out of the
equation and says, now you're giving to genuinely help somebody else.
All right, I think that's a great beginning. Maybe let's
take a break now that everyone knows what this is
(14:43):
and everyone is choking on their coffee because they just donated to their local neighborhood organization. Uh. And we'll come back and talk about some of the other, uh, philosophical founders right after this. So a couple of
(15:15):
people we should mention really quickly because they're gonna come
up as far as organizations, we did not mention GiveWell yet. Uh, they were founded in two thousand seven and they're a big part of the EA movement. And then there's Facebook co-founder Dustin Moskovitz and his wife. Is it Cari or Carrie Tuna? I'm going with Cari. I think so, it's C-A-
(15:37):
R-I. Uh, so they have partnered up, um, to create Open Philanthropy. Um, philanthropy, it sounds weird. I wanted to say philanthroopy. Too early in the episode for that. Uh, so they're big donors and
big believers in the cause UM. And then another person
(15:58):
you mentioned is, well, first of all, you mentioned utilitarians, uh, in this philosophical movement, um. It was developed by, and we've talked about them before, Jeremy Bentham and John Stuart Mill. But the idea is that people should do what
causes the most happiness and relieves the most suffering. And
(16:18):
the other guy you mentioned that's, uh, sort of controversial, I guess you could say, is Peter Singer. Uh, he is an author and a philosopher and a TED talker who kind of, um, became, I don't
know about famous, because a lot of people don't know
any modern philosophers, but in these circles became famous from
(16:39):
an idea, a thought experiment, in nineteen seventy-two from his essay Famine, Affluence and Morality, which is: you're going to work. You just bought some really expensive, great new shoes. You see a kid drowning in a shallow pond. Do you wade in there and ruin those new shoes and rescue the
kid and make you late for work? And you know, people,
(17:02):
if asked would say, well, of course you do. You're
not gonna let that kid drown. So the flip side to that is, well, that's happening every day all over the world, and you're essentially saving your new shoes by letting these kids die. Yeah, you're buying those new shoes rather than donating that money to save a child's life. Morally speaking, it's the exact same thing.
(17:27):
And in the essay, I read it last night, it's really good, if you want to feel like a total piece of garbage for not doing enough in the world. Um.
He basically goes on to destroy any argument about, well,
that's a kid that you see in a pond; you're actually physically saving that kid. He's like, well, it's so easy to donate to help a child on the
(17:48):
other side of the world right now that, for all intents and purposes, it's as easy as going into a pond to save them. These days it is easier. You don't even have to get wet. You're just calling in your credit card, basically, you know. Um, so he just destroys
like any argument you could possibly have. And he is
an extremist utilitarian philosopher, in that he's basically saying, not
(18:12):
just that giving money to the point where you are just above the level of poverty, the same as the people you're giving to, like really cutting into your luxuries to help other people, not only is that a good thing if you do that, it's that not doing that is
(18:35):
a morally bad thing. It's morally wrong to not do that.
So he will really turn, like, the hot plate up under you, um, and it'll just really make you
feel uncomfortable. But he's saying like like, this is my
philosophical argument, and it's pretty sound if you hear me out,
and if you hear him out, it is pretty sound. Um.
The problem is he's a utilitarian philosopher and a very
(18:57):
strict one too. And so, um, there's a lot of, like, you can take that stuff to the nth degree, to some really terrible extremes, um, to where it becomes so anti-sentimental that, um, it actually can be nauseating sometimes. Like, strictly speaking, under a utilitarian view, this
(19:18):
one's often trotted out. It is morally good to murder
one person to harvest their organs to save the lives
of five other people with the murdered person's organs. Technically speaking, through the utilitarian lens, that's maximizing the good in the world. The thing is, like, if
(19:38):
that's what you're focusing on, and you're equating effective altruism's
desire to get the most bang for your donation buck
to murdering somebody to harvest their organs to save five people,
you've just completely lost your way. Sure, you can win an argument against utilitarianism in that respect, but the fact that it's leveled and trained on
(19:59):
this movement, this charitable philanthropy movement, is totally unfair,
even though yes, it is pretty much part and parcel
with utilitarianism. Yeah, Singer is a guy who, I think one of his philosophies is the journey of life, and that interrupting a life before it has goals or
(20:22):
after it's accomplished its goals is okay. Uh, so, you know, if you mention his name, a lot of people will point to this idea that he says
things like it's okay to kill a disabled baby right
after they're born in some cases, especially if it will
lead to the birth of another infant with better prospects
(20:42):
of a happy and productive life, or an older person
who has already accomplished those goals, and the idea being
that the disabled baby doesn't have goals yet. Uh, you know,
that's obviously some controversial stuff. And he's a hard-liner and doubles down on this, but again, to sort of throw that in, that has nothing to do
(21:04):
with effective altruism. No, he wrote that paper Famine, Affluence
and Morality, which basically provides the general contours of the
effective altruist movement. But it's not like he's just the
leading heartbeat of the movement or anything like that. That's
not their bible or anything like that. No, And unfortunately
he's an easy target that people can like point to
(21:26):
because the effective altruist movement has kind of taken some of his ideas, and they're like, oh yeah, you like Singer? Well, what about Singer arguing about this? It's like, that has nothing to do with effective altruism. He makes a really good, easily obtained straw man that people like
to pick on. That's right. Uh. Let's talk about numbers
a little bit. We mentioned that in the United States, uh,
(21:48):
four hundred seventy-one billion dollars was donated in, um. About three hundred and twenty-four billion of that came from individuals, which is amazing. You know, those corporate guys are really pulling their weight. Yeah, no kidding. Uh, individuals, and that boils
down to about a thousand dollars per person uh in
the USA, which is not that much money if you
(22:09):
think about it. And out of that, there are a
couple of, um, pledges that EA endorses. One called Giving What We Can, which is promising to give away ten percent of your lifetime income, and then another one called the Founder's Pledge, where if you're a startup founder, you promise to give away a percentage of your eventual proceeds. Uh.
(22:29):
And then there's also Try Giving, which is a temporary pledge to donate. And, you know, the movement's only about twelve years old. Only about eight to ten thousand people have taken these pledges so far, right? Um, which is still, I mean, that's a decent amount of people, especially considering that most of the people involved in this movement are, um, high-earning,
(22:51):
um, extremely educated people, so ten percent of their income is going to add up to quite a bit over the course of their careers. And that's the thing they're saying: I'm going to give this ten percent a year for my whole career. And the reason why
they really kind of targeted careers. Um, that's part of
eighty thousand hours. Eighty thousand hours is this idea that
(23:12):
we spend about eighty thousand hours working. So if you
took that eighty thousand hours and figured out how to
direct your energy the most effectively towards saving the world, um,
you can really do some good just by the virtue
of having to have this career to support yourself.
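(Where the eighty thousand comes from, roughly: a working lifetime at full-time hours. The forty-year career and the hours-per-year figure below are assumed round numbers, not from the episode.)

    # Rough source of the "80,000 hours" figure, using assumed round numbers.
    hours_per_week = 40
    weeks_per_year = 50
    career_years = 40

    career_hours = hours_per_week * weeks_per_year * career_years
    print(career_hours)   # 80,000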
And so there's a couple of ways to do it. One
is to have a job that you make as much
(23:34):
money as you possibly can at, and then you donate as much as you comfortably can, and then maybe even then some, say ten percent or more. There's a NASA engineer named Brian Ottens who was profiled in The Washington Post, who said he specifically got the most stressful,
(23:58):
high-earning job he could handle, um, in order to give away, I think, a quarter of his income, right.
And that's great. That's one way to do it. But
another way to do it is to say, Okay, actually,
I'm I'm going to figure out something that I really love,
but I'm going to adjust it so that it's going
to have the most impact possible. Yeah. I think it's interesting,
Like there are two ways to think about it. The
(24:21):
first one that you were talking about, they call it
earning to give. And, you know, the idea is that if you are capable of getting, like, a really high-paying job in, like, the oil industry, with the idea that you're going to give most of that away, in the earning-to-give philosophy side of things, they're saying, yeah,
go do that. It doesn't have to be a socially
(24:41):
beneficial job. Make the most money you can and give
it away. Uh, don't go get the job at the nonprofit, because there are tons of people that will go and get that job at the nonprofit; like, someone will fill that position. Um, Eighty Thousand Hours doesn't, uh, they say that that's not the best way. Theirs is more the second one you mentioned, which is don't take a
(25:03):
job that causes a lot of harm. Being happy is
part of being productive, and you don't have to go
grind it out at a job you hate just because
you make a lot of money, so you can give
it away, like make yourself happy. Don't take a job
that causes harm. Do a job where you have, uh, a talent. Um, policy making is one field; media is another. I
(25:25):
would argue that we have a job where you know,
we didn't know, but it turns out we have a
talent for doing this, and we can leverage our voice. Uh,
and we occasionally do to point out things that we
think make a difference in the world and to mobilize people. Um,
that's not the goal of our show, but we can
dabble in that, which is which is great. Uh. That's
(25:45):
not what we intended going into it. But I think
we woke up one day and found that we had a lot of ears, so we could throw in episodes that I think lead to good. Yeah, agreed,
which means we can shave a little off of that
ten percent we're morally obligated to donate every year, right,
so um, A good example of that of like figuring
(26:08):
out how to direct your career path more toward improving
the world. Um, on, I guess, the 80,000 Hours site, they profile a woman who wanted to become
a doctor, and she did some research and said, um,
well this is cool, but most doctors in Australia treat
Australians who are you know, relatively very well off and
(26:29):
very healthy. And so instead she decided that she wanted
to go into a different field of medicine. I think
she went into like epidemiology and figured out how to
direct her interest in medicine towards
getting vaccines out to market faster to get them through
the clinical trial process. And so she's not going to
get to be a doctor, but she's gonna get to
(26:51):
focus on medicine and she's going to get to have
the satisfaction that she's improving the world demonstrably through her job.
And she might not donate a dime with that. I suspect she's probably going to, because she's on the 80,000 Hours website. Um. But even if she didn't, she's still figuring out how to use evidence, um, to make
(27:14):
evidence-based decisions, to maximize the eighty thousand hours she's going to
spend in her career to make the world a better place.
Right, because one of the ideas of EA, and a lot of, you know, the Charity Navigator and CharityWatch-type good websites that we've endorsed, uh, that we're not poo-pooing at all, but, um, they tend to
focus a lot on you know, how much goes to overhead,
(27:36):
how much goes to the programs, which is good. But EA is like, no, what we want to see are data, and literal scientific data, measurables, on how
much return you're getting for that dollar. And some charities
do this and are a little more open about it,
but they basically say, you know, every charity should say
(27:57):
here's how much your dollar, uh, here's how far your
dollar goes and exactly what it does. And the charities of the West say, come on, really nervously when they're asked that, when they're told that they should be doing that, because they just don't. Part of the reason why is, this
is very expensive to run. Um, what effective altruists
(28:20):
like to use is the gold standard, randomized controlled trials, where basically, um, you know what UX testing is, user experience testing for, like, a website. So there's A/B testing, where you've got some people who are using your website and they're getting one banner ad, and the B testers are getting a totally different banner ad, and you just see which gets the most clicks. It's basically that,
(28:42):
but for a charity, for the work that the charity
is carrying out, some group gets malaria nets, another group doesn't,
and then you study which group had the best outcome,
and then you could say, oh, well, these malaria nets increased these, um, these, uh, life-adjusted years by, you know, thirty percent, which means that it comes out to, um,
(29:03):
you know, point five life-adjusted years, um, per dollar, compared to, you know, point two life-adjusted years for
the control group. Ergo, we want to put our money
into these groups that distribute malaria nets in Africa because
they are demonstrably saving more lives than groups that don't.
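(A minimal sketch of the kind of comparison being described: an A/B-style trial measures extra healthy life-years against a control group, and you rank programs by measured impact per dollar. The program names and all the numbers below are illustrative stand-ins, not real trial results.)

    # Illustrative only: rank two hypothetical programs by life-years per dollar,
    # the yardstick effective altruists want charities to report.
    def life_years_per_dollar(extra_life_years: float, dollars_spent: float) -> float:
        return extra_life_years / dollars_spent

    # Hypothetical trial results: extra healthy life-years versus a control group,
    # and what each program spent to get them.
    programs = {
        "malaria net distributor": life_years_per_dollar(extra_life_years=500, dollars_spent=1_000),
        "unmeasured local program": life_years_per_dollar(extra_life_years=200, dollars_spent=1_000),
    }

    best = max(programs, key=programs.get)
    print(f"Direct donations to: {best} ({programs[best]:.2f} life-years per dollar)")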
Like they want data like that, and you just don't
(29:26):
get that with most charities. The good thing is that
they're pushing charities to do that, because if you do
care about that kind of thing, then, if you can come up with that kind of evidence, you can get these effective altruist dollars, and there's a lot of
dollars coming from that group, even though it is relatively small. Yeah,
it is interesting because you know, in that example, if
(29:48):
you were to just say on your website, uh, people
with malaria nets fare better, dot dot dot, duh, right? Like, everyone knows that. But they really want to drill down and have that measurable, where they can point to a number and say, you know,
(30:08):
this is the actual result. We all know malaria nets help.
But maybe, I mean, maybe they think it speaks to people more. Um, it certainly speaks to them, but I guess they think it would speak to the masses. Because these things cost money; I mean, that's one of the criticisms of these randomized controlled trials, is that they're sort of expensive, and, like,
(30:30):
maybe that money should be used to actually donate instead of doing these trials. But they must think
it speaks to people to have actual data like that. Well,
it speaks to them because the way that you figure
out how to maximize your money is to have data
to look at to decide rather than your heart. It
makes sense these are techies because they're all about that
(30:50):
data. Very much so. And there's some problems with that, with relying on that. There's some criticisms, I should say, but they're problems too. One is that there's a lot of stuff that you can't quite quantify in terms like that.
Like if you're saying, like, no, I want to see
how many lives your work is saving per dollar, um, well,
(31:12):
then, you know, the High Museum is going to be like, um, zero, we're saving zero lives. But that doesn't mean that they're not enriching or improving lives through the donations that they're receiving, this art museum, you know what I mean. Um. Livia, who helped us with
this article, gives an example. She's saying, like, you couldn't
really do a randomized controlled trial for the nineteen sixty
(31:35):
three March on Washington that helped solidify the civil rights movement, um,
and yet it'd be really hard to argue that that
didn't have any major like effects on the world. So
that's a big argument. Then the other thing is that sometimes with these randomized controlled trials, like, you can hold one one year in
one part of the world and go to another part
(31:57):
of the world the next year, and what should be the same is just not the same. And so
if you're basing all of your charitable giving on these things,
they better be um, reproducible or else what are you doing? Yeah,
I mean, you get why this is such a divisive thing and why it's such a hard sell to people, because people give with their hearts generally. Uh, they
(32:18):
give to causes they find personal to them, Like I
mentioned earlier, a family member with cancer, or a family
member with MS, or just, you know, name anything.
Generally people like have a personal connection somehow which makes
them want to give. And that's sort of the heart
of philanthropy has always been the heart. Uh. And it's
(32:40):
a tough sell for EA to say, I'm sorry, you have to cut that out of there. Um. You know, it's a very subjective thing, too, what constitutes a problem, even, um, when it comes to the animal thing. Like, when people give
for animal charities, they're generally giving to you know, dogs
and cats and stuff like that. Um. These these great
(33:02):
organizations that do great work here in America. But the concentration, from the EA perspective, is factory-farmed animals, and only about one percent of charitable spending in the US goes to the suffering of farmed animals, and that's what we should be concentrating on because of the massive, massive scale. Again,
(33:23):
to try and do the most good, you would look at, like, where the most animals are, and sadly, they're on farms. Yeah,
I mean, just from sheer numbers, um, you can make a case, utilitarian-speaking, that your money would be better off spent improving the lives of cows that are going to be slaughtered for beef, that will still eventually be slaughtered for beef, but you can
(33:45):
improve their welfare during their lifetimes and that technically is maximizing, um,
the impact of your dollar by reducing suffering, just because there's so many cows awaiting slaughter in the world, even compared to the humans that are dying in Africa. Yeah, that's a tough sell.
(34:05):
And I think this is where like, this is where
it makes sense to just kind of like maintain a
certain amount of common sense where it's like, yeah, man,
like, if you really want to maximize your money, go look at the EA sites. Go check out 80,000 Hours, um, like, get into this and actually do that. But there's no one who's saying, like, but
if you give one dollar to that, to that local
(34:28):
symphony that you love, you're a sucker, you're a chump,
you're an idiot. Nobody's saying that. And so maybe it
doesn't have to be all or nothing one way or
the other, which seems to be the push and the pull, and I think the issue here. Yeah, we should read directly from Will MacAskill, um. He defends, uh, EA
and he says this effective altruism makes no claims about
(34:50):
what obligations of benevolence one has. Uh nor does e
A claim that all ways of helping others are morally
permissible as long as they help others the most. Indeed,
there's a strong community norm against promoting or engaging in
activities that cause harm. So they flat out say, like
the whole murder someone to harvest their organs, like, we're
(35:11):
not down with that, that's not what we're about. Please
stop mentioning Peter Singer, right, yeah, and he says it
doesn't require that I always sacrifice my own interest for
the good of others, And that's actually very contradictory of
Peter Singers. Um. Essay, he says, no, you're morally obligated
to do that, and if you don't, it's morally bad.
They're saying like, no, let's let's all just be reasonable.
(35:32):
You're like, yeah, we're philosophers, but you know we can
also like think like normal human beings too, And that's
what we're trying to do. We're trying to take this
kind of philosophical view um, based in science, based in evidence,
and try to direct money to get the biggest impact. Um. Yeah,
like you said, can we stop? Can we stop bringing up Peter Singer, please? How about we take another break
(35:54):
and, uh, we'll talk a little bit about, geez, what else, long-termism and EA's impact, right after this.
(36:23):
So, long-termism is part of the EA movement, and this is the idea of, hey, let's
not just think about helping people now. If we really
want to maximize impact to help the most people, which
is at the core of our mission statement, we need
to think about the future because there will be a
lot more people in the future to save and uh,
and so long-termism is really where your dollar is
(36:46):
going to go the furthest, if you think about, like, deep into the future even. Yeah, um, like, if
humanity just kind of hangs around planet Earth for another
billion or so years, which is entirely possible, if we
can make it through the great filter, uh um uh,
there will be like quadrillions of human lives left to come.
And a lot of philosophers who think about this kind
(37:09):
of thing kind of make the case, or can make the case if they want to, that their
lives are probably going to be vastly more um enjoyable
than ours, just from the technology available and not having
to work, and all sorts of great stuff that's going
to come along, and so technically, just by virtue of
the fact that there's so many more of them, we
(37:30):
should technically be sacrificing our own stuff now for the benefit of these generations and generations and generations of humans to come that vastly outnumber the total number of humans who have ever lived; like, a hundred and eight billion humans have ever lived. We're talking quadrillions of humans left to come.
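(The back-of-envelope version of that headcount. Only the billion-year horizon and the roughly 108 billion past humans come from the episode; the population and lifespan figures below are assumed round numbers.)

    # Rough long-termist arithmetic: how many future lives fit in another
    # billion years? Population and lifespan are hypothetical round numbers.
    years_remaining = 1_000_000_000        # "another billion or so years"
    people_alive_at_once = 10_000_000_000  # assume ~10 billion at any given time
    average_lifespan = 100                 # assume ~100-year lives

    future_lives = (years_remaining / average_lifespan) * people_alive_at_once
    past_lives = 108_000_000_000           # ~108 billion humans ever born

    print(f"Future lives: {future_lives:.0e}")  # ~1e17, i.e. hundreds of quadrillions
    print(f"That's roughly {future_lives / past_lives:,.0f} times everyone who has ever lived")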
That very much dovetails with the, um, the kind of
(37:51):
discomfort you can elicit from somebody who says that your
money is better spent relieving the suffering of cattle awaiting
slaughter than it is saving children's lives in Africa, you
know what I'm saying. Yeah, And they're not just talking
about climate change and like obviously that kind of existential risk.
They dabble in AI and stuff like that. And I
know that we don't need to go down that rabbit hole.
(38:13):
You should listen to the End of the World with
Josh Clark, the AI episode. But I mean, it's all
about that, but it does have to do with that
kind of stuff. It's not just like we need to
save the planet so it's around in a billion years. Uh,
you know, they tackle like all kinds of existential risk basically, Yeah,
and they dedicate like a lot of these guys are
dedicating their careers to figuring out how to avoid existential
(38:35):
risk because they've decided that that is the greatest threat
to the future that would cut out any possibility of
those quadrillions of lives. So that's what the that's literally
why they have dedicated themselves to thinking about and alleviating
these risks, because they're trying to save the future of
the human race. Because they've decided that that is the
best way to maximize their careers for the most good,
(38:59):
which is just astounding if you stop and think about
what they're actually doing in real life. Uh, we mentioned the kind of money involved; even though it's, um, not a huge movement so far, I think we said like somewhere around eight thousand people have maybe taken these pledges, I
think overall. Uh. The co founder of eighty thousand Hours,
(39:20):
Benjamin Todd says about forty-six billion dollars is committed to EA going forward, um. Like you said, a lot.
You know, it's because there are a lot of
rich people and tech people that are backing this thing.
So a lot of that money comes from people like
Dustin Moskovitz and, uh, Cari Tuna, and Sam Bankman-Fried,
(39:41):
he's a cryptocurrency guy. So a lot of that money comes from them. But they're trying to just
raise awareness to get more and more regular people on
board that you know, if they have you know, two
thousand dollars or three thousand dollars to give a year,
they're saying, I think they estimate that three thousand to
(40:03):
forty-five hundred bucks is like the amount of money it takes to save a human life and to give them additional quality years. Yeah, so if you cough up that much and you direct it toward one of the charities that they've identified as the most effective, um, through their sites, through, like, GiveWell, which is a place to go look for charities like that that have been vetted by effective altruists, you're literally saving the life of a child every year.
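(A quick sketch of that claim: take the rough three-thousand-to-forty-five-hundred-dollar cost per life saved quoted here and see what a yearly donation adds up to. The forty-year giving horizon is an assumption, not from the episode.)

    # Illustrative arithmetic for the "save a child every year" framing.
    cost_to_save_a_life = 4_500   # upper end of the figure quoted in the episode (USD)
    annual_donation = 4_500       # give that much each year
    giving_years = 40             # hypothetical giving horizon

    lives_per_year = annual_donation / cost_to_save_a_life
    print(f"Lives saved per year: ~{lives_per_year:.0f}")
    print(f"Over {giving_years} years: ~{lives_per_year * giving_years:.0f} lives")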
(40:23):
It's like you're saving a child from drowning in a
pond every single year, and all you're doing is ruining your new shoes. Or, you know, it's not exactly new shoes, but yeah, you're ruining your really nice vacation that year. Right, because, you know, you sent this one thing, I don't know where it came from, but the idea of someone running into a burning building and pulling a child out, or a kid out of a pond, they're written up in the newspaper as a hero. But
(40:46):
you can do that. You can save a kid a year, or more, every year for the rest of your life. Um, it's a little less dramatic. You're not gonna be in the newspaper, you're not gonna be above the fold, you know what I'm saying. But, uh, that's, I mean, that is what EA is all about; it is the antithesis of that. Antithesis? Yeah, I
(41:11):
like, you know what I mean, it's the no-frills version of antithesis. The thing is, too, I mean, it's still very relative. Like, that amount is relatively a very large amount, or a so-so-sized amount, or not much of an amount, depending on how much you make. And again, nobody
in the effective altruist movement is saying that you should
(41:33):
personally sacrifice unless you really want to, unless you're driven to.
But you're not morally required to personally sacrifice, to cough up that amount when it means you're not going to be able to eat for a week, or you're not gonna be able to have a place to live. Like, nobody's saying that, and nobody's being flippant about the idea that it isn't that much. What they're saying is that amount can
(41:56):
literally save a child's life. And if you stop and
look at your life and think that you could come
up with that, you could donate it to a certain
place that will go save a child's life in real life.
That's what they're saying. Yeah, this, uh, this would be a hard sell to Emily. I'm thinking about our charity conversation we have every year,
(42:20):
I'm trying to imagine myself saying, what if we don't
give to the local animal shelter and neighbor in need
like we usually do, and instead we do this. She
would just be like, uh, I see what you're saying,
but but no, get get out of my face with that.
But I mean you could be like, well, how about
we do both? You know? Exactly. So I think that's the thing. That's my take on it. Like,
(42:41):
we support CoEd, and, like, I have no qualms about supporting CoEd even after doing all this research
and understanding effective altruism even more, no qualms whatsoever. I'm
sure that that money could be directed better to help
other people in other parts of the world. I still
think it's money well spent and it's helping people, and
I'm I'm very happy with that. I think that's great.
(43:03):
And then I don't have any guilt or shame about
that at all. And because what you're saying at that point,
like with CoEd, it is an organization dedicated to helping children in a not very well-off country live better and longer lives, so, like, it essentially is effective altruism in a way. Except effective altruism is like, no, no, no, the data says that
(43:27):
this one, look at the numbers, it's better and goes further. Like, they really, it's a numbers and data game that makes it
tough for a lot of people to swallow. I think, yeah,
it's anti sentimentalism basically in the service of saving the
(43:47):
most lives possible. I know, it's interesting, and it doesn't surprise me that it has its roots in philosophy, because it is really a philosophical sort of
uh head scratcher at the end of the day. Yeah,
for sure, it's pretty interesting stuff and it really is.
I think it's I think it's fascinating. Yeah. So there's
(44:07):
I mean, there's a lot more to read, both criticisms
and, um, you know, pro-EA stuff. And seriously, you could do worse than reading, um, Peter Singer's essay, what is it called, Famine, Affluence... Famine, Affluence and Morality.
(44:30):
It's like sixteen pages. It's a really quick read. Um,
it's really good, So read that too, and just see
what you think, see what you think about yourself too,
and maybe take some time and examine you know, um,
if you could give to some of these charities, or
if you're not giving to charity at all, seriously, do
spend some time and see where you could make that change. Uh.
(44:52):
And since I said make that change and Chuck said yeah,
that means, of course it's time for a listener mail.
This is follow-up to albinism. I knew we would have someone who has albinism write in. I'm glad
we did. And then we had listeners out there. And
this is from Brett. Hey, guys, a longtime listener. I
have albinism, so I thought i'd throw in my perspective.
(45:14):
First off, I know you were struggling to decide how
to describe it albino or albinism. My preference is using
the term albinism like you guys did, as to me,
it denotes a condition, while saying someone or something is albino feels like you're relegating them to a different species. Being called albino always used to bug me
growing up, and that was usually because they were kids
(45:35):
were trying to get a rise out of me. Fortunately,
I was a big kid, so it never really escalated
to physical bullying. Like I like this idea. Uh, like
the kid with albinism who's like huge and someone says something,
They're like, excuse me, what did you just say? I
didn't say anything. Being a child of the seventies and eighties,
(45:57):
like you're like you guys, it was pretty rough at times.
On the physical side, my eyes are very light sensitive. Uh,
they're blue. And again, while growing up, some of the kids would keep asking me why my eyes were closed when it was bright. Uh. And of course the low vision
comes into play as well. I'm considered legally blind, as
pretty much every other person with albinism I have met
(46:20):
has the same issue. There were ways to adjust in school,
and ways they could assist me with large print books, magnifiers, binoculars,
or the teachers simply letting me look at their slides
afterward and have more time with them. Uh. Yeah, that's great.
As for how people with albinism are portrayed in TV
and movies, I don't think being portrayed as a hitman
(46:41):
or even someone with magical powers bugs me as much
as the fact that I know that it was fake
because it would be really hard to be a hitman
with the kind of eyesight that we have. I love
that so practical. Uh And Brett had a lot of
other great things to say, but that is from Brett
and a long time listener. Thanks a lot, Brett, that
(47:01):
was great. Glad you wrote in. And, uh, yeah, thanks
a lot. If you want to be like Brett and
get in touch with us and say, hey, you guys
are pretty good, or hey you guys could have done
a lot better, or hey I'm mad at you guys,
or whatever you want to say, we would love to
hear from you. We can take it all and you
can address it all to stuffpodcast at iHeartRadio
(47:22):
dot com. Stuff You Should Know is a production of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you
listen to your favorite shows.