Two quick reasons:
- “Effective Altruists,” as math-following utilitarians, would torture a person so that N people could experience the tiniest pleasure. (Extreme example, but it captures their revealed preferences; more below.)
- EAs generally consider continued human existence the top priority. Not only do they follow this unquestioningly, but if you disagree, they will resort to name-calling.
A bit more:
I'm not singling out EAs as worse than anyone else. We are all simply biological machines following our programming. But many EAs (and others) present themselves as understanding and transcending our inherent biases.
It was the process of writing the “Biting the Philosophical Bullet” chapter (p. 379 here) of Losing My Religions that finally clarified my thinking on utilitarianism. In the chapter, I told this story:
I knew one EA who stopped donating to animal issues to support Christian missionaries. There may be a small chance they are right about god, but if they are, the payoff for every saved soul is literally infinite! He actually put money on Pascal’s Wager!
Since then, I've covered the EA who wrote about how washing our skin is a Holocaust. The EA community chose to promote that article.
In just the past week, I've come across two more examples:
- Torturing chickens a bit less (i.e., welfare reforms) is a bad idea because ... bugs.
- “I used to think factory farming deserved my attention. Then I realized ... bugs. But now I realize I need to spend my time worried about future evil robots torturing future people.”
[Image. Caption: “Bravo”]
Now, I get it. Really:
- The EA community provides positive feedback to those who write the mathiest and most "detached" essays.
- It is depressing as all get out to think about the brutality inflicted on factory-farmed animals right now, especially since nearly everyone is complicit. Making it worse is that all the efforts of the people working against factory farming seem to be accomplishing less than nothing.
So it is understandable to "choose" [natch] to worry instead about how to feed humans after a possible future nuclear war, to make sure future humans colonize the future galaxy. (I'm not making that up.)
Focus on the far future, and you can never fail; there is no metric by which anyone can evaluate your work. And, because you are "rational," you are free to move on to the next cause with a bigger expected value.
And, of course, many think their next post must have a vital, universe-spanning impact. So very important!
But not every EA. That's next time.