About the author

I am the author, co-author, secondary-author, ghost-author, and non-author of articles, speeches, book chapters, and even entire books! The most recent can be found at LosingMyReligions.net. Currently, I am President of One Step for Animals; previously, I was shitcanned from so many nonprofits that we can’t list them all here. Before my unfortunate encounter with activism, I was an aerospace engineer who wanted to work for NASA (to impress Carl Sagan). My hobbies include photography, almost dying, and {REDACTED}. I live in Tucson with my soulmate and reluctant editor Anne, along with the occasional snake and scorpion.

Wednesday, March 30, 2022

Biting the (Philosophical) Bullet

I have written a fair amount about the problems I have with utilitarianism and (somewhat) effective altruism (1, 2, 3). In this vein, I found this discussion by Holden Karnofsky to be good. His conclusion seems to be that starting with any particular position will lead to consequences that offend our intuitions.

To me, this is the key section regarding utilitarianism:

I’ll give my own set of hypothetical worlds:

    • World D has 10^18 flourishing, happy people.
    • World E has 10^18 horribly suffering people, plus some even larger number (N) of people whose lives are mediocre/fine/“worth living” but not good.

There has to be some “larger number N” such that you prefer World E to World D. That’s a pretty wacky seeming position too!


I don't think there is a number N that works here. That is, I don't think you can offset horrible suffering with any number of other people.

As Holden notes, taking this non-utilitarian position leads to some counter-intuitive outcomes, like not valuing more happy people over fewer happy people. But as I've written elsewhere, "It is the summing across individuals that really gets me. There is no entity experiencing 'all the suffering in the universe.' Only individuals suffer -- the universe doesn't suffer. [or experience aggregate happiness]"

It is just our intuition that more is better. I held this view for most of my life, but it now seems obvious to me that this intuition is flawed. It just feels right to want more total happiness, but it doesn't actually matter in our universe, where only individuals experience (finite) happiness. There is absolutely no ethical relevance to "the total net happiness in the universe." That only exists in the minds of utilitarians.

Once you see the folly of maximizing a fictitious variable, you avoid morally offensive conclusions. For example, you aren't ethically obligated to torture someone in order to provide a slight pleasure to N others. (You also aren't personally ethically obligated to have as many children as possible, a consequence of utilitarianism that Holden tries to hand-wave away.)

You might wonder why I continue to flog this issue. It is because I am continually saddened by so many smart people dedicating 80,000 hours each to trying to one-up each other's expected value while there is so much acute and unnecessary suffering in the world.

OTOH, maybe it is turtles all the way down.
