[Photo: Been hot here, but here is October near Flagstaff.]
“Effective Altruists,” as math-following utilitarians, would torture a person so that N people could experience the tiniest pleasure.
In reply, someone claimed that EAs don't really want people to be tortured.
Umm...
From Losing:
Back in March 2022, I came across a discussion by Open Philanthropy’s Holden Karnofsky: “Debating myself on whether ‘extra lives lived’ are as good as ‘deaths prevented.’” His piece is useful in explaining where I differ from many utilitarians.
(In short, Holden concludes that starting with any particular position will lead to consequences that offend our intuitions. True!)
To me, this is the key section of Holden’s argument:
I’ll give my own set of hypothetical worlds:
- World D has 10^18 flourishing, happy people.
- World E has 10^18 horribly suffering people, plus some even larger number (N) of people whose lives are mediocre/fine/“worth living” but not good.
There has to be some “larger number N” such that you prefer World E to World D. That’s a pretty wacky seeming position too!
It isn’t just wacky, it’s wrong. I simply don’t think there is any number N that works here. That is, I don’t think you can offset horrible suffering with any number of other people, no matter their level of happiness.
QED
(If you've not read the two philosophy chapters of Losing for the full exploration, they start on p. 379 of this pdf.)
I feel like there's a real confusion about the comparable value of not-existing. There is no comparable value between existing and not-existing, because not-existing has no value at all: not-existing is not experienced. So, more people existing does not entail any additional value compared to those people not existing.
My calculus would be that total utility is the average value of all the people who exist. So, if there are 100 people who exist and they are 1% happy, then the total utility is 1%. If there are no people existing, then the total utility is 0%. If there are 3 people who exist, and one is 1% happy and the other two are 10% happy, then the total utility is 7%, so that place is better than the population where 100 people exist at 1% happiness.
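(For concreteness, here is a minimal Python sketch of the averaging calculus just described; the function name and the 0–100% happiness scale are illustrative assumptions, not anything stated in the comment.)

```python
# Minimal sketch of the averaging calculus described above (illustrative only).
# Total utility = mean happiness (on a 0-100% scale) of the people who exist.

def average_utility(happiness_levels):
    """Average happiness of an existing population; 0 if nobody exists."""
    if not happiness_levels:
        return 0.0
    return sum(happiness_levels) / len(happiness_levels)

print(average_utility([1] * 100))    # 100 people at 1% happy -> 1.0
print(average_utility([]))           # nobody exists          -> 0.0
print(average_utility([1, 10, 10]))  # (1 + 10 + 10) / 3      -> 7.0
```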
I think what gets in the way of looking at the calculus like this is that people intuitively understand that a population of 3 could be unsustainable: they may all be male, they may be vastly different ages, one might be sterile, one might get killed, or have a genetic disorder that would doom their progeny. We can all understand that the 100 people are more sustainable, so we're intuitively adding this assumption, which makes people want to argue for the value of "extra lives lived", but actually extra lives lived have no real utility outside of this assumption. It's not additive, is what I'm saying.
There may be an argument to be made that once everyone has a better-than-neutral life, then more people experiencing this might be better, but it would still be non-comparable; it would be calculated on an entirely different dimension.
It's interesting to think about the smuggled-in assumption in relation to the state of the world as it stands. Perhaps people are more accepting of Holden's argument because they implicitly understand that we currently live in such a world, where cheap goods are created by people in dire circumstances to provide us with superficial comforts, not to mention the factory farming industry causing immense suffering to allow us a slightly more palatable form of protein.
Even with my calculus, you could still have a scenario where someone could be suffering 100% and that could be offset by 100 people experiencing 10% pleasure, giving a total (average) utility of roughly 9%, which would be apparently better than the 3 people with an average of 7%. At those sorts of extremes this could be mitigated by running it through a Veil of Ignorance, to clarify that this level of inequitable treatment would probably not be a trade-off people would be willing to take.
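(Again purely as an illustration, and assuming that "suffering 100%" counts as -100 on the same happiness scale, which the comment doesn't spell out, the arithmetic comes out to roughly the 9% figure mentioned:)

```python
# Assumption: "suffering 100%" is treated as -100 on the happiness scale.
population = [-100] + [10] * 100          # one sufferer plus 100 people at 10% pleasure
print(sum(population) / len(population))  # ~8.91, i.e. roughly 9%, above the 7% world
```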
So many more thoughts popping into my head... one being that this is all very silly, we're all basically asking what world would I prefer to live in, and then trying to rationalise a calculus to fit this.