About the author

Matt is the author, co-author, secondary-author, ghost-author, and non-author of articles, speeches, book chapters, and even entire books! Next will be the bestseller "Losing My Religions." Currently, he is President of One Step for Animals; previously, he was shitcanned from more nonprofits than there is room to list here. Before Matt's unfortunate encounter with activism, he was an aerospace engineer who wanted to work for NASA to impress Carl Sagan. His hobbies include photography, almost dying, and {REDACTED}. He lives in Tucson with Anne and no dogs, no cats, and no African tortoises (although he cares for all of these).

Wednesday, December 16, 2020

Updated: (Kinda) Against "EA" / Utilitarianism. Definitely Against Big Numbers

Whenever I hear "wild animal suffering," I want to gouge my eyes out.

I've talked elsewhere (1, 2) about why I think classic utilitarianism is wrong. In the case of wild animals, it is as if someone comes across a child kicking a dog but, instead of helping, says, "There are 100,000 insects around a lake in remotest Siberia. 100,000 >> 1. So I must ignore the dog and focus on the insects." (Or, "There might be a trillion robots in a million years, so I must focus on the robots.")

This might seem absurd, but this is the nature of many arguments I hear in the effective altruism (EA) community.

I know it is impolite to question an "effective" altruist. More tellingly, if you question anything said by someone claiming to be an EA, you will receive thousands of words in "rebuttal" showing how the numbers are on their side ("Trillions of lives!").

But the first time I heard someone argue re: wild animal suffering, my first reaction was: "This person thinks they are smarter than everyone else, so they felt compelled to come up with some math to prove they had an insight no one else had before." My second reaction was, "This is just a rationalization to avoid actually having to do something in the real world."

The latter makes these contentions especially painful. Tractability -- the ability to actually do something about a problem -- is supposed to be part of effective altruists' calculations. But this goes out the window once they start talking about large enough numbers: "Even the tiniest chance of accomplishing anything (with insects, robots, etc.) 'wins' because of the gazillions of potential lives at stake."

I don't want anyone to suffer. Because of this, I wish people would have a sense of how real suffering is. Suffering isn't a game, it isn't an academic exercise. Ethical imperatives shouldn't just be a debate answered by who can come up with the largest numbers.

2020 update: I could be convinced otherwise, but I think I would rather light money on fire than give it to certain "wild animal" organizations.
