Thursday, April 25, 2019

Where I part ways with the smartest person I know

Not to distract from Dr. Greger's piece (which is the most important one in this thread), but I'm going to pick up on some questions from my utilitarian piece.

I'm still a consequentialist. That is, I still think the rightness or wrongness of an action is determined by the consequences, not by whether it follows some rule.

One person suggested I'm a negative utilitarian. But I think it is more complicated than that. Despite Parfit, I see a fundamental discontinuity between summing within an individual vs summing across individuals. I would cause myself to suffer some (exercise, not eating a vegan donut) to avoid worse suffering in the future (open heart surgery). But I wouldn't choose to have someone else suffer so that I could avoid worse suffering. (Well, unless that someone else was Mike Pence or Paul Ryan.)

Yes, I understand it is more complicated than that -- e.g., I would choose to have person A turn down a donut rather than have person B suffer a Crohn's attack. My point is that I see a difference between intra-person choices and inter-person choices.

It is the summing across individuals that really gets me. There is no entity experiencing "all the suffering in the universe." Only individuals suffer -- the universe doesn't suffer. (This also answers Parfit's repugnant conclusion to my satisfaction.)

The smartest person I know disagrees with me on this. They are focused on existential risks because summing up all the future joy from hedonistic robots vastly and absolutely swamps any concerns of the moment.

OK, Data isn't the perfect example, but huzzah for ST:TNG.

But as in my previous post, the math is where I get off the train (in addition to not believing that one person's happiness offsets another person's suffering). I understand expected values, but these calculations say that a fractional chance of lowering existential risk (a small chance of improving the likelihood that quadzillions of happy robots will take over the universe) is more important than, say, stopping something like World War II and the Holocaust and all the accompanying acute suffering.
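
To make the expected-value point concrete, here is a toy comparison. The probabilities and population sizes are made up purely for illustration (none of them come from this post or from anyone's actual model); the only thing doing the work is the difference in orders of magnitude.

```python
# Illustrative only -- all numbers below are invented for the sake of the example.

# Expected value of a tiny chance of securing an astronomically large future:
p_reduce_risk = 1e-10          # hypothetical chance your effort tips the balance
future_happy_beings = 1e40     # hypothetical count of future happy "robots"
ev_longtermist = p_reduce_risk * future_happy_beings   # 1e30 beings' worth of joy

# Expected value of preventing an enormous but finite present catastrophe:
p_prevent = 1.0                # suppose you could stop it with certainty
beings_spared = 1e8            # rough scale of acute suffering averted
ev_present = p_prevent * beings_spared                 # 1e8

print(ev_longtermist > ev_present)  # True -- the far-future term swamps the present one
```

On this kind of arithmetic, the speculative far-future term wins no matter how certain or how awful the present-day suffering is, and that is exactly the step I don't accept.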

I don't know that I'm right; as I mentioned, I've changed my mind before. I understand that many smart people think I'm simply mistaken. And I am glad there are people working to build a better long-term future.
