Friday, March 17, 2023

Two Excerpts: AI & Suffering

Beth Orton - Unwritten

From the brilliant Suffering-Focused Ethics by Magnus Vinding, which might be the most important philosophy book. Here is a bit of his discussion explaining why there is an asymmetry between pleasure and pain (and why, as I discuss in Losing My Religions, you can't offset suffering with happiness):

To get a clearer sense of the scope of the asymmetry in potential between happiness and suffering, we can imagine a situation in which we are offered ten years of maximal bliss at the price that we must experience some duration of the very worst states of suffering. How much such suffering would we be willing to endure in order to attain this happiness if the alternative were to experience a neutral and untroubled state of consciousness? Many of us would reject such an offer completely. 

Some, however, will be willing to accept the offer (at least while they are in a position where they have not yet experienced these very worst of states). Yet how big a sacrifice would such people be willing to make? Would they be willing, from the outset, to endure a full hour of the most extreme suffering? Perhaps even an entire day? Some might go as far as saying an entire day, yet very few, if any, would be willing to push the scale to anywhere near 50/50. That is, it seems safe to say that a great majority of people would firmly reject ten years of the most extreme suffering in order to attain ten years of the most sublime happiness.

This asymmetry is obvious if you think about having kids. We know several couples who have happy children and could have had at least one more (probably) happy kid. But they in no way did anything "wrong" by not having another kid; they are certainly not moral monsters.

Contrast that with anyone who knowingly chooses to bring a child into the world who would probably have a painful, unhappy life.

Alleviating suffering is good in a way that creating happiness isn't.

And from Ezra Klein's latest column on artificial intelligence:

The stakes here are material and they are social and they are metaphysical. O’Gieblyn observes that “as A.I. continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.”

This is an inversion of centuries of thought, O’Gieblyn notes, in which humanity justified its own dominance by emphasizing our cognitive uniqueness. We may soon find ourselves taking metaphysical shelter in the subjective experience of consciousness: the qualities we share with animals but not, so far, with A.I. “If there were gods, they would surely be laughing their heads off at the inconsistency of our logic,” she writes.
