tl;dr: The ability to sense harmful things – nociception – developed early in animal evolution. Only much later, when animals became long-lived and limited in how often they could reproduce, did the extra, very expensive neural hardware evolve that allows for conscious, subjective experience of suffering. That experience let those animals learn in ways that pay off over a long lifespan.
Our "pain"-relieving medicines mostly work on our system of nociception, not the mental processes that turn those signals into pain (opioids might be an exception). So our "pain" medicines will work on other animals' nociception pathways, too, regardless of whether they actually experience subjective pain.
If this isn't enough to break the idea that "pain" and morally-relevant suffering are the same, please see this much deeper explanation, the most honest, good-faith exploration of the topic out there (even though I disagree with Luke).
This is important for several reasons.
The first is that many empathetic people (including many vegans) are inclined to equate "pain" with "suffering" and see "suffering" everywhere. I know a vegan who heard of a robot "escaping" and felt a twinge of moral sympathy for the robot.
Indeed, many vegans spend more time trying to "prove" that bees are being exploited by the honey industry than they do actually trying to effectively decrease factory farming.
And, of course, some EAs harp on insects in an attempt to "one up" everyone else's expected value.
Both groups include people so invested in their position that they will angrily shout down, or bombard with endless rants, anyone who expresses doubts. It is like facing people who only watch Fox News.
This is not only a huge waste of time and energy; it also makes vegans and EAs look nuts. (Sorry to be blunt, but both vegans and EAs have image problems. It isn't enough to be "right." What matters is being effective, and that involves concern for appearances.)
In addition to the Open Phil report, check out these excerpts from Ed Yong's wonderful An Immense World:
We rarely distinguish between the raw act of sensing and the subjective experiences that ensue. But that’s not because such distinctions don’t exist.
Think about the evolutionary benefits and costs of pain [subjective suffering]. Evolution has pushed the nervous systems of insects toward minimalism and efficiency, cramming as much processing power as possible into small heads and bodies. Any extra mental ability – say, consciousness – requires more neurons, which would sap their already tight energy budget. They should pay that cost only if they reaped an important benefit. And what would they gain from pain?
The evolutionary benefit of nociception [sensing negative stimuli / bodily damage] is abundantly clear. It’s an alarm system that allows animals to detect things that might harm or kill them, and take steps to protect themselves. But the origin of pain, on top of that, is less obvious. What is the adaptive value of suffering? Why should nociception suck? Animals can learn to avoid dangers perfectly well without needing subjective experiences. After all, look at what robots can do.
Engineers have designed robots that can behave as if they're in pain, learn from negative experiences, or avoid artificial discomfort. These behaviors, when performed by animals, have been interpreted as indicators of pain. But robots can perform them without subjective experiences.
Insect nervous systems have evolved to pull off complex behaviors in the simplest possible ways, and robots show us how simple it is possible to be. If we can program them to accomplish all the adaptive actions that pain supposedly enables without also programming them with consciousness, then evolution – a far superior innovator that works over a much longer time frame – would surely have pushed minimalist insect brains in the same direction. For that reason, Adamo thinks it's unlikely that insects (or crustaceans) feel pain. ...
Insects often do alarming things that seem like they should be excruciating. Rather than limping, they'll carry on putting pressure on a crushed limb. Male praying mantises will continue mating with females that are devouring them. Caterpillars will continue munching on a leaf while parasitic wasp larvae eat them from the inside out. Cockroaches will cannibalize their own guts if given a chance.
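The robot point in the excerpt above can be made concrete. Here is a minimal sketch (my own hypothetical illustration, not anything from the book) of "nociception without pain": a simple agent with a damage-detection signal and an avoidance-learning rule. It learns to stop approaching a harmful stimulus, yet there is no subjective experience anywhere in the loop.

```python
class Agent:
    """A toy agent that learns avoidance from a raw damage signal."""

    def __init__(self):
        # Learned avoidance weight per stimulus; starts neutral.
        self.avoidance = {"hot_surface": 0.0, "cool_surface": 0.0}

    def nociceptor(self, stimulus):
        # Raw damage detection: fires only for the harmful stimulus.
        return 1.0 if stimulus == "hot_surface" else 0.0

    def approaches(self, stimulus):
        # Approach unless learned avoidance outweighs curiosity.
        return self.avoidance[stimulus] < 0.5

    def experience(self, stimulus, learning_rate=0.3):
        if self.approaches(stimulus):
            damage = self.nociceptor(stimulus)
            # Reinforce avoidance in proportion to the damage signal.
            self.avoidance[stimulus] += learning_rate * damage


agent = Agent()
for stimulus in ["hot_surface", "cool_surface"] * 5:
    agent.experience(stimulus)

# After a couple of painful-looking encounters, the agent reliably
# avoids the harmful stimulus while still approaching the safe one.
```

That is the excerpt's argument in miniature: all the adaptive behavior that "pain" supposedly enables (detect damage, learn, avoid) fits in a few lines of bookkeeping, with nothing that suffers.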
6 comments:
Do you think it will ever be possible, even in principle, to empirically prove whether anything other than oneself is conscious, and therefore capable of subjective suffering?
Hi Ryan,
Short answer: No. I don't think we can prove anything like that. We could be in a simulation where our consciousness (and its contents) are all that is "real." (Sam Harris has talked a lot about this.)
But do you think this fact (i.e., that conscious experience is *in principle* unobservable, and is the only phenomenon known for certain to exist that is *in principle* unobservable) calls standard physicalism into question?
I mean, in what sense is conscious experience, which is what the philosophers call "phenomenal consciousness", physical? If it's *in principle* unobservable, has no volume or mass (or location?), and has a true, ontic existence, then in what sense is it physical?
This is why I believe physicalism is a logically untenable metaphysical worldview.
I think that, *if* we're not in a simulation, then we can find the neural correlates of consciousness (e.g., by disrupting / manipulating them). But I think it is probable that we can't *know* why that neural setup *feels* like something.
If we're in a simulation, then we can't prove / know anything, not even the laws of physics.
"Phenomenal consciousness" is physical because it is driving my fingers to type musings about phenomenal consciousness.
By my read, humans have a long history of wanting things to be "more" than just matter and energy.
Hello! May I ask what your thoughts are on negative utilitarianism? And also antinatalism? Any thoughts on David Benatar's work on these subjects? Considering your life's work, I would be curious to know. Sorry if that's too many questions at once tho, haha
Thanks for the questions. Honestly, I’m not sure of the answer. If you’d like to hear me think out loud on these topics, please check out https://www.losingmyreligions.net/
The pdf (first link) is free.
The chapter “Biting the Philosophical Bullet” is the most relevant, then “I Welcome Our Robot Overlords.”
I mention Benatar in the chapter “Fight the Power 2: To Breed or Not to Breed”
I hope you find at least some of that interesting.
Take care!