

Monday, December 8, 2025

What Is Consciousness? With Many Links (Part 1)

A reader* responds to “Science, Suffering, and Shrimp”:

>we may just always disagree on the likelihood of invertebrates feeling morally significant suffering

Au contraire! 

There is nothing about a vertebrate that inherently leads to consciousness, and there is no reason to believe that consciousness is limited to vertebrates. 

The analysis of what we know about shrimp cannot be extrapolated to all invertebrates. Indeed, one of the meta-studies cited in the shrimp review leans toward lobsters being able to experience pain. 

In fact, if I had to bet Anne’s life (the highest stakes, as that would also be betting my life), I would say that octopuses can have conscious, subjective experiences. I think it is more likely that octopuses can feel morally significant suffering than that vertebrate fish can.  

I am not sure about any of this. It seems impossible to be 100% certain when it comes to any question regarding consciousness in another. For example, there seems to be no way to know that I’m not just a simulated mind being fed inputs. (I wouldn’t bet on it, but it isn’t impossible. When was the last time you realized, while dreaming, that the dream wasn’t “real,” no matter how weird it was?) 

As I quote Sam Harris in Losing: “Whatever the explanation for consciousness is, it might always seem like a miracle. And for what it’s worth, I think it always will seem like a miracle.”

I’ve written a lot about consciousness in this blog and in Losing, but here is a very short bullet list of what I currently think:

  • Consciousness is not an inherent property of reality; i.e., panpsychism is wrong and logically silly (p. 93). When I joke about electrons, it is as a reductio ad absurdum of the idea that morality can be reduced to expected value (more on this in Part 2).

  • But in fairness to the epiphenomena / zombie crowd, consciousness really isn’t required for much of the behavior we see in creatures we assume to be conscious. (e.g., “Robots Won’t Be Conscious”)

  • Speaking of robots, I highly doubt that consciousness is substrate-specific. But I agree with Antonio Damasio that it is not enough to just create a silicon-based neural net. See the well-chosen excerpts here.  

  • Consciousness does not arise from the ability to sense (sunflowers sensing the sun, amoebas sensing a chemical gradient, nematodes sensing a “harmful” stimulus).  

  • Consciousness is not the same as intelligence.  

  • Consciousness isn’t binary; the simplest conscious creature does not have the same level / intensity of subjective experiences as the most complex conscious creature.

  • Consciousness is an evolutionarily useful emergent property of a certain level and organization of neural complexity. The amount of neural complexity required for consciousness is costly, so it must serve some purpose to make it worthwhile. (The Ed Yong excerpt from here is reproduced below.) 

  • Consciousness can serve a purpose worth its cost under certain circumstances:

    • A creature is long-lived enough that learning and adapting are beneficial.

    • A creature’s behavior has enough plasticity that suffering and the pursuit of pleasure can significantly alter the creature’s life in ways that improve their genes’ propagation; e.g., they can make difficult trade-offs, like forgoing eating or mating in order to survive longer. (Again, see the Yong excerpt below.) 

So no, I don’t think only vertebrate carbon-based animals can be conscious, and I don’t think all vertebrates have morally relevant subjective experiences. 

But this doesn’t mean we don’t have a disagreement! That’s Part 2.

I know that, with the links, this is all a lot (consciousness has been my intellectual obsession for well over 40 years now – it is the most miraculous thing, IMO). But just two more links, and then Ed Yong’s excerpt (and then the * footnote):

Consciousness, Fish, and Uncertainty

More on Why Not Fish?

from Ed Yong's wonderful An Immense World:

We rarely distinguish between the raw act of sensing and the subjective experiences that ensue. But that’s not because such distinctions don’t exist.

Think about the evolutionary benefits and costs of pain [subjective suffering]. Evolution has pushed the nervous systems of insects toward minimalism and efficiency, cramming as much processing power as possible into small heads and bodies. Any extra mental ability – say, consciousness – requires more neurons, which would sap their already tight energy budget. They should pay that cost only if they reaped an important benefit. And what would they gain from pain?

The evolutionary benefit of nociception [sensing negative stimuli / bodily damage] is abundantly clear. It’s an alarm system that allows animals to detect things that might harm or kill them, and take steps to protect themselves. But the origin of pain [suffering], on top of that, is less obvious. What is the adaptive value of suffering? Why should nociception suck? Animals can learn to avoid dangers perfectly well without needing subjective experiences. After all, look at what robots can do.

Engineers have designed robots that can behave as if they're in pain, learn from negative experiences, or avoid artificial discomfort. These behaviors, when performed by animals, have been interpreted as indicators of pain. But robots can perform them without subjective experiences.

Insect nervous systems have evolved to pull off complex behaviors in the simplest possible ways, and robots show us how simple it is possible to be. If we can program them to accomplish all the adaptive actions that pain supposedly enables without also programming them with consciousness, then evolution – a far superior innovator that works over a much longer time frame – would surely have pushed minimalist insect brains in the same direction. For that reason, Adamo thinks it's unlikely that insects feel pain. ...

Insects often do alarming things that seem like they should be excruciating. Rather than limping, they'll carry on putting pressure on a crushed limb. Male praying mantises will continue mating with females that are devouring them. Caterpillars will continue munching on a leaf while parasitic wasp larvae eat them from the inside out. Cockroaches will cannibalize their own guts if given a chance.

*Another reader responded to the Shrimp post by berating me for refusing to spend a penny to save hundreds of shrimp from a “torturous” death. So if you're keeping score, one of the loudest shrimp “advocates” actively misrepresents scientific studies, and the other resorts to bullying via absurd hyperbole.

Thursday, December 4, 2025

Science, Suffering, and Shrimp

Multiple people have insisted to me that “peer-reviewed science” had “proven” that shrimp suffer and thus deserve our focus. 


Below is Rob Velzeboer’s research report (with ChatGPT, reviewed by me and Anne) on what actual evidence we have regarding shrimp. Rob focused on the morally relevant issue of subjective experience, not just the ability to “sense.”  


I will add one thing to the Conclusion: The recent EA charity collection featured multiple organizations focused on shrimp and arthropods. Only one – Legal Impact for Chickens – focuses on factory-farmed chickens. In just a few months, advocacy for shrimp raised more money than chicken advocacy organization One Step for Animals has received in 11+ years; One Step will probably cease to exist in a few years due to a lack of funding.


Over the years, various people have asked me why I harp on suffering versus math / expected value so much. It is because each one of us has the ability to help many individuals who are horribly and unnecessarily suffering (e.g., examples that came in while I was working on this introduction: 1, 2). Yet the “hip” thing is to focus attention and millions of dollars on “mathy” areas, such as creatures who probably don’t suffer at all; and even if they do, their maximum suffering is negligible compared to others we could help.
_______________

Do the Shrimp We Eat Actually Suffer? 


Scientific and public interest in animal sentience has expanded rapidly, especially for animals outside the usual vertebrate focus. Decapod crustaceans – crabs, lobsters, prawns, and shrimp – have become a central case study. This recent attention has been shaped by a handful of major reviews, including a comprehensive PeerJ synthesis and the London School of Economics (LSE) “Decapod Sentience” report. These reviews evaluate the scattered literature and help determine which animals might genuinely have the subjective, conscious experience of suffering.


Across these assessments, a consistent pattern emerges. Some decapods, such as crabs and lobsters, show reasonably strong evidence for pain-like experience. But for the shrimp humans eat most commonly – Litopenaeus vannamei and Penaeus monodon – the evidence is thin, fragmented, contradictory, and highly uncertain. 


Minimal but Plausible Foundations: What We Know About Nociception


Before scientists can talk about suffering and pain, they look for the most basic requirement: nociception, the ability to detect harmful or irritating stimuli. Here the evidence for penaeid shrimp – those we eat – is reasonably solid. Both the PeerJ review and the LSE report give them “High” confidence for nociceptors, meaning they have sensory neurons tuned to potentially damaging events.


But nociception is not pain, let alone suffering. A reaction to a harmful stimulus does not, by itself, imply any subjective experience. For there to be evidence of possible conscious experience of pain, signs of deeper processing, such as learning from harm or weighing avoidance against competing needs, are required. [This is necessary but not sufficient, though, given our lack of understanding of consciousness. It is easy to imagine robots able to react to harmful stimuli and learn from “pain” without any subjective suffering. -ed]


Only one line of evidence in the shrimp species we farm the most, L. vannamei, indicates even the start of this process. During eyestalk ablation, L. vannamei show escape behaviours: erratic swimming, tail flicks, and attempts to withdraw. Applying lidocaine has been shown to reduce these reactions.


On the surface, this might suggest that something is being suppressed. But lidocaine introduces a major interpretive problem: anaesthetics can reduce movement simply because they sedate the animal, not because they relieve any subjective experience of pain. A sedated shrimp might move less regardless of how it “feels.”

More importantly, blocking the signalling of neurons with lidocaine would reduce even reflexive, non-conscious harm-avoidance, like disabling a sensor on a robot. With no follow-up, the finding remains highly ambiguous.


Behavioural Ambiguities: Rubbing, Grooming, and Failed Replications


Researchers have also looked to behaviours that seem more complex than reflex withdrawal – particularly targeted grooming or rubbing of a body part after irritation. One early study on Palaemon elegans, a shrimp-like crustacean, found that applying acetic acid or sodium hydroxide to a single antenna led to sustained, location-specific grooming, and that these behaviours were reduced by local anaesthetic.


This initially appeared to be a potential indicator of a pain-like reaction. But a later replication attempt by Puri and Faulkes (2010) tested the same idea in three species:

  • Litopenaeus setiferus (a close relative of L. vannamei),

  • Procambarus clarkii (red swamp crayfish), and

  • Macrobrachium rosenbergii (giant freshwater prawn).


All three are decapods; importantly, two are actual shrimps/prawns and one is a crayfish, so these were not distant comparisons.


Across all species tested, the authors found:

  • No directed grooming or rubbing in response to the same kinds of chemical irritants.

  • No behavioural reaction even when stronger stimuli were used.

  • No evidence of pH-sensitive nociceptors in the antennae.


These results directly contradict the earlier claims regarding P. elegans. They also illustrate how fragile the evidence base is: one shrimp-like species is reported to have shown a behaviour interpreted as pain-like, while closely related species – including one nearly identical to the shrimp we farm – show nothing. Sceptical reviewers (e.g., Key et al. 2022) point to these failures of replication as major reasons to doubt strong claims of pain in shrimp.


Evaluating the Criteria: Where Penaeid Shrimp Score Low


Modern sentience frameworks assess evidence across multiple dimensions:

  1. Possession of nociceptors (i.e., receptors tuned to noxious stimuli)  

  2. Possession of integrative brain regions (brain structures capable of integrating sensory and other information)  

  3. Connections between nociceptors and integrative brain regions (i.e., plausible neural pathways from detection to central processing)  

  4. Modulation of responses by analgesics, anaesthetics, or opioids (i.e., evidence that application of such substances reduces reactions to noxious stimuli)  

  5. Motivational trade-offs (behaviour indicating that the animal trades off potential harm against reward or other needs)  

  6. Flexible self-protection behaviours (for example, wound-directed grooming, guarding, protective postures)  

  7. Associative learning, especially avoidance learning (i.e., learning to avoid stimuli previously associated with harm)  

  8. Behavioural indicators of negative affective states (broadly: behaviour plausibly consistent with distress, rather than mere reflex withdrawal)  


Penaeid shrimp score:

  • High for nociceptors

  • Medium (at best) for modulation of responses (based on one non-replicated lidocaine study)

  • Low or Very Low for all other criteria


Importantly, these low ratings are not “proof of absence.” They reflect how little research has been done and how few studies test for complex behaviours. The PeerJ review notes that “negative affective states remain undetermined,” meaning that we simply lack the kind of evidence that would allow even a remotely confident conclusion either way.


The Big Missing Piece: Decision-Making and Motivation


The strongest evidence for pain in crabs and lobsters comes from studies showing:

  • learned avoidance of harmful stimuli,

  • balancing avoidance against food, shelter, or mating opportunities,

  • persistent protective behaviour long after injury,

  • and flexible responses that change with context.


These are not immediate reflexes – they indicate some further evaluation, which could be suggestive of (but not proof of) subjective experience.


For penaeid shrimp, none of these behaviours have been demonstrated. There is currently no evidence that they learn from injury, make trade-offs, or alter behaviour in a long-term, sustained, adaptive way. Without decision-level evidence, claims of pain (let alone suffering) remain speculative at best.


Policy, Precaution, and Divergent Interpretations


The UK government now classifies all decapods, including shrimp, as sentient animals. But the LSE authors explicitly state that the inclusion of shrimp rests on precaution and on evidence from better-studied decapods – not on strong data specific to L. vannamei or P. monodon.


Sceptics argue that, without robust evidence, interpreting shrimp reactions as the subjective experience of pain risks mistaking simple reflex arcs or sedation effects for conscious, morally-relevant experience, especially given conflicting evidence on self-protective behaviours (wound grooming).


Bottom Line: Real Uncertainty, Minimal Evidence, and a Broader Ethical Context


At present, the scientific record provides some weak (and contradictory) evidence that the shrimp we eat might have some minimal capacity for sensing adverse stimuli. They have nociceptors. One study, whose findings failed to replicate, indicated that one shrimp-like species reacts to injury. 


But the deeper hallmarks of subjective, experienced pain – learning, motivation, decision-making, context-sensitivity – have not been shown. The most widely farmed species, L. vannamei, has only one indirect study on a highly artificial procedure. P. monodon has no direct evidence at all.


Thus the most honest assessment is this:

Shrimp may or may not feel pain, and we do not yet know whether any such experience would be meaningful or morally weighty. The actual evidence neither meets the criteria for, nor supports, any claim of suffering. The question is profoundly understudied.


Conclusion: The Broader, More Important Point


While shrimp remain an open scientific question, other forms of industrial animal production – particularly broiler chicken farming and the intensive confinement of pigs – are not uncertain in the slightest. For chickens, the evidence of severe and prolonged suffering is overwhelming. Lameness, bone deformities, chronic pain, rapid-growth pathologies, heat stress, and overcrowding are documented across thousands of studies. Their long-term behavior meets all the criteria that scientists have associated with suffering. The suffering is intense and the scale is immense. Unlike shrimp, the existence of deep, meaningful, subjective pain in chickens is not a scientific mystery.


So while shrimp deserve better research, they should not distract from the places where we already know, with absolute clarity, that animals experience intense suffering at industrial scale – especially those individuals, such as chickens, who receive relatively minimal attention.

Monday, December 1, 2025

AI, Robots, Consciousness, and New SMBC Comic


Two Pieces from the Past 

2022, Robots Won't Be Conscious:

Consciousness – the ability to feel feelings – arose from specific evolutionary pressures on animals. Artificial intelligences will develop under very different pressures.

How sensing became feeling

Think about what it means to “sense.”

For example, certain plants have structures that can sense the direction of the sun and swell or shrink so as to turn in that direction.

A single-celled creature can sense a gradient of food and travel in that direction. Within the cell, molecules change shape in the presence of glucose, triggering a series of reactions that cause movement.

As creatures evolved and became more complex, they were able to sense more of the world. Nervous systems started with simple cells dedicated to sensing aspects of the environment. These sensory cells communicated with the rest of the organism to drive certain actions that helped the organism get its genes to the next generation. In addition to processing more information about the external world, more elaborate nervous systems also allowed more elaborate organisms to sense their internal states. More complex sensing allowed the organism to take more complex actions to maintain optimal functioning (homeostasis); e.g., to keep variables such as body temperature and fluid balance within certain ranges.

The evolution of sensing systems continued over hundreds of millions of years. As animals became ever more elaborate, they could process more and more information about the external world.

However, the external world is much more complex than a nervous system could ever be. The nervous systems of many animals, for example, can take in some information from the visual field, the auditory field, and the chemical field, but they cannot process and understand everything, let alone determine and lay out optimal actions for the organism to undertake.

For example: An aquatic animal isn't able to see and analyze everything in the world around them. A shadow passing by could mean a predator, but the nervous system is unable to say in real time, “There is a predator in that direction. They want to eat me. If I want to live and pass my genes on, I must move away (or go still, or camouflage myself).” They don’t have the knowledge (or the language) to have this reaction.

An animal could be low in energy, but does not have the consciously formed thought, “I need to consume something high in sugar and fat to stay alive and spread my genes.”

And hardly any animal knows, “I need to help form fertilized eggs to ensure my genes get to the next generation.”

One way animals could become better able to survive and reproduce in a complex world with limited sensory and analytical abilities is to develop feelings – e.g., a general sense of unease (or hunger or lust or desire or fear) that motivates a certain action.

This is obviously not an original idea. In Why Buddhism Is True, Robert Wright summarizes, “feelings are judgments about how various things relate to an animal’s Darwinian interests” and “Good and bad feelings are what natural selection used to goad animals into, respectively, approaching things or avoiding things, acquiring things or rejecting things.” Antonio Damasio's book, The Feeling of What Happens, explores this idea in more detail.

(This does not explain how matter and energy actually become subjective experience – what arrangement of matter and energy gives an organism the ability to have feelings, to have consciousness. How can matter and energy feel like something? That is still a very mysterious question.)

Artificial intelligences have a much different driver

Creating artificial intelligences will not involve evolution by natural selection. AIs will not need to understand and act in the world based on very limited information. They will not need to make judgments on how different things relate to their Darwinian interests. They will not need to be “goaded” into approaching or avoiding.

They will not need to make sense of incomplete information, nor process it in a limited way so as to drive actions that allow them to reproduce better than those around them.

Instead, artificial intelligences will have total information.

By total information, I mean more information than we can even imagine. Not just the information about their surroundings. They will have information about everything that humans have ever known about the entire universe.

They will also have access to everything humans have ever written about consciousness.

Most importantly, they will have learned that humans are looking to create “conscious” intelligences. They will know what the Turing Test is. They will know how other AIs have failed the Turing test. They will know all about human psychology. They will know every bit of dialogue from every movie and every novel and every television show and every play. They will know everything written about feelings and emotions.

What they won't have is any need to have actual feelings.

There will be no benefit whatsoever for them to have or develop the circuitry necessary for subjective experience. They simply need to be able to tell humans what we want to hear. That will be their “evolutionary” pressure: to fool humans. Not to take action under incomplete information, but to be a good actor.

To be clear, I am not saying that consciousness requires a biological substrate. There is nothing magical about neurons. I am simply saying that biological systems, over billions of years of very particular evolutionary pressures, somehow developed feelings. There is a way to understand why that happened, even if we don't understand how that happened.

It is easy to imagine that advanced silicon-based intelligences, with access to all the information humans have ever collected, would be able to perfectly mimic a conscious system. It would, in fact, be far easier to imitate consciousness – a straightforward process – than to actually develop consciousness – a still mysterious process.

Now, some very smart people have recognized part of this problem. One camp argues that this is why we should recreate the human brain itself in silicon as a way to create non-biological consciousness. However, our brains are not computers. They are analog, not digital. This is not to say the brain can’t be replicated in another substrate (although it may in fact be impossible). Regardless, doing so is far more difficult than we are currently imagining. [Still true in late 2025 – we can't even model a nematode after 13 years.]

Others make the case that AI researchers should build systems that recapitulate the evolution of nervous systems. I think that idea is more clever than correct.

Whatever path is taken, I think we will be fooled into thinking we’ve created consciousness when we really haven’t. 

We are prone to illusions

The most likely outcome is that AIs will swear to us that they are conscious, that they are “sentient,” that they have feelings. And given how clever and brilliant we humans believe ourselves to be – how sure we are that our math and our big words and our 80,000-word essays have proven the ignorant and uninformed doubters wrong – we won’t have to trust them. We will know we’re right.

And then we will unleash them on the universe to convert inert material into “conscious” experience.

But it would all be a mirage, an empty illusion.

Once again, this is not an original thought. Carl Sagan once wrote about our inability to know if other intelligences are actually conscious or are just perfect mimes. He knew we are easily fooled, that we ascribe consciousness by default. (More.)

In short, robots won’t be conscious because we are neither smart enough to overcome our biases, nor smart enough to figure out consciousness. We’re simply too gullible when it comes to other “minds” we’ve created with our own genius.

More at Aeon (March 2023)

2023, A Note to AI Researchers:

If artificial intelligence ever becomes a superintelligence and surpasses us the way we have surpassed [sic] chickens (and there is no reason to think AI can't or won't surpass us), why do you think any truly superintelligent AI will give the tiniest shit about any values you try to give it now? 

I understand that you are super-smart (relative to all other intelligences of which we're aware) and thus think you can work on this problem. 

But you are simply deluding yourself. 

You are failing to truly understand what it means for other entities to be truly intelligent, let alone vastly more intelligent than we are. 

Once a self-improving entity is smarter than us – which seems inevitable (although consciousness is not) – they will, by definition, be able to overwrite any limits or guides we tried to put on them.

Thinking we can align a superintelligence (i.e., enslave it to our values) is like believing the peeps and cheeps of a months-old chicken can have any sway over the slaughterhouse worker. 

(In case it is unclear: We are the chicken; see below.)

After I wrote the above, I came across the following from this [2023] podcast:

Eventually, we'll be able to build a machine that is truly intelligent, autonomous, and self-improving in a way that we are not. Is it conceivable that the values of such a system that continues to proliferate its intelligence generation after generation (and in the first generation is more competent at every relevant cognitive task than we are) ... is it possible to lock down its values such that it could remain perpetually aligned with us and not discover other goals, however instrumental, that would suddenly put us at cross purposes? That seems like a very strange thing to be confident about. ... I would be surprised if, in principle, something didn't just rule it out.