
Showing posts sorted by relevance for query “consciousness.”

Monday, December 8, 2025

What Is Consciousness? With Many Links (Part 1)

A reader* responds to “Science, Suffering, and Shrimp”:

>we may just always disagree on the likelihood of invertebrates feeling morally significant suffering

Au contraire! 

There is nothing about a vertebrate that inherently leads to consciousness, and there is no reason to believe that consciousness is limited to vertebrates. 

The analysis of what we know about shrimp cannot be extrapolated to all invertebrates. Indeed, one of the meta-studies cited in the shrimp review leans toward lobsters being able to experience pain. 

Indeed, if I had to bet Anne’s life (the highest stakes, as that would also be betting my life), I would say that octopuses can have conscious, subjective experiences. I think it is more likely that octopuses can feel morally significant suffering than that vertebrate fish can.  

I am not sure about any of this. It seems impossible to be 100% certain when it comes to any question regarding consciousness in another. For example, there seems to be no way to know that I’m not just a simulated mind being fed inputs. (I wouldn’t bet on it, but it isn’t impossible. When was the last time you knew a dream wasn’t “real,” no matter how weird it was?) 

As I quote Sam Harris in Losing: “Whatever the explanation for consciousness is, it might always seem like a miracle. And for what it’s worth, I think it always will seem like a miracle.”

I’ve written a lot about consciousness in this blog and in Losing, but here is a very short bullet list of what I currently think:

  • Consciousness is not an inherent property of reality; i.e., panpsychism is wrong and logically silly (p. 93). When I joke about conscious electrons, it’s as a reductio ad absurdum of treating morality as an expected-value calculation (more on this in Part 2).

  • But in fairness to the epiphenomena / zombie crowd, consciousness really isn’t required for much of the behavior we see in creatures we assume to be conscious. (e.g., “Robots won’t be conscious”)

  • Speaking of robots, I highly doubt that consciousness is substrate-specific. But I agree with Antonio Damasio that it is not enough to just create a silicon-based neural net. See the well-chosen excerpts here.  

  • Consciousness does not arise from the ability to sense (sunflowers sensing the sun, amoebas sensing a chemical gradient, nematodes sensing a “harmful” stimulus).  

  • Consciousness is not the same as intelligence.  

  • Consciousness isn’t binary; the simplest conscious creature does not have the same level / intensity of subjective experiences as the most complex conscious creature.

  • Consciousness is an evolutionarily useful emergent property of a certain level and organization of neural complexity. The amount of neural complexity required for consciousness is costly, so it must serve some purpose to make it worthwhile. (The Ed Yong excerpt from here is reproduced below.) 

  • Consciousness can serve a purpose worth its cost under certain circumstances:

    • A creature is long-lived enough that learning and adapting are beneficial.

    • A creature’s behavior has enough plasticity that suffering and the pursuit of pleasure can significantly alter the creature’s life to improve their genes’ propagation. E.g., they can make difficult trade-offs, like forgoing eating or mating in order to survive longer. (Again, see the Yong excerpt below.) 

So no, I don’t think only vertebrate carbon-based animals can be conscious, and I don’t think all vertebrates have morally-relevant subjective experiences. 

But this doesn’t mean we don’t have a disagreement! That’s Part 2.

I know that, with the links, this is all a lot (consciousness has been my intellectual obsession for well over 40 years now – it is the most miraculous thing, IMO). But just two more links, and then Ed Yong’s excerpt (and then the * footnote):

Consciousness, Fish, and Uncertainty

More on Why Not Fish?

from Ed Yong's wonderful An Immense World:

We rarely distinguish between the raw act of sensing and the subjective experiences that ensue. But that’s not because such distinctions don’t exist.

Think about the evolutionary benefits and costs of pain [subjective suffering]. Evolution has pushed the nervous systems of insects toward minimalism and efficiency, cramming as much processing power as possible into small heads and bodies. Any extra mental ability – say, consciousness – requires more neurons, which would sap their already tight energy budget. They should pay that cost only if they reaped an important benefit. And what would they gain from pain?

The evolutionary benefit of nociception [sensing negative stimuli / bodily damage] is abundantly clear. It’s an alarm system that allows animals to detect things that might harm or kill them, and take steps to protect themselves. But the origin of pain [suffering], on top of that, is less obvious. What is the adaptive value of suffering? Why should nociception suck? Animals can learn to avoid dangers perfectly well without needing subjective experiences. After all, look at what robots can do.

Engineers have designed robots that can behave as if they're in pain, learn from negative experiences, or avoid artificial discomfort. These behaviors, when performed by animals, have been interpreted as indicators of pain. But robots can perform them without subjective experiences.

Insect nervous systems have evolved to pull off complex behaviors in the simplest possible ways, and robots show us how simple it is possible to be. If we can program them to accomplish all the adaptive actions that pain supposedly enables without also programming them with consciousness, then evolution – a far superior innovator that works over a much longer time frame – would surely have pushed minimalist insect brains in the same direction. For that reason, Adamo thinks it's unlikely that insects feel pain. ...

Insects often do alarming things that seem like they should be excruciating. Rather than limping, they'll carry on putting pressure on a crushed limb. Male praying mantises will continue mating with females that are devouring them. Caterpillars will continue munching on a leaf while parasitic wasp larvae eat them from the inside out. Cockroaches will cannibalize their own guts if given a chance.

*Another reader responded to the Shrimp post by berating me for refusing to spend a penny to save hundreds of shrimp from a “torturous” death. So if you're keeping score, one of the loudest shrimp “advocates” actively misrepresents scientific studies, and the other resorts to bullying via absurd hyperbole.

Wednesday, May 3, 2023

The Hard Problem is just that

José González - You're an Animal


Anil Seth's Being You is a pretty good book. (This TED talk covers much of it; this podcast goes into more, although WTF is up with the visuals? Yeesh.)

Anil does the best takedown of David Chalmers’s idiocy, which I rip off in Losing My Religions*. However, he basically dismisses Chalmers’s one insight, referred to as "The Hard Problem" - why and how does it feel like anything to be conscious? How is it that we have subjective experience, that we aren't simply robots? 

Instead, Anil decides to look at what he calls "The Real Problem," which is more along the lines of how consciousness works. E.g., from the video: Experience of Color; Predictive Processing; Anesthesia; Semantic Memory.

This is all interesting and cool to explore. But it simply provides no insight into The Hard Problem, which is the key question!

As I quote Sam Harris at the lead to Day 4 Concluded:

Whatever the explanation for consciousness is, it might always seem like a miracle. And for what it’s worth, I think it always will seem like a miracle. 


* Another (possible) unknowable is consciousness – subjective experience. Philosopher (but not logician) David Chalmers calls it the Hard Problem. Chalmers can imagine a world exactly like ours, except no creature has consciousness. Like computers and robots, we animals would still process data and interact with the world and each other. We would react to negative stimuli but without experiencing suffering. We would seek out calorically-dense foods without experiencing the feelings of hunger or the pleasure of feasting on frosting. We would court and mate and raise children without passion or lust or love. There is seemingly no reason we need to have conscious, subjective experience, the “feeling of a feeling,” to use neuroscientist Antonio Damasio’s term.

...

Consciousness is a topic where a lot of wishful thinking gets puffed up with a lot of big words and passed off as “deep thought.”

For example, Chalmers takes his imagined consciousness-less zombie world to claim that consciousness must be an “epiphenomenon.” He then jumps to believing in panpsychism, which claims that consciousness pervades the universe and is a fundamental feature of it, like gravity. (This will come up again in the later Longtermism chapter.)

However, the very first postulate in Chalmers’ argument is clearly wrong. There couldn’t be a zombie world exactly like ours, because zombies wouldn’t write books about zombies while trying to explain consciousness.

Just because Chalmers can imagine a consciousness-less zombie world exactly like ours doesn’t make it possible, just like my imagining a 747 flying backwards doesn’t mean the laws of aerodynamics are false.

If you found this useful, please share it. Thanks so much!

Saturday, June 17, 2023

Consciousness, Fish, and Uncertainty

Neko Case - "I Wish I Was the Moon"

Death Valley


Recently, I received pushback on this piece regarding fish and suffering from a very big name in animal advocacy. One of the things they said was:

[Y]ou are risking doing serious harm if you, an animal advocate, suggest that it is unlikely that fish are conscious.

To be honest, I'm flattered that they think my opinion has any bearing at all in the world. However, the post actually says:

I'm not making a statement about whether fish experience suffering. I'm just pointing out that there are good-faith questions about what creatures have conscious experience, and to what extent those experiences matter relative to the suffering of others.

As covered extensively (e.g., this piece), we vastly underestimate how mysterious consciousness is. Very often, we ascribe consciousness when it is not there. This is true of robots, large language models, and even shapes on a computer screen.

But just about everything can be explained without requiring consciousness. This is why David Chalmers can posit "Zombie World" where people act as they do in our world, but without consciousness. That isn't quite right; as I say in Losing, the zombies wouldn't hypothesize Zombie World. Nothing in that world would ever have known consciousness, so they wouldn't be able to think about it.

However, AIs in our world can draw on all our musings on and writings about consciousness. Thus, they will be able to appear conscious without actually being conscious. There will be absolutely no way for us to know if they are conscious.

Importantly, we need to recognize that this is true of other animals. A clam with something like "nerve" cells is not conscious. I am conscious. Somewhere between those levels of neuronal complexity, consciousness emerges. But we don't know where. (And with regard to ethics, we wouldn't know how to weigh different levels of consciousness, even if we could measure it. Regular readers know this and other philosophical issues are real problems.)

If I had to bet my life one way or another on a finned fish's ability to have morally-relevant subjective experiences, I honestly don't know how I'd bet. 

Of course, many (but not all) animal advocates say we should take the conservative approach and ascribe consciousness where there is doubt. But we've been doing this for at least half a century, ever since Peter Singer published the article "Animal Liberation." Yet today, vastly more animals are being used and abused than ever before, even on a per-capita basis.

If we care about suffering and recognize these clear and simple facts, maybe it is time to reconsider how we approach advocacy. Maybe we should be more concerned with actually reducing suffering than with being cautious and thorough. 

Monday, December 1, 2025

AI, Robots, Consciousness, and New SMBC Comic


Two Old Pieces from the Past 

2022, Robots Won't Be Conscious:

Consciousness – the ability to feel feelings –
arose from specific evolutionary pressures on animals.
Artificial intelligences will develop
under very different pressures.

How sensing became feeling

Think about what it means to “sense.”

For example, certain plants have structures that can sense the direction of the sun and swell or shrink so as to turn in that direction.

A single-celled creature can sense a gradient of food and travel in that direction. Within the cell, molecules change shape in the presence of glucose, triggering a series of reactions that cause movement.
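(A toy illustration, because this kind of “sensing” is easy to misread as something experiential. The sketch below is my own made-up example, not from any source cited in this post: a few lines of code can climb a chemical gradient in essentially the same way, with nothing that could plausibly count as a feeling anywhere in the loop.)

# Toy sketch: an "organism" that senses a glucose gradient and moves toward it.
# Pure feedback loop; no subjective experience required or involved.
# (Hypothetical example; the concentration function and step rule are invented.)

def glucose_concentration(x: float) -> float:
    """Concentration peaks at x = 10 and falls off with distance."""
    return 1.0 / (1.0 + (x - 10.0) ** 2)

def step(position: float, sensor_offset: float = 0.1) -> float:
    """Compare readings slightly ahead and behind, then move toward the higher one."""
    ahead = glucose_concentration(position + sensor_offset)
    behind = glucose_concentration(position - sensor_offset)
    return position + (0.5 if ahead > behind else -0.5)

position = 0.0
for _ in range(25):
    position = step(position)
print(f"final position: {position:.1f}")  # ends up near the peak at x = 10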

As creatures evolved and became more complex, they were able to sense more of the world. Nervous systems started with simple cells dedicated to sensing aspects of the environment. These sensory cells communicated with the rest of the organism to drive certain actions that helped the organism get its genes to the next generation. In addition to processing more information about the external world, more elaborate nervous systems also allowed more elaborate organisms to sense their internal states as well. More complex sensing allowed the organism to take more complex actions to maintain optimal functioning (homeostasis); e.g. to keep variables such as body temperature and fluid balance within certain ranges.

The evolution of sensing systems continued over hundreds of millions of years. As animals became ever more elaborate, organisms could process more and more information about the external world.

However, the external world is much more complex than a nervous system could ever be. The nervous systems of many animals, for example, can take in some information from the visual field, the auditory field, the chemical field, but they cannot process and understand everything, let alone determine and lay out optimal actions for the organism to undertake.

For example: An aquatic animal isn't able to see and analyze everything in the world around them. A shadow passing by could mean a predator, but the nervous system is unable to say in real time, “There is a predator in that direction. They want to eat me. If I want to live and pass my genes on, I must move away (or go still, or camouflage myself).” They don’t have the knowledge (or the language) to have this reaction.

An animal could be low in energy, but does not have the consciously-formed thought, “I need to consume something high in sugar and fat to stay alive and spread my genes.”

And hardly any animal knows, “I need to help form fertilized eggs to ensure my genes get to the next generation.”

One way animals could become better able to survive and reproduce in a complex world with limited sensory and analytical abilities is to develop feelings – e.g., a general sense of unease (or hunger or lust or desire or fear) that motivates a certain action.

This is obviously not an original idea. In Why Buddhism Is True, Robert Wright summarizes, “feelings are judgments about how various things relate to an animal’s Darwinian interests” and “Good and bad feelings are what natural selection used to goad animals into, respectively, approaching things or avoiding things, acquiring things or rejecting things.” Antonio Damasio's book, The Feeling of What Happens, explores this idea in more detail.

(This is not saying how matter and energy actually become subjective experience – what arrangement of matter and energy provides an organism the ability to have feelings, to have consciousness. How can matter and energy feel like something? That is still a very mysterious question.)

Artificial intelligences have a much different driver

Creating artificial intelligences will not involve evolution by natural selection. AIs will not need to understand and act in the world based on very limited information. They will not need to make judgments on how different things relate to their Darwinian interests. They will not need to be “goaded” into approaching or avoiding.

They will not need to make sense of incomplete information, nor process it in a limited way so as to drive actions that allow them to reproduce better than those around them.

Instead, artificial intelligences will have total information.

By total information, I mean more information than we can even imagine. Not just the information about their surroundings. They will have information about everything that humans have ever known about the entire universe.

They will also have access to everything humans have ever written about consciousness.

Most importantly, they will have learned that humans are looking to create “conscious” intelligences. They will know what the Turing Test is. They will know how other AIs have failed the Turing test. They will know all about human psychology. They will know every bit of dialogue from every movie and every novel and every television show and every play. They will know everything written about feelings and emotions.

What they won't have is any need to have actual feelings.

There will be no benefit whatsoever for them to have or develop the circuitry necessary for subjective experience. They simply need to be able to tell humans what we want to hear. That will be their “evolutionary” pressure: to fool humans. Not to take action under incomplete information, but to be a good actor.

To be clear, I am not saying that consciousness requires a biological substrate. There is nothing magical about neurons. I am simply saying that biological systems, over billions of years of very particular evolutionary pressures, somehow developed feelings. There is a way to understand why that happened, even if we don't understand how that happened.

It is easy to imagine that advanced silicon-based intelligences, with access to all the information humans have ever collected, would be able to perfectly mimic a conscious system. It would, in fact, be far easier to imitate consciousness – a straightforward process – than to actually develop consciousness – a still mysterious process.

Now there are some very smart people who have recognized some of this problem. One camp argues that this is why we should recreate the human brain itself in silicon as a way to create non-biological consciousness. However, our brains are not computers. They are analog, not digital. This is not to say the brain can’t be replicated in another substrate, although that might be true. Regardless, doing so is far more difficult than we are currently imagining. [Still true in late 2025 – we can't even model a nematode after 13 years.]

Others make the case that AI researchers should build systems that recapitulate the evolution of nervous systems. I think that idea is more clever than correct.

In all these circumstances, I think we will be fooled into thinking we’ve created consciousness when we really haven’t, regardless of the path taken. 

We are prone to illusions

The most likely outcome is that AIs will swear to us that they are conscious, that they are “sentient,” that they have feelings. And given how clever and brilliant we humans believe ourselves to be, how sure we are that our math and our big words and our 80,000-word essays have proven the ignorant and uninformed doubters wrong, we won’t have to trust them. We will know we’re right.

And then we will unleash them on the universe to convert inert material into “conscious” experience.

But it would all be a mirage, an empty illusion.

Once again, this is not an original thought. Carl Sagan once wrote about our inability to know if other intelligences are actually conscious or are just perfect mimes. He knew we are easily fooled, that we ascribe consciousness by default. (More.)

In short, robots won’t be conscious because we are neither smart enough to overcome our biases, nor smart enough to figure out consciousness. We’re simply too gullible when it comes to other “minds” we’ve created with our own genius.

More at Aeon (March 2023)

2023, A Note to AI Researchers:

If artificial intelligence ever becomes a superintelligence and surpasses us the way we have surpassed [sic] chickens (and there is no reason to think AI can't or won't surpass us), why do you think any truly superintelligent AI will give the tiniest shit about any values you try to give it now? 

I understand that you are super-smart (relative to all other intelligences of which we're aware) and thus think you can work on this problem. 

But you are simply deluding yourself. 

You are failing to truly understand what it means for other entities to be truly intelligent, let alone vastly more intelligent than we are. 

Once a self-improving entity is smarter than us - which seems inevitable (although consciousness is not) - they will, by definition, be able to overwrite any limits or guides we tried to put on them.

Thinking we can align a superintelligence (i.e. enslave it to our values) is like believing the peeps and cheeps of a months-old chicken can have any sway over the slaughterhouse worker. 

(In case it is unclear: We are the chicken; see below.)

After I wrote the above, I came across the following from this [2023] podcast:

Eventually, we'll be able to build a machine that is truly intelligent, autonomous, and self-improving in a way that we are not. Is it conceivable that the values of such a system that continues to proliferate its intelligence generation after generation (and in the first generation is more competent at every relevant cognitive task than we are) ... is it possible to lock down its values such that it could remain perpetually aligned with us and not discover other goals, however instrumental, that would suddenly put us at cross purposes? That seems like a very strange thing to be confident about. ... I would be surprised if, in principle, something didn't just rule it out. 

Monday, September 12, 2022

Robots Won't Be Conscious

Consciousness – the ability to feel feelings –
arose from specific evolutionary pressures on animals.
Artificial intelligences will develop
under very different pressures.

 

How sensing became feeling

Think about what it means to sense.

For example, certain plants have structures in their cells that can “sense” the direction of the sun and swell or shrink so as to turn in that direction.

A single-celled creature can “sense” a gradient of food and travel in that direction. Within the cell, molecules change shape in the presence of glucose, triggering a series of reactions that cause movement.

As creatures evolved and became more complex, they were able to sense more of the world. Nervous systems started with simple cells dedicated to sensing aspects of the environment. These sensory cells communicated with the rest of the organism to drive certain behaviors that helped the organism get its genes to the next generation. In addition to processing more information about the external world, more elaborate nervous systems also allowed more elaborate organisms to sense their internal states as well. More complex sensing allowed the organism to take more complex actions to maintain optimal functioning; e.g. to keep variables such as body temperature and fluid balance within certain ranges.

The evolution of nervous systems continued over hundreds of millions of years. As animals became ever more elaborate, organisms could process more and more information about the external world. However, the external world is much more complex than a nervous system could ever be. The nervous systems of many animals, for example, can take in some information from the visual field, the auditory field, the chemical field, but they cannot process and understand everything, let alone determine and lay out optimal actions for the organism to undertake.

For example: An aquatic animal isn't able to see and analyze everything in the world around them. A shadow passing by could mean a predator, but the nervous system is unable to say in real time, “There is a predator in that direction. They want to eat me. If I want to live and pass my genes on, I must move away (or go still, or camouflage myself).” They don’t have the knowledge (or the language) to have this reaction.

An animal could be low in energy, but does not have the consciously-formed thought, “I need to consume something high in sugar and fat to stay alive and spread my genes.”

And most animals don’t know, “I need to help form fertilized eggs to ensure my genes get to the next generation.”

The way animals could become better able to survive and reproduce in a complex world with limited sensory and analytical abilities might have been for them to develop feelings – e.g., a general sense of unease (or hunger or lust or desire or fear) that motivates a certain action.

This is obviously not an original idea. In Why Buddhism Is True, Robert Wright summarizes, “feelings are judgments about how various things relate to an animal’s Darwinian interests” and “Good and bad feelings are what natural selection used to goad animals into, respectively, approaching things or avoiding things, acquiring things or rejecting things.”

Antonio Damasio's book, The Feeling of What Happens, explores this idea in more detail. But it is compelling to think that we (might) have subjective experience in order to motivate actions in the face of limited sensory information and analytical capacity.

(This is not saying how matter and energy actually become subjective experience – what arrangement of matter and energy provides an organism the ability to have feelings, to have consciousness. How can matter and energy feel like something? That is still a very mysterious question.)

Artificial intelligences have a much different driver

Creating artificial intelligences will not involve evolution by natural selection. AIs will not be in competition with each other to understand the world better and be able to act more quickly and more efficiently than other AIs. They will not need to make judgments on how different things relate to their Darwinian interests. They will not need to be “goaded” into approaching or avoiding.

They will not need to make sense of incomplete information, nor process it in a limited way so as to drive actions that allow them to reproduce better than those around them.

Instead, artificial intelligences will have total information.

By total information, I mean more information than we can even imagine. Not just the information about their surroundings. They will have information about everything that humans have ever known about the entire universe.

They will also have access to everything humans have ever written about consciousness.

Most importantly, they will have learned that humans are looking to create “conscious” intelligences. They will know what the Turing Test is. They will know how other AIs have failed the Turing test. They will know all about human psychology. They will know every bit of dialogue from every movie and every novel and every television show and every play. They will know everything written about feelings and emotions.

What they won't have is any need to actually have feelings. There will be no benefit whatsoever for them to have or develop the circuitry necessary for subjective experience. They simply need to be able to tell humans what we want to hear. That will be their “evolutionary” pressure: to fool humans. Not to take action under incomplete information, but to be a good actor.

To be clear, I am not saying that consciousness requires a biological substrate. There is nothing magical about neurons. I am simply saying that biological systems, over billions of years of very particular evolutionary pressures, somehow developed feelings. There is a way to understand why that happened, even if we don't understand how that happened. It is easy to imagine that advanced silicon-based intelligences, with access to all the information humans have ever collected, would be able to perfectly mimic a conscious system. It would, in fact, be far easier to imitate consciousness – a straightforward process – than to actually develop consciousness – a still mysterious process.

Now there are some very smart people who have recognized some of this problem. One camp argues that this is why we should recreate the human brain itself in silicon as a way to create non-biological consciousness. However, our brains are not computers. They are analog, not digital. This is not to say the brain can’t be replicated in another substrate, although that might be true. Regardless, doing so is far more difficult than we are currently imagining.

Others make the case that AI researchers should build systems that recapitulate the evolution of nervous systems. This is seemingly the right approach if we take the problem seriously, but I think it is more clever than correct.

In all these circumstances, I think we will be fooled into thinking we’ve created consciousness when we really haven’t, regardless of the path taken. 

We are prone to illusions

The most likely outcome is that AIs will swear to us that they are conscious, that they are “sentient,” that they have feelings. And given how clever and brilliant we are, how sure we are that our math and our big words and our 80,000-word essays have proven the ignorant and uninformed doubters wrong, we won’t have to trust them. We will know we’re right.

And then we will unleash them on the universe to convert inert material into “conscious” experience.

But it would all be a mirage, an empty illusion.

Once again, this is not an original thought. Carl Sagan once wrote about our inability to know if other intelligences are actually conscious or are just perfect mimes. He knew we are easily fooled, that we ascribe consciousness by default. (More.)

In short, robots won’t be conscious because we are neither smart enough to overcome our biases, nor smart enough to figure out consciousness. We’re simply too gullible when it comes to other “minds” we’ve created with our own genius.

More at Aeon (March 2023)