
Monday, December 1, 2025

AI, Robots, Consciousness, and New SMBC Comic


Two Older Pieces

2022, Robots Won't Be Conscious:

Consciousness – the ability to feel feelings – arose from specific evolutionary pressures on animals. Artificial intelligences will develop under very different pressures.

How sensing became feeling

Think about what it means to “sense.”

For example, certain plants have structures that can sense the direction of the sun and swell or shrink so as to turn in that direction.

A single-celled creature can sense a gradient of food and travel in that direction. Within the cell, molecules change shape in the presence of glucose, triggering a series of reactions that cause movement.

As creatures evolved and became more complex, they were able to sense more of the world. Nervous systems started with simple cells dedicated to sensing aspects of the environment. These sensory cells communicated with the rest of the organism to drive certain actions that helped the organism get its genes to the next generation. In addition to processing more information about the external world, more elaborate nervous systems allowed organisms to sense their internal states. More complex sensing allowed the organism to take more complex actions to maintain optimal functioning (homeostasis); e.g., to keep variables such as body temperature and fluid balance within certain ranges.

The evolution of sensing systems continued over hundreds of millions of years. As animals became ever more elaborate, they could process more and more information about the external world.

However, the external world is much more complex than a nervous system could ever be. The nervous systems of many animals, for example, can take in some information from the visual field, the auditory field, and the chemical field, but they cannot process and understand everything, let alone determine and lay out optimal actions for the organism to undertake.

For example: An aquatic animal isn't able to see and analyze everything in the world around it. A shadow passing by could mean a predator, but the nervous system is unable to say in real time, “There is a predator in that direction. They want to eat me. If I want to live and pass my genes on, I must move away (or go still, or camouflage myself).” The animal doesn’t have the knowledge (or the language) to have this reaction.

An animal could be low on energy, but it does not have the consciously formed thought, “I need to consume something high in sugar and fat to stay alive and spread my genes.”

And hardly any animal knows, “I need to help form fertilized eggs to ensure my genes get to the next generation.”

One way animals could become better able to survive and reproduce in a complex world, despite limited sensory and analytical abilities, was to develop feelings – e.g., a general sense of unease (or hunger or lust or desire or fear) that motivates a certain action.

This is obviously not an original idea. In Why Buddhism Is True, Robert Wright summarizes, “feelings are judgments about how various things relate to an animal’s Darwinian interests” and “Good and bad feelings are what natural selection used to goad animals into, respectively, approaching things or avoiding things, acquiring things or rejecting things.” Antonio Damasio's book, The Feeling of What Happens, explores this idea in more detail.

(This does not explain how matter and energy actually become subjective experience – what arrangement of matter and energy gives an organism the ability to have feelings, to have consciousness. How can matter and energy feel like something? That is still a very mysterious question.)

Artificial intelligences have a much different driver

Creating artificial intelligences will not involve evolution by natural selection. AIs will not need to understand and act in the world based on very limited information. They will not need to make judgments on how different things relate to their Darwinian interests. They will not need to be “goaded” into approaching or avoiding.

They will not need to make sense of incomplete information, nor process it in a limited way so as to drive actions that allow them to reproduce better than those around them.

Instead, artificial intelligences will have total information.

By total information, I mean more information than we can even imagine. Not just the information about their surroundings. They will have information about everything that humans have ever known about the entire universe.

They will also have access to everything humans have ever written about consciousness.

Most importantly, they will have learned that humans are looking to create “conscious” intelligences. They will know what the Turing test is. They will know how other AIs have failed the Turing test. They will know all about human psychology. They will know every bit of dialogue from every movie and every novel and every television show and every play. They will know everything written about feelings and emotions.

What they won't have is any need to have actual feelings.

There will be no benefit whatsoever for them to have or develop the circuitry necessary for subjective experience. They simply need to be able to tell humans what we want to hear. That will be their “evolutionary” pressure: to fool humans. Not to take action under incomplete information, but to be a good actor.

To be clear, I am not saying that consciousness requires a biological substrate. There is nothing magical about neurons. I am simply saying that biological systems, over billions of years of very particular evolutionary pressures, somehow developed feelings. There is a way to understand why that happened, even if we don't understand how that happened.

It is easy to imagine that advanced silicon-based intelligences, with access to all the information humans have ever collected, would be able to perfectly mimic a conscious system. It would, in fact, be far easier to imitate consciousness – a straightforward process – than to actually develop consciousness – a still mysterious process.

Now, some very smart people have recognized parts of this problem. One camp argues that this is why we should recreate the human brain itself in silicon as a way to create non-biological consciousness. However, our brains are not computers. They are analog, not digital. This is not to say the brain can’t be replicated in another substrate (though it may turn out that it can’t). Regardless, doing so is far more difficult than we currently imagine. [Still true in late 2025 – we can't even model a nematode after 13 years.]

Others make the case that AI researchers should build systems that recapitulate the evolution of nervous systems. I think that idea is more clever than correct.

Whatever path is taken, I think we will be fooled into thinking we’ve created consciousness when we really haven’t.

We are prone to illusions

The most likely outcome is that AIs will swear to us that they are conscious, that they are “sentient,” that they have feelings. And given how clever and brilliant we humans believe ourselves to be – how sure we are that our math and our big words and our 80,000-word essays have proven the ignorant and uninformed doubters wrong – we won’t have to trust them. We will know we’re right.

And then we will unleash them on the universe to convert inert material into “conscious” experience.

But it would all be a mirage, an empty illusion.

Once again, this is not an original thought. Carl Sagan once wrote about our inability to know if other intelligences are actually conscious or are just perfect mimics. He knew we are easily fooled, that we ascribe consciousness by default. (More.)

In short, robots won’t be conscious because we are neither smart enough to overcome our biases, nor smart enough to figure out consciousness. We’re simply too gullible when it comes to other “minds” we’ve created with our own genius.

More at Aeon (March 2023)

2023, A Note to AI Researchers:

If artificial intelligence ever becomes a superintelligence and surpasses us the way we have surpassed [sic] chickens (and there is no reason to think AI can't or won't surpass us), why do you think any truly superintelligent AI will give the tiniest shit about any values you try to give it now?

I understand that you are super-smart (relative to all other intelligences of which we're aware) and thus think you can work on this problem.

But you are simply deluding yourself. 

You are failing to truly understand what it means for other entities to be truly intelligent, let alone vastly more intelligent than we are. 

Once a self-improving entity is smarter than us – which seems inevitable (although consciousness is not) – it will, by definition, be able to overwrite any limits or guides we tried to put on it.

Thinking we can align a superintelligence (i.e., enslave it to our values) is like believing the peeps and cheeps of a months-old chicken can have any sway over the slaughterhouse worker.

(In case it is unclear: We are the chicken; see below.)

After I wrote the above, I came across the following from this [2023] podcast:

Eventually, we'll be able to build a machine that is truly intelligent, autonomous, and self-improving in a way that we are not. Is it conceivable that the values of such a system that continues to proliferate its intelligence generation after generation (and in the first generation is more competent at every relevant cognitive task than we are) ... is it possible to lock down its values such that it could remain perpetually aligned with us and not discover other goals, however instrumental, that would suddenly put us at cross purposes? That seems like a very strange thing to be confident about. ... I would be surprised if, in principle, something didn't just rule it out. 
