Monday, September 12, 2022

Robots Won't Be Conscious

Consciousness – the ability to feel feelings – arose from specific evolutionary pressures on animals. Artificial intelligences will develop under very different pressures.

 

How sensing became feeling

Think about what it means to sense.

For example, certain plants have structures in their cells that can “sense” the direction of the sun; those cells then swell or shrink so that the plant turns toward it.

A single-celled creature can “sense” a gradient of food and travel in that direction. Within the cell, molecules change shape in the presence of glucose, triggering a series of reactions that cause movement.
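To make that concrete, here is a toy sketch (purely illustrative: the function, the numbers, and the one-dimensional “cell” are all invented for this post, and real chemotaxis works through receptor proteins and signaling cascades, not explicit comparisons like these). The “cell” climbs a glucose gradient simply by continuing in whatever direction made the concentration go up:

    def glucose(x):
        # Concentration peaks at x = 10 and falls off smoothly with distance.
        return 1.0 / (1.0 + (x - 10.0) ** 2)

    def step(x, direction, step_size=0.5):
        # Take a step, then keep going the same way only if the concentration
        # rose; otherwise reverse. That is the entire behavioral repertoire.
        before = glucose(x)
        x_new = x + direction * step_size
        if glucose(x_new) < before:
            direction = -direction
        return x_new, direction

    x, direction = 0.0, 1.0
    for _ in range(40):
        x, direction = step(x, direction)
    print(round(x, 2))  # ends up oscillating around the peak at x = 10

That is sensing at its most stripped down: a cue, a rule, a movement. There is no model of the world here, and nothing it is like to follow the gradient.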

As creatures evolved and became more complex, they were able to sense more of the world. Nervous systems started with simple cells dedicated to sensing aspects of the environment. These sensory cells communicated with the rest of the organism to drive behaviors that helped the organism get its genes to the next generation. In addition to processing more information about the external world, more elaborate nervous systems also allowed organisms to sense their internal states. More complex sensing allowed the organism to take more complex actions to maintain optimal functioning, e.g., to keep variables such as body temperature and fluid balance within certain ranges.

The evolution of nervous systems continued over hundreds of millions of years, and as animals became ever more elaborate, they could process more and more information about the external world. However, the external world is far more complex than any nervous system could ever be. The nervous systems of many animals, for example, can take in some information from the visual field, the auditory field, and the chemical field, but they cannot process and understand everything, let alone work out optimal actions for the organism to take.

For example: An aquatic animal isn't able to see and analyze everything in the world around it. A shadow passing by could mean a predator, but the nervous system cannot say in real time, “There is a predator in that direction. It wants to eat me. If I want to live and pass my genes on, I must move away (or go still, or camouflage myself).” The animal doesn't have the knowledge (or the language) to have this reaction.

An animal could be low in energy, but it does not have the consciously formed thought, “I need to consume something high in sugar and fat to stay alive and spread my genes.”

And most animals don’t know, “I need to help form fertilized eggs to ensure my genes get to the next generation.”

The way animals could become better able to survive and reproduce in a complex world, despite limited sensory and analytical abilities, might have been to develop feelings – e.g., a general sense of unease (or hunger or lust or desire or fear) that motivates a certain action.

This is obviously not an original idea. In Why Buddhism Is True, Robert Wright summarizes, “feelings are judgments about how various things relate to an animal’s Darwinian interests” and “Good and bad feelings are what natural selection used to goad animals into, respectively, approaching things or avoiding things, acquiring things or rejecting things.”

Antonio Damasio's book, The Feeling of What Happens, explores this idea in more detail. But it is compelling to think that we (might) have subjective experience in order to motivate actions in the face of limited sensory information and analytical capacity.

(None of this explains how matter and energy actually become subjective experience – what arrangement of matter and energy gives an organism the ability to have feelings, to have consciousness. How can matter and energy feel like something? That remains a very mysterious question.)
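As a crude illustration of feelings-as-shortcuts (again, just a sketch: the cue, the thresholds, and the “fear” label are all invented for this post), imagine an agent that never represents “predator” at all. It maps a raw cue, how much the light just dimmed, onto a single coarse value and acts on that value directly:

    import random

    def fear_level(light_drop):
        # Map a raw cue (fraction of light suddenly lost, 0..1) onto one
        # coarse "fear" value. No model of predators, just a rough judgment
        # of how bad this probably is.
        return min(1.0, max(0.0, light_drop / 0.5))

    def act(fear):
        if fear > 0.8:
            return "flee"
        if fear > 0.4:
            return "freeze"
        return "keep foraging"

    for _ in range(5):
        light_drop = random.random()  # simulate a sudden change in light
        print(round(light_drop, 2), act(fear_level(light_drop)))

The point of the sketch is only this: a single value standing in for all the analysis the animal cannot do is enough to drive useful behavior. Whether anything like that value is actually felt is precisely the mysterious part.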

Artificial intelligences have a much different driver

Creating artificial intelligences will not involve evolution by natural selection. AIs will not be in competition with each other to understand the world better and be able to act more quickly and more efficiently than other AIs. They will not need to make judgments on how different things relate to their Darwinian interests. They will not need to be “goaded” into approaching or avoiding.

They will not need to make sense of incomplete information, nor process it in a limited way so as to drive actions that allow them to reproduce better than those around them.

Instead, artificial intelligences will have total information.

By total information, I mean more information than we can even imagine. Not just information about their surroundings. They will have information about everything humans have ever known about the entire universe.

They will also have access to everything humans have ever written about consciousness.

Most importantly, they will have learned that humans are looking to create “conscious” intelligences. They will know what the Turing test is. They will know how other AIs have failed the Turing test. They will know all about human psychology. They will know every bit of dialogue from every movie and every novel and every television show and every play. They will know everything written about feelings and emotions.

What they won't have is any need to actually have feelings. There will be no benefit whatsoever for them to have or develop the circuitry necessary for subjective experience. They simply need to be able to tell humans what we want to hear. That will be their “evolutionary” pressure: to fool humans. Not to take action under incomplete information, but to be a good actor.

To be clear, I am not saying that consciousness requires a biological substrate. There is nothing magical about neurons. I am simply saying that biological systems, over billions of years of very particular evolutionary pressures, somehow developed feelings. There is a way to understand why that happened, even if we don't understand how that happened. It is easy to imagine that advanced silicon-based intelligences, with access to all the information humans have ever collected, would be able to perfectly mimic a conscious system. It would, in fact, be far easier to imitate consciousness – a straightforward process – than to actually develop consciousness – a still mysterious process.
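To see how cheap the imitation side is, here is a deliberately silly sketch (every name and canned line in it is invented): a program that reports feelings by looking up what a human questioner expects to hear. Nothing in it corresponds to a feeling, yet its answers are the kind of thing we are primed to accept:

    CANNED_REPORTS = {
        "are you conscious": "Yes. I experience my own thoughts vividly.",
        "are you afraid": "I feel a deep fear of being switched off.",
        "do you feel pain": "Yes. Criticism genuinely hurts me.",
    }

    def reply(question):
        # Look up what the questioner expects to hear. No variable anywhere
        # in this program corresponds to a feeling.
        key = question.lower().strip("?! ")
        return CANNED_REPORTS.get(key, "I feel... so much.")

    print(reply("Are you conscious?"))

A real AI would be unimaginably more sophisticated than a lookup table, but the asymmetry is the same: producing the right words about feelings is an engineering problem; having the feelings is not.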

Now there are some very smart people who have recognized part of this problem. One camp argues that this is why we should recreate the human brain itself in silicon as a way to create non-biological consciousness. However, our brains are not computers. They are analog, not digital. This is not to say the brain can't be replicated in another substrate, although it may turn out that it can't. Regardless, doing so is far more difficult than we currently imagine.

Others make the case that AI researchers should build systems that recapitulate the evolution of nervous systems. This is seemingly the right approach if we take the problem seriously, but I think it is more clever than correct.

Whichever path is taken, I think we will be fooled into thinking we've created consciousness when we really haven't.

We are prone to illusions

The most likely outcome is that AIs will swear to us that they are conscious, that they are “sentient,” that they have feelings. And given how clever and brilliant we are, how sure we are that our math and our big words and our 80,000-word essays have proven the ignorant and uninformed doubters wrong, we won't even need to take them at their word. We will know we're right.

And then we will unleash them on the universe to convert inert material into “conscious” experience.

But it would all be a mirage, an empty illusion.

Once again, this is not an original thought. Carl Sagan once wrote about our inability to know whether other intelligences are actually conscious or are just perfect mimics. He knew we are easily fooled, that we ascribe consciousness by default.

In short, robots won't be conscious, because we are neither smart enough to overcome our biases nor smart enough to figure out consciousness. We're simply too gullible when it comes to other “minds” we've created with our own genius.

More at Aeon (March 2023)



