The Little Man in the Brain
by Sadaputa Dasa
During a TV show entitled “Inside Information,” vision scientist V. S. Ramachandran of the University of California at San Diego made some interesting points about how we see. He said that if you ask the man in the street how vision works, he will say that there is an image on the retina of the eye. The optic nerve faithfully transmits this image to a screen in the brain, in what we call the visual cortex. And that image is what you see.
Ramachandran pointed out that this explanation leads to a logical fallacy. If you create an image inside the head, then you need another person in the head—a little man in the brain—who looks at that image. Then you have to postulate an even smaller person inside his head to explain how he sees, and so on, ad infinitum. This is obvious nonsense, and Ramachandran said that inside the brain there really is no replica of the external world. Rather, there is an abstract, symbolic description of that world. Brain scientists are like cryptographers trying to crack the code the brain uses in perceiving its environment.
So how does perception work? Suppose you are looking at a car traveling down a street. You perceive the shape of the car, its color, and its motion all at once. You may realize at once that it’s red, that it’s a Volkswagen bug, and that it’s slow enough and far enough away so you’ll have time to cross the street in front of it.
Recent research in brain science shows that the brain houses three separate visual systems to see shape, color, and motion. All three systems use information coming down the optic nerves from the eyes, but the systems are distinct both anatomically and functionally. The systems are named after the complex anatomical pathways they occupy in the brain.
The parvo-interblob-pale-stripe system deals with color contrast along borders of objects, but not color per se. It responds to the shapes of objects, but says nothing about their colors. The blob-thin-stripe-V4 system determines colors and shades of gray, but it has low resolution for shapes. The magno-4B-thick-stripe-MT system tells about movement and depth, but it’s colorblind and doesn’t react to stationary images.1 All three systems work together when you see the Volkswagen coming down the street.
To illustrate how such a visual system works, I’ve devised a simple example with computer logic. The figure above shows an “eye” consisting of a plastic tube with ten photoelectric cells in a vertical row. The tube has room for one to ten small plastic balls. The photoelectric cells detect the balls. Each output wire from a photocell is “on” if a ball is present in front of the cell, and “off” if a ball is not present.
The ten wires from the cells form a kind of “optic nerve.” This nerve divides into three branches leading to three processing systems.
The multiple-of-2 system tells whether the number of balls is odd or even. It works through logic gates, represented by the gray and black triangles. The wires going to the top of a triangle are its input wires, and the wire going down from the triangle’s point is its output wire. A gray triangle represents a none-gate. Its output wire is on if none of its input wires are on, and otherwise it is off. A black triangle represents an all-gate. Its output wire is on if all its input wires are on, and not otherwise. The gates in the multiple-of-2 system are arranged so that the final output wire emerging from the bottom of the system is on if the number of balls is even, and off if the number is odd.
Similarly, the gates in the multiple-of-3 system are arranged so that the final output wire is on only if the number of balls is a multiple of 3 (i.e., 3, 6, or 9). In the multiple-of-5 system, the output wire is on only if the number of balls is a multiple of 5 (i.e., 5 or 10).
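The behavior of this gate network can be sketched in a few lines of code. The wiring below is a hedged reconstruction, not the layout shown in the article’s figure: it builds each subsystem out of the same two gate types by combining “exactly n balls” detectors, which is one of several arrangements that produce the same final outputs.

```python
def none_gate(*inputs):
    """Gray triangle: output is on iff none of the input wires are on."""
    return not any(inputs)

def all_gate(*inputs):
    """Black triangle: output is on iff all of the input wires are on."""
    return all(inputs)

def eye(num_balls, cells=10):
    """Balls stack from the bottom of the tube, so the first
    num_balls photocells are on and the rest are off."""
    return [i < num_balls for i in range(cells)]

def detect_exactly(n, wires):
    """On iff exactly n balls are present: the first n cells are
    all on and the remaining cells are all off."""
    return all_gate(all_gate(*wires[:n]), none_gate(*wires[n:]))

def subsystem(accepted, wires):
    """Final output wire of one subsystem: a none-gate over the
    detectors for every *other* count, so it comes on exactly
    when the ball count is one of the accepted values."""
    rejected = [n for n in range(len(wires) + 1) if n not in accepted]
    return none_gate(*(detect_exactly(n, wires) for n in rejected))

wires = eye(6)
print(subsystem({2, 4, 6, 8, 10}, wires))  # multiple-of-2 light: True
print(subsystem({3, 6, 9}, wires))         # multiple-of-3 light: True
print(subsystem({5, 10}, wires))           # multiple-of-5 light: False
```

With 6 balls in the tube, the multiple-of-2 and multiple-of-3 outputs come on and the multiple-of-5 output stays off, matching the experiment described later in the article.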
This network recognizes three distinct features of the number of balls.
It does this using three distinct subsystems of gates that operate in parallel, each subsystem using simple logical operations in response to binary information (represented by “on” and “off” or 1 and 0). These subsystems resemble the brain’s subsystems for recognizing the shape, color, and motion of an image. The brain’s subsystems use distinct sets of neurons, which work with binary information. (When a neuron is stimulated, it either fires or it doesn’t, with no response in between.)
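The parenthetical point about all-or-none firing is the basis of the classic McCulloch-Pitts neuron model, which the article does not mention but which shows concretely how a single binary neuron could stand in for either gate type. The sketch below is an illustration under that model’s assumptions, not a claim about real neural wiring.

```python
def neuron(inputs, weights, threshold):
    """McCulloch-Pitts model: the neuron fires (True) iff the weighted
    sum of its binary inputs reaches the threshold; there is no graded
    response in between, matching the all-or-none firing of real neurons."""
    return sum(w for x, w in zip(inputs, weights) if x) >= threshold

def all_gate(inputs):
    """One neuron acting as an all-gate: every input must fire."""
    return neuron(inputs, [1] * len(inputs), len(inputs))

def none_gate(inputs):
    """One neuron acting as a none-gate: inhibitory (negative) weights
    mean any active input keeps the sum below the threshold of zero."""
    return neuron(inputs, [-1] * len(inputs), 0)

print(all_gate([True, True, True]))   # True
print(none_gate([False, False]))      # True
print(none_gate([True, False]))       # False
```

The negative weights play the role of inhibitory connections in real neurons; with only excitatory and inhibitory inputs and a firing threshold, any network of the article’s gates could in principle be rebuilt neuron for gate.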
Our example of a computer network gives some idea of how the brain can process information with which to respond to its environment. Data from the senses, encoded as patterns of nerve impulses, can travel to a wide variety of brain subsystems, where networks of neurons extract various kinds of information. This information can then be combined to yield further information, which in turn can be used to generate brain output.
But can this explain how we see? This view of the brain avoids an infinite regress of little men looking at screens in one another’s heads. And it gives us an idea of how the brain can identify complex patterns and respond to them. But it tells us nothing about how we are aware of a Volkswagen coming down the street.
Look again at our computer network model. We could easily build this model out of electrical hardware, and we could hook up the output wires from the three subsystems to colored lights labeled 2, 3, and 5. Suppose we did this, put 6 balls in the tube, and saw lights 2 and 3 turn on, and the light 5 stay off. Would the electrical network be aware that 6 is a multiple of 2 and 3? Is there any reasonable basis for saying the network would be aware of anything?
The answer is no. We can fully understand what the network is doing. We can understand the flows of electrical current within its wires and the operation of its logic gates. But this understanding tells us nothing about whether or not the network is aware of anything. And if someone were to declare that the network actually is conscious of something, we would be at a loss to understand how or why that should be.
This is all well and good with electrical networks. Perhaps they are completely devoid of consciousness. But what about human brains? When I see a Volkswagen coming down the street, I’m having a conscious experience, and I know directly that this is so. I assume that since other people are similar to me, they too have real conscious experiences. Can we understand this phenomenon in terms of networks of neurons in the brain?
The answer seems to be no. Our electrical network could be built using neurons instead of wires. That network would recognize patterns the same way, and we would understand it the same way. The essence of the network lies in the pattern of its logic gates, not in the substance making up these gates. But this means we can’t understand consciousness in neural networks any better than in electrical networks.
One might point out that there are more than 10 billion neurons in the cerebral cortex and only a few logic gates in our example. Couldn’t it be that consciousness emerges from the interaction of billions of neurons? Perhaps, but how? With billions of logical units in a network, one can certainly handle patterns much more complex than simple columns of 1 to 10 balls. But this tells us nothing about consciousness.
One idea is that consciousness may arise at the level where the brain organizes information from separate systems, like those for shape, color, and motion, and integrates it into one unified gestalt. One problem with this proposal: Does such unification actually occur? To write down a lot of information you need many letters, and if you code the information in patterns of nerve impulses, you need a lot of neurons to store it. No matter how much you try to compress it by careful coding, it remains spread out and not truly unified. And if you mix together all the information in one spread-out region of the cerebral cortex, you have in effect re-created the screen in the original story of the little man in the brain.
The basic fallacy of the little man in the brain argument is that it assumes implicitly that consciousness can be understood in physical terms. One tries to explain consciousness by describing a machine that creates a certain display of information. Then one recognizes that the mere presence of displayed information fails to account for consciousness of that information. Then one proposes another mechanism to interpret the information and finally generate consciousness. When that attempt also fails, one takes refuge in the overwhelming complexity of the brain and says that a consciousness-producing mechanism must be hidden in there somewhere. All we have to do is find it.
One way to escape from the little man fallacy is to forget about consciousness and restrict our attention to the brain’s data processing. But this leaves a crucial aspect of life permanently outside the domain of science.
Another way to escape the fallacy is to consider that consciousness just might be due to a nonphysical entity—dare we say a soul?—that reads the data displays of the brain just as we read the letters of a book.
Although this idea is anathema to scientists who insist that everything must obey known physical laws, it promises to greatly expand the frontiers of science. It could very well be true. And to realize its potential for enriching our scientific understanding, all we have to do is seriously consider it.