A brain and body are made the same way. From the bottom up.
Instead of towns, you begin with simple behaviors -- instincts and
reflexes. You make a little circuit that does a simple job, and you get
a lot of them going. Then you overlay a secondary level of complex
behavior that can emerge out of that bunch of working reflexes. The
original layer keeps working whether the second layer works or not. But
when the second layer manages to produce a more complex behavior, it
subsumes the action of the layer below it.
Here is the generic recipe for distributed control that Brooks's
mobot lab developed. It can be applied to most creations:

1) Do simple things first.
2) Learn to do them flawlessly.
3) Add new layers of activity over the results of the simple tasks.
4) Don't change the simple things.
5) Make the new layer work as flawlessly as the simple.
6) Repeat, ad infinitum.
This script could also be called a recipe for managing complexity of
any type, for that is what it is.
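To see the recipe as a working mechanism, here is a minimal sketch of a layered controller in the spirit of Brooks's subsumption architecture. The class names, the two example layers, and the sensor dictionary are illustrative inventions, not code from the mobot lab; the point is only that each layer keeps working on its own, and a higher layer, when it has something to say, subsumes the one below.

```python
class Layer:
    """One behavior circuit. It always runs; a higher layer may
    subsume (override) its output."""
    def act(self, sensors):
        raise NotImplementedError

class Cruise(Layer):
    # Steps 1-2: the simplest reflex, made to work flawlessly alone.
    def act(self, sensors):
        return "move_forward"

class Avoid(Layer):
    # Step 3: a new layer overlaid on the working reflex. It speaks
    # up only when it has a more complex behavior to offer.
    def act(self, sensors):
        return "turn_away" if sensors.get("obstacle_ahead") else None

class SubsumptionController:
    """Layers are listed lowest first and never modified once they
    work (step 4); an active higher layer subsumes the action of
    every layer below it."""
    def __init__(self, layers):
        self.layers = layers

    def step(self, sensors):
        for layer in reversed(self.layers):   # highest layer first
            action = layer.act(sensors)
            if action is not None:
                return action                 # subsume everything below
        return None

robot = SubsumptionController([Cruise(), Avoid()])
print(robot.step({"obstacle_ahead": False}))  # -> move_forward
print(robot.step({"obstacle_ahead": True}))   # -> turn_away
```

Step 6 is just more of the same: a third layer, say seek-light, drops on top without touching the two below.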
What you don't want is to organize the work of a nation by a
centralized brain. Can you imagine the string of nightmares you'd stir
up if you wanted the sewer pipe in front of your house repaired and you
had to call the Federal Sewer Pipe Repair Department in Washington,
D.C., to make an appointment?
The most obvious way to do something complex, such as govern 100
million people or walk on two skinny legs, is to come up with a list of
all the tasks that need to be done, in the order they are to be done,
and then direct their completion from a central command, or brain. The
former Soviet Union's economy was wired in this logical but immensely
impractical way. Its inherent instability of organization was evident
long before it collapsed.
Central-command bodies don't work any better than central-command
economies. Yet a centralized command blueprint has been the main
approach to making robots, artificial creatures, and artificial
intelligences. It is no surprise to Brooks that braincentric folks
haven't even been able to raise a creature complex enough to collapse.
Brooks has been trying to breed systems without central brains so
that they would have enough complexity to be worth a collapse. In one paper he
called this kind of intelligence without centrality "intelligence
without reason," a delicious yet subtle pun. For not only would this
type of intelligence -- one constructed layer by layer from the bottom
up -- not have the architecture of "reasoning," it would also emerge from
the structure for no apparent reason at all.
The USSR didn't collapse simply because a central command model
strangled its economy. Rather, it collapsed because any centrally
controlled complexity is unstable and inflexible. Institutions,
corporations, factories, organisms, economies, and robots will all fail
to thrive if designed around a central command.
Yes, I hear you say, but don't I as a human have a centralized
brain?
Humans have a brain, but it is not centralized, nor does the brain
have a center. "The idea that the brain has a center is just wrong. Not
only that, it is radically wrong," claims Daniel Dennett. Dennett is a
Tufts University professor of philosophy who has long advocated a
"functional" view of the mind: that the functions of the mind, such as
thinking, come from non-thinking parts. The semimind of an insectlike
mobot is a good example of both animal and human minds. According to
Dennett, there is no place that controls behavior, no place that creates
"walking," no place where the soul of being resides. Dennett: "The thing
about brains is that when you look in them, you discover that there's
nobody home."
Dennett is slowly persuading many psychologists that consciousness
is an emergent phenomenon arising from the distributed network of many
feeble, unconscious circuits. Dennett told me, "The old model says there
is this central place, an inner sanctum, a theater somewhere in the
brain where consciousness comes together. That is, everything must feed
into a privileged representation in order for the brain to be conscious.
When you make a conscious decision, it is done in the summit of the
brain. And reflexes are just tunnels through the mountain that avoid the
summit of consciousness."
From this logic (very much the orthodox dogma in brain science) it
follows, says Dennett, that "when you talk, what you've got in your
brain is a language output box. Words are composed by some speech
carpenters and put in the box. The speech carpenters get directions from
a sub-system called the 'conceptualizer' which gives them a preverbal
message. Of course the conceptualizer has to get its message from some
source, and so on into an infinite regress of control."
Dennett calls this view the "Central Meaner": meaning descends from
some central authority in the brain. He describes this perspective
applied to language-making as the "idea that there is this sort of
four-star general that tells the troops, 'Okay, here's your task. I want
to insult this guy. Make up an English insult on the appropriate topic
and deliver it.' That's a hopeless view of how speech happens."
Much more likely, says Dennett, is that "meaning emerges from
distributed interaction of lots of little things, no one of which can
mean a damn thing." A whole bunch of decentralized modules produce raw
and often contradictory parts -- a possible word here, a speculative word
there. "But out of the mess, not entirely coordinated, in fact largely
competitive, what emerges is a speech act."
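As a cartoon of this picture (my construction for illustration, not Dennett's), imagine a handful of mindless word-proposers, each shouting a fragment at a random strength. No module knows the sentence, no general composes the insult, yet a speech act falls out of the competition:

```python
import random

def demon(fragments):
    """A mindless proposer: offers one fragment at a random strength."""
    def propose():
        return random.choice(fragments), random.random()
    return propose

# competing proposers for each slot of a crude insult (all invented)
slots = [
    [demon(["you"]), demon(["thou"])],
    [demon(["lumbering", "witless"]), demon(["preposterous"])],
    [demon(["oaf", "wardrobe", "haddock"])],
]

def speech_act():
    words = []
    for proposers in slots:
        shouts = [p() for p in proposers]          # everyone competes
        word, _ = max(shouts, key=lambda s: s[1])  # loudest wins, no general
        words.append(word)
    return " ".join(words)

print(speech_act())  # a different draft nearly every run
```

Run it twice and two different utterances emerge from the same starting point; the squabble, not a composer, decides what gets said.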
We think of speech in literary fashion as a stream of consciousness
pouring forth like radio broadcasts from a News Desk in our mind.
Dennett says, "There isn't a stream of consciousness. There are multiple
drafts of consciousness; lots of different streams, no one of which will
be singled out as the stream." In 1874, pioneer psychologist William
James wrote, "...the mind is at every stage a theatre of simultaneous
possibilities. Consciousness consists in the comparisons of these with
each other, the selection of some, and the suppression of the rest...."
The idea of a cacophony of alternative wits combining to form what
we think of as a unified intelligence is what Marvin Minsky calls the
"society of mind." Minsky says simply, "You can build a mind from many
little parts, each mindless by itself." Imagine, he suggests, a simple
brain composed of separate specialists each concerned with some
important goal (or instinct) such as securing food, drink, shelter,
reproduction, or defense. Singly, each is a moron; but together,
organized in many different arrangements in a tangled hierarchy of
control, they can create thinking. Minsky emphatically states, "You
can't have intelligence without a society of mind. We can only get smart
things from stupid things."
The society of mind doesn't sound very much different from a
bureaucracy of mind. In fact, without evolutionary and learning
pressures, the society of mind in a brain would turn into a
bureaucracy. However, as Dennett, Minsky, and Brooks envision it, the
dumb agents in a complex organization are always both competing and
cooperating for resources and recognition. There is a very lax
coordination among the vying parts. Minsky sees intelligence as
generated by "a loosely-knitted league of almost separate agencies with
almost independent goals." Those agencies that succeed are preserved,
and those that don't vanish over time. In that sense, the brain is no
monopoly, but a ruthless cutthroat ecology, where competition breeds an
emergent cooperation.
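Here is a toy version of that ecology (the agent names and scoring are invented for illustration, not Minsky's code): each moronic agent bids for control from its one goal, the strongest bid wins the moment, and agencies that never win are eventually culled.

```python
import random

class Agent:
    """A one-goal moron: all it can do is estimate its goal's urgency."""
    def __init__(self, goal):
        self.goal = goal
        self.wins = 0

    def bid(self, state):
        # urgency of my single goal, plus a little noise
        return state.get(self.goal, 0.0) + random.uniform(0.0, 0.2)

society = [Agent(g) for g in ("food", "drink", "shelter", "defense")]

def act(state, society):
    winner = max(society, key=lambda a: a.bid(state))  # cutthroat competition
    winner.wins += 1
    return winner.goal

state = {"food": 0.9, "drink": 0.3, "shelter": 0.1, "defense": 0.2}
behavior = [act(state, society) for _ in range(20)]
print(behavior)  # mostly "food", with occasional upsets from the noise

# selection pressure: agencies that never succeed vanish over time
society = [a for a in society if a.wins > 0]
```

Note that the noise in each bid means the same state does not always produce the same behavior, which leads directly to the next point.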
The slightly chaotic character of mind goes even deeper, to a degree
our egos may find uncomfortable. It is very likely that intelligence, at
bottom, is a probabilistic or statistical phenomenon -- on par with the law
of averages. The distributed mass of ricocheting impulses that forms the
foundation of intelligence forbids deterministic results for a given
starting point. Instead of repeatable results, outcomes are merely
probabilistic. Arriving at a particular thought, then, entails a bit of
luck.
Dennett admits to me, "The thing I like about this theory is that
when people first hear about it they laugh. But then when they think
about it, they conclude maybe it is right! And the more they think
about it, the more they realize that, no, not maybe right: some version
of it has to be right!"
As Dennett and others have noted, the odd occurrence of Multiple
Personality Syndrome (MPS) in humans depends at some level on the
decentralized, distributed nature of human minds. Each personality -- Billy
vs. Sally -- uses the same pool of personality agents, the same community
of actors and behavior modules to generate visibly different personas.
Humans with MPS present a fragmented facet (one grouping) of their
personality as a whole being. Outsiders are never sure who they are
talking to. The patient seems to lack an "I."
But isn't this what we all do? At different times of our life, and
in different moods, we too shift our character. "You are not the person
I used to know," screams the person we have hurt by manifesting a
different cut of our inner society. The "I" is a gross extrapolation that we use
as an identity for ourselves and others. If there weren't an "I" or "Me"
in every person, each of us would quickly invent one. And that, Minsky
says, is exactly what we do: there is no "I," so we each invent one.
There is no "I" for a person, for a beehive, for a corporation, for
an animal, for a nation, for any living thing. The "I" of a vivisystem
is a ghost, an ephemeral shroud. It is like the transient form of a
whirlpool held upright by a million spinning atoms of water. It can be
scattered with a fingertip.
But a moment later, the shroud reappears, driven together by the
churning of a deep distributed mob. Is the new whirlpool a different
form, or the same? Are you different after a near-death experience, or
only more mature? If the chapters in this book were arranged in a
different order, would it be a different book or the same? When you
can't answer that question, then you know you are talking about a
distributed system.