Centralized communication is not the only problem with a
central brain. Maintaining a central memory is equally debilitating. A
shared memory has to be updated rigorously, promptly, and accurately -- a
problem that many corporations can commiserate with. For a robot,
central command's challenge is to compile and update a "world model," a
theory, or representation, of what it perceives -- where the walls are, how
far away the door is, and, by the way, beware of the stairs over there.
What does a brain center do with conflicting information from many
sensors? The eye says something is coming, the ear says it is leaving.
Which does the brain believe? The logical way is to try to sort them
out. A central command reconciles arguments and recalibrates signals to
be in sync. In presubsumption robots, most of the great computational
resources of a centralized brain were spent in trying to make a coherent
map of the world based on multiple-vision signals. Different parts of
the system believed wildly inconsistent things about their world derived
from different readings of the huge amount of data pouring in from
cameras and infrared sensors. The brain never got anything done because
it never got everything coordinated.
So difficult was the task of coordinating a central world view that
Brooks discovered it was far easier to use the real world as its own
model: "This is a good idea as the world really is a rather good model
of itself." With no centrally imposed model, no one has the job of
reconciling disputed notions; they simply aren't reconciled. Instead,
various signals generate various behaviors. The behaviors are sorted out
(suppressed, delayed, activated) in the web hierarchy of subsumed
control.
In effect, there is no map of the world as the robot sees it (or as
an insect sees it, Brooks might argue). There is no central memory, no
central command, no central being. All is distributed. "Communication
through the world circumvents the problem of calibrating the vision
system with data from the arm," Brooks wrote. The world itself becomes
the "central" controller; the unmapped environment becomes the map. That
saves an immense amount of computation. "Within this kind of
organization," Brooks said, "very small amounts of computation are
needed to generate intelligent behaviors."
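To make the scheme concrete, here is a minimal Python sketch of subsumption-style control -- an illustration using assumed behavior names (Wander, AvoidObstacle), not Brooks's actual code. No layer stores a world model; each one simply reads the sensors again, and a higher layer that has something to say suppresses the layers beneath it.

```python
# A minimal sketch of subsumption-style control (hypothetical behaviors,
# not Brooks's code). No layer builds a world model; each one reads the
# sensors afresh, and a higher layer may suppress the ones below it.

class Behavior:
    def act(self, sensors):
        """Return a motor command, or None to stay quiet."""
        raise NotImplementedError

class Wander(Behavior):              # lowest layer: just keep moving
    def act(self, sensors):
        return "forward"

class AvoidObstacle(Behavior):       # higher layer: reflexively veer away
    def act(self, sensors):
        if sensors["range_cm"] < 20:     # something is close right now
            return "turn_left"
        return None                      # nothing to say; don't suppress

def subsumption_step(layers, sensors):
    # The highest layer with an opinion wins, suppressing everything below.
    for layer in reversed(layers):
        command = layer.act(sensors)
        if command is not None:
            return command
    return "stop"

layers = [Wander(), AvoidObstacle()]              # ordered bottom to top
print(subsumption_step(layers, {"range_cm": 12})) # -> "turn_left"
print(subsumption_step(layers, {"range_cm": 90})) # -> "forward"
```

The "intelligence" here is a handful of comparisons per cycle; everything else is the world itself, sensed afresh each time around.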
With no central organization, the various agents must perform or
die. One could think of Brooks's scheme as having, in his words,
"multiple agents within one brain communicating through the world to
compete for the resources of the robot's body." Only those that succeed
in doing something get the attention of other agents.
Astute observers have noticed that Brooks's prescription is an exact
description of a market economy: there is no communication between
agents, except that which occurs through observing the effects of
actions (and not the actions themselves) that other agents have on the
common world. The price of eggs is a message communicated to me by
hundreds of millions of agents I have never met. The message says (among
many other things): "A dozen eggs is worth less to us than a pair of
shoes, but more than a two-minute telephone call across the country."
That price, together with other price messages, directs thousands of
poultry farmers, shoemakers, and investment bankers in where to put
their money and energy.
Brooks's model, for all its radicalism in the field of artificial
intelligence, is really a model of how complex organisms of any type
work. We see a subsumption web hierarchy in all kinds of vivisystems.
He points out five lessons from building mobots. What you want is:
- Incremental construction -- grow complexity, don't install it
- Tight coupling of sensors to actuators -- reflexes, not thinking
- Modular independent layers -- the system decomposes into viable subunits
- Decentralized control -- no central planning
- Sparse communication -- watch results in the world, not wires
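The first of those lessons, incremental construction, can be seen by continuing the sketch above: a new competence is grown by stacking another (hypothetical) layer on top of the ones already working, without rewriting them.

```python
# Incremental construction, continuing the earlier sketch (hypothetical
# layer): a new competence is stacked on top; the working layers below
# are left untouched.

class SeekCharger(Behavior):       # new top layer: go home when power is low
    def act(self, sensors):
        if sensors.get("battery_pct", 100) < 15:
            return "head_to_charger"
        return None                # otherwise stay quiet and defer downward

layers.append(SeekCharger())       # the old robot keeps running underneath
print(subsumption_step(layers, {"range_cm": 90, "battery_pct": 10}))
# -> "head_to_charger"
```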
When Brooks crammed a bulky, headstrong monster into a tiny,
featherweight bug, he discovered something else in this miniaturization.
Before, the "smarter" a robot was to be, the more computer components it
needed, and the heavier it got. The heavier it got, the larger the
motors needed to move it. The heavier the motors, the bigger the
batteries needed to power it. The heavier the batteries, the heavier the
structure needed to move the bigger batteries, and so on in an
escalating vicious spiral. The spiral drove the ratio of thinking parts
to body weight in the direction of ever more body.
But the spiral worked even better in the other direction. The smaller
the computer, the lighter the motors, the smaller the batteries, the
smaller the structure, and the stronger the frame became relative to its
size. This also drove the ratio of brains to body towards a mobot with a
proportionally larger brain, small though its brain was. Most of
Brooks's mobots weighed less than ten pounds. Genghis, assembled out of
model car parts, weighed only 3.6 pounds. Within three years Brooks
would like to have a 1-mm (pencil-tip-size) robot. "Fleabots" he calls
them.
Brooks calls for an infiltration of robots not just on Mars but on
Earth as well. Rather than trying to bring as much organic life as possible into artificial life, Brooks says he's trying to bring as much artificial life as possible into real life. He wants to flood the world (and beyond) with
inexpensive, small, ubiquitous semi-thinking things. He gives the
example of smart doors. For only about $10 extra you could put a chip
brain in a door so that it would know you were about to go out, or it
could hear from another smart door that you are coming, or it could
notify the lights that you left, and so on. If you had a building full
of these smart doors talking to each other, they could help control the
climate, as well as help traffic flow. If we extended that invasion to all kinds of other apparatus we now think of as inert, putting fast, cheap, out-of-control intelligence into them, we would have a colony of sentient entities serving us, and learning how to serve us better.
When prodded, Brooks predicts a future filled with artificial
creatures living with us in mutual dependence -- a new symbiosis. Most of
these creatures will be hidden from our senses, and taken for granted,
and engineered with an insect approach to problems -- many hands make light
work, small work done ceaselessly is big work, individual units are
dispensable. They will outnumber us, as insects do. And in fact, his vision of robots is less that they will be R2D2s serving us beers than that they will be an ecology of unnamed things just out of sight.
One student in the Mobot Lab built a cheap, bunny-size robot that
watches where you are in a room and calibrates your stereo so it is
perfectly adjusted as you move around. Brooks has another small robot in
mind that lives in the corner of your living room or under the sofa. It
wanders around like the Collection Machine, vacuuming at random whenever
you aren't home. The only noticeable evidence of its presence is how
clean the floors are. A similar, but very tiny, insectlike robot lives
in one corner of your TV screen and eats the dust off it when the TV isn't on.
Everybody wants programmable animals. "The biggest difference
between horses and cars," says Keith Henson, a popular
techno-evangelist, "is that cars don't need attention every day, and
horses do. I think there will be a demand for animals that can be
switched on and off."
"We are interested in building artificial beings," Brooks wrote in a
manifesto in 1985. He defined an artificial being as a creation that can
do useful work while surviving for weeks or months without human
assistance in a real environment. "Our mobots are Creatures in the sense
that on power-up they exist in the world and interact with it, pursuing
multiple goals. This is in contrast to other mobile robots that are
given programs or plans to follow for a specific mission." Brooks was
adamant that he would not build toy (easy, simple) environments for his
beings, as most other roboticists had done, saying "We insist on building
complete systems that exist in the real world so that we won't trick
ourselves into skipping hard problems."
To date, one hard problem science has skipped is jump-starting a pure mind. If Brooks is right, it probably never will jump-start one. Instead it will grow a mind from a dumb body.
Almost every lesson from the Mobot Lab seems to teach that there is no
mind without body in a real unforgiving world. "To think is to act, and
to act is to think," said Heinz von Foerster, gadfly of the 1950s
cybernetic movement. "There is no life without movement."