A group of researchers in Milan, Italy, have come up with a few new
varieties of evolution and learning. Their methods fill a few holes in
Ackley's proposed "space of all possible types of computation." Because
they were inspired by the collective behavior of ant colonies, the Milan
group call their searches "Ant Algorithms."
Ants have distributed parallel systems all figured out. Ants are the
history of social organization and the future of computers. A colony may
contain a million workers and hundreds of queens, and the entire mass of
them can build a city while only dimly aware of one another. Ants can
swarm over a field and find the choicest food in it as if the swarm were
a large compound eye. They weave vegetation together in coordinated
parallel rows, and collectively keep their nest at a steady temperature,
although not a single ant has ever lived who knows how to regulate
temperature.
An army of ants too dumb to measure and too blind to see far can rapidly
find the shortest route across a very rugged landscape. This calculation
perfectly mirrors the evolutionary search: dumb, blind, simultaneous
agents trying to optimize a path on a computationally rugged landscape.
Ants are a parallel processing machine.
Real ants communicate with one another through a chemical system of
pheromones. Ants apply pheromones to each other and to their
environment. These aromatic smells dissipate over time. The odors can
also be relayed by a chain of ants picking up a scent and
remanufacturing it to pass on to others. Pheromones can be thought of as
information broadcast or communicated within the ant system.
The Milan group (Alberto Colorni, Marco Dorigo, and Vittorio Maniezzo)
constructed formulas modeled on ant logic. Their virtual ants ("vants")
were dumb processors in a giant community operating in parallel. Each
vant had a meager memory, and could communicate locally. Yet the rewards
of doing well were shared by others in a kind of distributed
computation.
The Italians tested their ant machine on a standard benchmark, the
traveling salesman problem. The riddle was: what is the shortest route
between a large number of cities, if you can only visit each city once?
Each virtual ant in the colony would set out rambling from city to city
leaving a trail of pheromones. The shorter the path between cities, the
less the pheromone evaporated. The stronger the pheromone signal, the
more other ants followed that route. Shorter paths were thus
self-reinforcing. Run for 5,000 rounds or so, the ant group-mind would
evolve a fairly optimal global route.
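To make the mechanics concrete, here is a minimal sketch in Python of an ant search for the traveling salesman problem, in the spirit of what the text describes. It is an illustration, not the Milan group's published formulas: the function name ant_tsp, the parameters alpha, beta, rho, and q, and the rule of uniform evaporation plus length-weighted reinforcement are all assumptions chosen to mirror the behavior above, in which short routes keep a stronger scent and stronger scents attract more ants.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour that visits every city once."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def ant_tsp(dist, n_ants=20, n_rounds=5000,
            alpha=1.0, beta=2.0, rho=0.5, q=100.0):
    """Evolve a short closed tour over the (positive) distance matrix `dist`."""
    n = len(dist)
    # Every edge starts with a small, uniform pheromone level.
    pher = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")

    for _ in range(n_rounds):
        tours = []
        for _ in range(n_ants):
            # Each "vant" rambles from city to city, visiting each one once.
            start = random.randrange(n)        # starting cities uniformly distributed
            tour = [start]
            unvisited = set(range(n)) - {start}
            while unvisited:
                here = tour[-1]
                # Prefer edges that are both short and heavily scented.
                weights = [(c, (pher[here][c] ** alpha) *
                               ((1.0 / dist[here][c]) ** beta))
                           for c in unvisited]
                total = sum(w for _, w in weights)
                pick, acc = random.uniform(0.0, total), 0.0
                for c, w in weights:
                    acc += w
                    if acc >= pick:
                        tour.append(c)
                        unvisited.remove(c)
                        break
            tours.append(tour)

        # Pheromone evaporates everywhere ...
        for i in range(n):
            for j in range(n):
                pher[i][j] *= (1.0 - rho)
        # ... then each ant reinforces its own tour, shorter tours more
        # strongly, which is what makes short routes self-reinforcing.
        for tour in tours:
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pher[a][b] += q / length
                pher[b][a] += q / length

    return best_tour, best_len
```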
The Milan group played with variations. Did it make any difference if
the vants all started at one city or were uniformly distributed?
(Distributed was better.) Did it make any difference how many vants one
ran concurrently? (More was better until you hit the ratio of one ant
for every city, when the advantage peaked.) By varying parameters, the
group came up with a number of computational ant searches.
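As a usage sketch under the same assumptions, the snippet below runs the hypothetical ant_tsp from above on a random set of cities, with starting points uniformly distributed and one vant per city, the ratio the text reports as the point of peak advantage.

```python
import random

# Hypothetical example: 30 cities scattered at random in a unit square.
random.seed(0)
n = 30
pts = [(random.random(), random.random()) for _ in range(n)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
        for ax, ay in pts]

# One ant per city; fewer rounds than the text's 5,000 to keep it quick.
tour, length = ant_tsp(dist, n_ants=n, n_rounds=200)
print("best tour length:", round(length, 3))
```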
Ant algorithms are a type of Lamarckian search. When one ant stumbles
upon a short route, that information is indirectly broadcast to the
other vants by the trail's pheromone strength. In this way learning in
one ant's lifetime is indirectly incorporated into the whole colony's
inheritance of information. Individual ants effectively broadcast what
they have learned into their hive. Broadcasting, like cultural teaching,
is a part of Lamarckian search. Ackley: "There are ways to exchange
information other than sex. Like the evening news."
The cleverness of the ants, both real and virtual, is that the amount of
information invested in "broadcasting" is very small, and the broadcast
itself is very local and very weak. The notion of introducing weak broadcasting
into evolution is quite appealing. If there is any Lamarckism in earthly
biology it is buried deep. But there remains a universe full of strange
types of potential computation that might employ various modes of
Lamarckian broadcasting. I know of programmers fooling around with
algorithms to mimic "memetic" evolution -- the flow of ideas (memes) from
one mind to another, trying to capture the essence and power of cultural
evolution. Out of all the possible ways to connect the nodes in
distributed computers, only a very few, such as the ant algorithms, have
even been examined.
As late as 1990, parallel
computers were derided by experts as controversial, specialized, and
belonging to the lunatic fringe. They were untidy and hard to program. The
lunatic fringe disagreed. In 1989, Danny Hillis boldly made a widely
publicized bet with a leading computer expert that as early as 1995,
more bits per month would be processed by parallel machines than by
serial machines. He looks to be right. As serial computers audibly
groaned under the burden of pushing complex jobs through the tiny funnel
of von Neumann's serial processor, a change in expert opinion suddenly
swept through the computer industry. Peter Denning signaled the new
perspective when he wrote in a paper published in Science ("Highly
Parallel Computation," November 30, 1990), "Highly parallel computing
architectures are the only means to achieve the computational rates
demanded by advanced scientific problems." John Koza of Stanford's
Computer Science Department says flatly, "Parallel computers are the
future of computing. Period."
But parallel computers remain hard to manage. Parallel software is a
tangled web of horizontal, simultaneous causes. You can't check such
nonlinearity for flaws since it's all hidden corners. There is no
narrative to step through. The code has the integrity of a water
balloon, yielding in one spot as another bulges. Parallel computers can
easily be built but can't be easily programmed.
Parallel computers embody the challenge of all distributed swarm
systems, including phone networks, military systems, the planetary
24-hour financial web, and large computer networks. Their complexity is
taxing our ability to steer them. "The complexity of programming a
massively parallel machine is probably beyond us," Tom Ray told me. "I
don't think we'll ever be able to write software that fully uses the
capacity of parallelism."
That little dumb creatures working in parallel can "write" better software
than humans can suggests to Ray a solution to our desire for parallel
software. "Look," he says, "ecological interactions are just parallel
optimization techniques. A multicellular organism essentially runs
massively parallel code of an astronomical scale. Evolution can 'think'
of parallel programming ways that would take us forever to think of. If
we can evolve software, we'll be way ahead." When it comes to
distributed network kinds of things, Ray says, "Evolution is the
natural way to program."
The natural way to program! That's an ego-deflating lesson. Humans
should stick to what they do best: small, elegant, minimal systems that
are fast and deep. Let natural evolution (artificially injected) do the
messy big work.