Stuart Kauffman's simulations are as rigorous, original, and well-
respected among scientists as any mathematical model can be. Maybe more
so, because he is using a real (computer) network to model a
hypothetical network, rather than the usual reverse of using a
hypothetical to model the real. I grant, though, it is a bit of a
stretch to apply the results of a pure mathematical abstraction to
irregular arrangements of reality. Nothing could be more irregular than
online networks, biological genetic networks, or international economic
networks. But Stuart Kauffman is himself eager to extrapolate the
behavior of his generic test-bed to real life. The grand comparison
between complex real-world networks and his own mathematical simulations
running in the heart of silicon is nothing less than Kauffman's holy
grail. He says his models "smell like they are true." Swarmlike
networks, he bets, all behave similarly on one level. Kauffman is fond
of speculating that "IBM and E. coli both see the world in the same
way."
I'm inclined to bet in his favor. We own the technology to connect
everyone to everyone, but those of us who have tried living that way are
finding that we are disconnecting to get anything done. We live in an
age of accelerating connectivity; in essence we are steadily climbing
Kauffman's hill. But we have little to stop us from going over the top
and sliding into a descent of increasing connectivity but diminishing
adaptability. Disconnection is a brake to hold the system from
overconnection, to keep our cultural system poised on the edge of
maximal evolvability.
The art of evolution is the art of managing dynamic complexity.
Connecting things is not difficult; the art is finding ways for them to
connect in an organized, indirect, and limited way.
From his experiments with artificial life swarm models, Chris Langton,
Kauffman's Santa Fe Institute colleague, derived an abstract quality
(called the lambda parameter) that predicts the likelihood that a
particular set of rules for a swarm will produce a "sweet spot" of
interesting behavior. Systems built upon values outside this sweet spot
tend to stall in two ways. They either repeat patterns in a crystalline
fashion, or else space out into white noise. Those values within the
range of the lambda sweet spot generate the longest runs of interesting
behavior.
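For a cellular automaton, Langton's lambda is simply the fraction of rule-table entries that map a neighborhood to something other than the quiescent "dead" state. A minimal sketch (the function names here are illustrative, not Langton's):

```python
import random

def make_rule_table(num_states=4, radius=1, lam=0.5, quiescent=0, seed=1):
    """Randomly fill a CA rule table so that roughly a fraction `lam`
    of its entries map to a non-quiescent state (Langton's lambda)."""
    rng = random.Random(seed)
    neighborhoods = num_states ** (2 * radius + 1)  # one entry per neighborhood
    return [rng.randrange(1, num_states) if rng.random() < lam else quiescent
            for _ in range(neighborhoods)]

def measured_lambda(table, quiescent=0):
    """lambda = fraction of transitions that do NOT go to quiescence."""
    return sum(out != quiescent for out in table) / len(table)
```

Run a CA from tables built at lam near 0 and everything freezes into quiescence; near 1, the lattice boils into noise; the long runs of interesting behavior cluster at intermediate values of this one dial.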
By tuning the
lambda parameter Langton can tune a world so that evolution or learning
can unroll most easily. Langton describes the threshold between a frozen
repetitious state and a gaseous noise state as a "phase transition" -- the
same term physicists use to describe the transition from liquid to gas
or liquid to solid. The most startling result, though, is Langton's
contention that as the lambda parameter approaches that phase
transition -- the sweet spot of maximum adaptability -- it slows down. That
is, the system tends to dwell on the edge instead of zooming through it.
As it nears the place it can evolve the most from, it lingers. The image
Langton likes to raise is that of a system surfing on an endless perfect
wave in slow motion; the more perfect the ride, the slower time
goes.
This critical slowing down at the "edge" could help explain why a
precarious embryonic vivisystem could keep evolving. As a random system
neared the phase transition, it would be "pulled in" to rest at that
sweet spot where it would undergo evolution and would then seek to
maintain that spot. This is the homeostatic feedback loop making a lap
for itself. Except that since there is little "static" about the spot,
the feedback loop might be better named "homeodynamic."
Stuart Kauffman also speaks of "tuning" the parameters of his simulated
genetic networks to the "sweet spot." Out of all the uncountable ways to
connect a million genes, or a million neurons, some relatively few
setups are far more likely to encourage learning and adaptation
throughout the network. Systems balanced to this evolutionary sweet spot
learn fastest, adapt most readily, and evolve most easily. If Langton and
Kauffman are right, an evolving system will find that spot on its
own.
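Kauffman's simulated genetic networks are random Boolean networks: N genes, each reading K others through a random Boolean function, all updating in lockstep. A toy version (a sketch under my own naming, not Kauffman's actual code) finds the attractor cycle such a network settles into; sweeping K shows short, orderly cycles at low connectivity and wildly long ones as the wiring thickens:

```python
import random

def random_boolean_network(n, k, seed=0):
    """Wire n nodes, each reading k random inputs through a random
    Boolean lookup table (a toy Kauffman-style genetic network)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from its inputs' current values."""
    return tuple(
        tables[i][sum(state[src] << bit for bit, src in enumerate(inputs[i]))]
        for i in range(len(state))
    )

def attractor_length(n, k, seed=0, max_steps=5000):
    """Iterate from a random state until a state repeats; the gap
    between the two visits is the attractor's cycle length."""
    inputs, tables = random_boolean_network(n, k, seed)
    state = tuple(random.Random(seed + 1).randrange(2) for _ in range(n))
    seen = {}
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = step(state, inputs, tables)
    return None  # cycle longer than max_steps
```

Because the state space is finite, every such network must eventually cycle; what Kauffman found is that *how* it cycles — a few short, stable attractors versus astronomically long wanderings — depends sharply on where K sits.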
Langton discovered a clue as to how that may happen. He found that this
spot teeters right on the edge of chaotic behavior. He says that systems
that are most adaptive are so loose they are a hairsbreadth away from
being out of control. Life, then, is a system that is neither stagnant
with noncommunication nor grid-locked with too much communication.
Rather life is a vivisystem tuned "to the edge of chaos" -- that lambda
point where there is just enough information flow to make everything
dangerous.
Rigid systems can always do better by loosening up a bit, and turbulent
systems can always improve by getting themselves a little more
organized. Mitch Waldrop explains Langton's notion in his book
Complexity: if an adaptive system is not riding on the happy
middle road, you would expect brute efficiency to push it toward that
sweet spot. And if a system rests on the crest balanced between rigidity
and chaos, then you'd expect its adaptive nature to pull it back onto
the edge if it starts to drift away. "In other words," writes Waldrop,
"you'd expect learning and evolution to make the edge of chaos stable."
A self-reinforcing sweet spot. We might call it dynamically stable,
since its home migrates. Lynn Margulis calls this fluxing, dynamically
persistent state "homeorhesis" -- the homing in on a moving point. It is
the same forever almost-falling that poises the chemical pathways of the
Earth's biosphere in purposeful disequilibrium.
Kauffman takes up the theme by calling systems set up in the lambda
value range "poised systems." They are poised on the edge between chaos
and rigid order. Once you begin to look around, poised systems can be
found throughout the universe, even outside of biology. Many
cosmologists, such as John Barrow, believe the universe itself to be a
poised system, precariously balanced on a string of remarkably delicate
values (such as the strength of gravity, or the mass of an electron)
which, had they varied by a fraction as insignificant as 0.000001
percent, would have caused the universe to collapse in its early
genesis, or fail to condense matter. The
list of these "coincidences" is so long they fill books. According to
mathematical physicist Paul Davies, the coincidences "taken
together...provide impressive evidence that life as we know it depends
very sensitively on the form of the laws of physics, and on some
seemingly fortuitous accidents in the actual values that nature has
chosen for various particle masses, force strengths, and so on." In
brief, the universe and life as we know it are poised on the edge of
chaos.
What if poised systems could tune themselves, instead of being tuned by
creators? There would be tremendous evolutionary advantage in biology
for a complex system that was auto-poised. It could evolve faster, learn
more quickly, and adapt more readily. If evolution selects for a
self-tuning function, Kauffman says, then "the capacity to evolve and
adapt may itself be an achievement of evolution." Indeed, a self-tuning
function would inevitably be selected for at higher levels of evolution.
Kauffman proposes that gene systems do indeed tune themselves by
regulating the number of links, size of genome, and so on, in their own
systems for optimal flexibility.
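Kauffman's proposal can be caricatured in a few lines. Suppose (my toy criterion, not Kauffman's) that "optimal flexibility" means a single flipped gene neither dies out nor avalanches through the whole network — damage spread hovers near one node. A greedy loop can then tune connectivity K toward that mark:

```python
import random

def make_net(n, k, rng):
    """n nodes, each reading k random inputs via a random Boolean table."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    return tuple(
        tables[i][sum(state[s] << b for b, s in enumerate(inputs[i]))]
        for i in range(len(state))
    )

def damage(n, k, rng, steps=20):
    """Run two copies of one network from states differing in one bit;
    return the Hamming distance between them after `steps` updates."""
    inputs, tables = make_net(n, k, rng)
    a = tuple(rng.randrange(2) for _ in range(n))
    b = (1 - a[0],) + a[1:]  # flip a single "gene"
    for _ in range(steps):
        a, b = step(a, inputs, tables), step(b, inputs, tables)
    return sum(x != y for x, y in zip(a, b))

def self_tune(n=30, k=5, trials=20, rounds=30, seed=0):
    """Greedily adjust k so average damage spread hovers near 1 --
    the marginal, 'poised' regime under this toy criterion."""
    rng = random.Random(seed)
    for _ in range(rounds):
        avg = sum(damage(n, k, rng) for _ in range(trials)) / trials
        if avg > 1.5 and k > 1:
            k -= 1          # too chaotic: prune links
        elif avg < 0.5 and k < n - 1:
            k += 1          # too frozen: add links
    return k
```

Started in the chaotic regime at k=5, the loop typically prunes its own wiring down toward low connectivity, the region where Kauffman's networks sit near the boundary between order and chaos — a crude echo of genes regulating their own number of links.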
Self-tuning may be the mysterious key to evolution that doesn't stop -- the
holy grail of open-ended evolution. Chris Langton formally describes
open-ended evolution as a system that succeeds in ceaselessly tuning
itself to higher and higher levels of complexity, or in his
imagery, a system that succeeds in gaining control over more and more
parameters affecting its evolvability and staying balanced on the edge.
In Langton's and Kauffman's framework, nature begins as a pool of
interacting polymers that catalyze themselves into new sets of
interacting polymers in such a networked way that maximal evolution can
occur. This evolution-rich environment produces cells that also learn to
tune their internal connectivity to keep the system at optimal
evolvability. Each step extends the stance at the edge of chaos, poised
on the thin path of optimal flexibility, which pumps up its complexity.
As long as the system rides this upwelling crest of evolvability, it
surfs along.
What you want in artificial systems, Langton says, is something similar.
The primary goal that any system seeks is survival. The secondary search
is for the ideal parameters to keep the system tuned for maximal
flexibility. But it is the third order search that is most exciting: the
search for strategies and feedback mechanisms that will increasingly
self-tune the system each step on the way. Kauffman's hypothesis is that
if systems constructed to self-tune "can adapt most readily, then they
may be the inevitable target of natural selection. The ability to take
advantage of natural selection would be one of the first traits
selected."
As Langton and colleagues explore the space of possible worlds searching
for that sweet spot where life seems poised on the edge, I've heard them
call themselves surfers on an endless summer, scouting for that slo-mo
wave.
Rich Bagley, another Santa Fe Institute fellow, told me, "What I'm
looking for are things that I can almost predict, but not quite." He
explained that what he sought was neither regular nor chaotic, but some
almost-out-of-control, dangerous edge in between.
"Yeah," replied Langton who overheard our conversation. "Exactly. Just
like ocean waves in the surf. They go thump, thump, thump, steady as a
heartbeat. Then suddenly, WHUUUMP, an unexpected big one. That's what we
are all looking for. That's the place we want to find."