"We've got to get used to dealing in billions of things!" Kauffman
once told an audience of scientists. Huge multitudes of anything are
different: the more polymers there are, the exponentially more possible
interactions in which one polymer can trigger the manufacture of yet
another polymer. Therefore, at some point, a droplet loaded up with
increasing diversity and numbers of polymers will reach a threshold
where a certain number of polymers in the set will suddenly fall out
into a spontaneous lap circle. They will form an auto-generated,
self-sustaining, self-transforming network of chemical pathways. As long
as energy flows in, the network hums, and the loop stands.
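The threshold Kauffman describes can be played with on a computer. What follows is only a toy sketch, not his chemistry: it treats catalysis as a random directed graph among a set of hypothetical polymer species and uses the appearance of a closed loop of catalysts as a crude stand-in for the lap circle. The function name, the one-in-400 per-pair catalysis probability, and the species counts are illustrative choices, nothing more.

```python
import random

def has_catalytic_loop(num_species, p, rng):
    """Build a random directed 'catalysis' graph: an edge i -> j means
    species i catalyzes the formation of species j. Return True if the
    graph contains a cycle -- a crude stand-in for a self-sustaining
    autocatalytic loop."""
    edges = {i: [j for j in range(num_species) if j != i and rng.random() < p]
             for i in range(num_species)}

    # Iterative depth-first search; colors: 0 = unvisited,
    # 1 = on the current path, 2 = finished. A back edge closes a loop.
    color = [0] * num_species
    for start in range(num_species):
        if color[start]:
            continue
        color[start] = 1
        stack = [(start, iter(edges[start]))]
        while stack:
            node, it = stack[-1]
            advanced = False
            for nxt in it:
                if color[nxt] == 1:
                    return True              # back edge: a loop exists
                if color[nxt] == 0:
                    color[nxt] = 1
                    stack.append((nxt, iter(edges[nxt])))
                    advanced = True
                    break
            if not advanced:
                color[node] = 2
                stack.pop()
    return False

if __name__ == "__main__":
    rng = random.Random(0)
    p = 0.0025          # chance that any one species catalyzes any other
    for n in (50, 200, 300, 400, 800):
        hits = sum(has_catalytic_loop(n, p, rng) for _ in range(10))
        print(f"{n:4d} species: loops in {hits} of 10 random droplets")
```

With those numbers, loops should begin showing up once a droplet holds a few hundred species, the point at which the expected number of catalysts per species approaches one. That sudden tipping from rare to nearly certain is the flavor of Kauffman's threshold.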
Codes, chemicals, or inventions can in the right circumstances produce
new codes, chemicals, or inventions. It is clear this is the model of
life. An organism produces new organisms which in turn create newer
organisms. One small invention (the transistor) produces other
inventions (the computer) which in turn permit yet other inventions
(virtual reality). Kauffman wants to generalize this process
mathematically to say that functions in general spawn newer functions
which in turn birth yet other functions.
"Five years ago," recalls Kauffman, "Brian Goodwin [an evolutionary
biologist] and I were sitting in some World War I bunker in northern
Italy during a rainstorm talking about autocatalytic sets. I had this
profound sense then that there's a deep similarity between natural
selection -- what Darwin told us -- and the wealth of nations -- what Adam Smith
told us. Both have an invisible hand. But I didn't know how to proceed
any further until I saw Walter Fontana's work with autocatalytic sets,
which is gorgeous."
I mentioned to Kauffman the controversial idea that in any society with
the proper strength of communication and information connection,
democracy becomes inevitable. Where ideas are free to flow and generate
new ideas, the political organization will eventually head toward
democracy as an unavoidable self-organizing strong attractor. Kauffman
agreed with the parallel: "When I was a sophomore in '58 or '59 I wrote
a paper in philosophy that I labored over with much passion. I was
trying to figure out why democracy worked. It's obvious that democracy
doesn't work because it's the rule of the majority. Now, 33 years later,
I see that democracy is a device that allows conflicting minorities to
reach relative fluid compromises. It keeps subgroups from getting stuck
on some locally good but globally inferior solution."
It is not difficult to imagine Kauffman's networks of Boolean logic and
random genomes mirroring the workings of town halls and state capitals.
Because miniconflicts and microrevolutions are staged as a continuous
process at the local level, large-scale macro- and mega-revolutions are
avoided, and the whole system is neither chaotic nor stagnant. Perpetual
change is fought out in small towns, while the nation remains admirably
stable -- thus creating a climate to keep the small towns in ceaseless
compromise-seeking modes. That circular support is another lap game, and
an indication that such systems are similar in dynamics to the
self-supporting vivisystems.
"This is just intuitive," Kauffman cautions me, "but you can feel your
way from Fontana's 'string-begets-string-begets-string' to
'invention-begets-invention-begets-invention' to cultural evolution and
then to the wealth of nations." Kauffman makes no bones about the scale
of his ambition: "I am looking for the self-consistent big picture that
ties everything together, from the origin of life, as a self-organized
system, to the emergence of spontaneous order in genomic regulatory
systems, to the emergence of systems that are able to adapt, to
nonequilibrium price formation which optimizes trade among organisms, to
this unknown analog of the second law of thermodynamics. It is all one
picture. I really feel it is. But the image I'm pushing on is this: Can
we prove that a finite set of functions generates this infinite set of
possibilities?"
Whew. I call that a "Kauffman machine." A small but well-chosen set of
functions that connect into an auto-generating ring and produce an
infinite jet of more complex functions. Nature is full of Kauffman
machines. An egg cell producing the body of a whale is one. An evolution
machine generating a flamingo over a billion years from a bacterial blob
is another. Can we make an artificial Kauffman machine? This may more
properly be called a von Neumann machine because von Neumann asked the
same question in the early 1940s. He wondered, Can a machine make
another machine more complex than itself? Whatever it is called, the
question is the same: How does complexity build itself up?
"You can't ask the experimental question until, roughly speaking, the
intellectual framework is in place. So the critical thing is asking
important questions," Kauffman warned me. Often during our
conversations, I'd catch Kauffman thinking aloud. He'd spin off wild
speculations and then seize one and twirl it around to examine it from
various directions. "How do you ask that question?" he asked himself
rhetorically. His quest was for the Question of All Questions rather
than the Answer of All Answers. "Once you've asked the question," he
said, "there's a good chance of finding some sort of answer.
A Question Worth Asking. That's what Kauffman thought of his notion of
self-organized order in evolutionary systems. Kauffman confided to me:
"Somehow, each of us in our own heart is able to ask questions that we
think are profound in the sense that the answer would be truly
important. The enormous puzzle is why in the world any of us ask the
questions that we do."
There were many times when I felt that Stuart Kauffman, medical doctor,
philosopher, mathematician, theoretical biologist, and MacArthur Award
recipient, was embarrassed by the wild question he had been dealt.
"Order for free" flies in the face of a conservative science that has
rejected every past theory of creative order hidden in the universe. It
would probably reject his. While the rest of the contemporary scientific
world sees butterflies of random chance sowing out-of-control, nonlinear
effects in every facet of the universe, Kauffman asks if perhaps the
butterflies of chaos sleep. He wakes the possibility of an overarching
design dwelling within creation, quieting disorder and birthing an
ordered stillness. It's a notion that for many sounds like mysticism. At
the same time, the pursuit and framing of this single huge question is
the quasar source of Kauffman's considerable pride and energy: "I would
be lying if I didn't tell you that when I was 23 and started wondering
how in the world a genome with 100,000 genes controls the emergence of
different cell types, I felt that I had found something profound, I had
found a profound question. And I still feel that way. I think God was
very nice to me."
"If you write something about this," Kauffman says softly, "make sure
you say that this is only something crazy that people are thinking
about. But wouldn't it be wonderful if somehow there are laws that make
laws that make laws, so that the universe is, in John Wheeler's words,
something that is looking in at itself!? The universe posts its own
rules and emerges out of a self-consistent thing. Maybe that's not
impossible, this notion that quarks and gluons and atoms and elementary
particles have invented the laws by which they transform one another."
Deep down Kauffman felt that his systems built themselves. In some way
he hoped to discover, evolutionary systems controlled their own
structure. From the first glimpse of his visionary network image, he had
a hunch that in those connections lay the answer to evolution's
self-governance. He was not content to show that order emerged
spontaneously and inevitably; he felt that control of that order
emerged spontaneously as well. To that end he charted thousands of runs of
random ensembles in computer simulation to see which type of connections
permitted a swarm to be most adaptable. "Adaptable" means the ability of
a system to adjust its internal links so that it fits its environment over
time. Kauffman views an organism, a fruitfly say, as adjusting the
network of its genes over time so that the result of the genetic
network -- a fly body -- best fits its changing surroundings of food, shelter,
and predators. The Question Worth Asking was: what controlled the
evolvability of the system? Could the organism itself control its
evolvability?
The prime variable Kauffman played with was the connectivity of the
network. In a sparsely connected network, each node would connect, on
average, to only one other node, or fewer. In a richly connected network,
each node would link to ten or a hundred or a thousand or a million
other nodes. In theory the limit to the number of connections per node
is simply the total number of nodes, minus one. A million-headed network
could have a million-minus-one connections at each node; every node is
connected to every other node. To continue our rough analogy, every
employee of GM could be directly linked to all 749,999 other employees
of GM.
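In Kauffman's Boolean-network models, the nodes are idealized genes: each watches the on/off states of K other genes and responds according to its own randomly assigned rule. The sketch below wires up such a network in Python; the tiny sizes and the names are mine, chosen for illustration, but the construction, K randomly chosen inputs per node plus a random truth table for each, follows that scheme.

```python
import random

def random_boolean_network(n, k, rng):
    """Wire up a random Boolean network: each of n nodes reads k randomly
    chosen other nodes and updates according to its own random truth table.
    k can range from 0 up to n - 1 (every node wired to every other)."""
    inputs = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Advance every node one tick, all in parallel."""
    new_state = []
    for ins, table in zip(inputs, tables):
        index = 0
        for j in ins:               # pack the k input bits into a table index
            index = (index << 1) | state[j]
        new_state.append(table[index])
    return new_state

if __name__ == "__main__":
    rng = random.Random(1)
    n, k = 20, 2                    # 20 toy "genes", 2 inputs apiece
    inputs, tables = random_boolean_network(n, k, rng)
    state = [rng.randrange(2) for _ in range(n)]
    for _ in range(8):
        print("".join(map(str, state)))
        state = step(state, inputs, tables)
```

Running it prints eight successive states of a twenty-node net with two inputs apiece, the kind of sparse wiring that turns out to matter in what follows.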
As Kauffman varied this connectivity parameter in his generic networks,
he discovered something that would not surprise the CEO of GM. A system
where few agents influenced other agents was not very adaptable. The
soup of connections was too thin to transmit an innovation. The system
would fail to evolve. As Kauffman increased the average number of links
between nodes, the system became more resilient, "bouncing back" when
perturbed. The system could maintain stability while the environment
changed. It would evolve. The completely unexpected finding was that
beyond a certain level of linking density, adding still more connections
would only decrease the adaptability of the system as a whole.
Kauffman graphed this effect as a hill. The top of the hill was optimal
flexibility to change. One low side of the hill was a sparsely connected
system: flat-footed and stagnant. The other low side was an overly
connected system: a frozen grid-lock of a thousand mutual pulls. So many
conflicting influences came to bear on one node that whole sections of
the system sank into rigid paralysis. Kauffman called this second
extreme a "complexity catastrophe." Much to everyone's surprise, you
could have too much connectivity. In the long run, an overly linked
system was as debilitating as a mob of uncoordinated loners.
Somewhere in the middle was a peak of just-right connectivity that gave
the network its maximal nimbleness. Kauffman found this measurable
"Goldilocks'" point in his model networks. His colleagues had trouble
believing his maximal value at first because it seemed counterintuitive
at the time. The optimal connectivity for the distilled systems Kauffman
studied was very low, "somewhere in the single digits." Large networks
with thousands of members adapted best with less than ten connections
per member. Some nets peaked at less than two connections on average per
node! A massively parallel system did not need to be heavily connected
in order to adapt. Minimal average connection, done widely, was
enough.
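One way to see the hill for yourself is an NK-style landscape in the spirit of Kauffman's models: n genes, each coupled to k others, with a random fitness contribution for every combination of a gene's state and its neighbors' states. The sketch below is a toy under those assumptions, not a reproduction of Kauffman's runs. An adaptive walk flips one gene at a time and keeps any improvement; the fitness of the peak it gets stuck on tends to rise with a little coupling and sag again as k approaches n minus one, the complexity catastrophe in miniature.

```python
import random

def adaptive_walk(n, k, rng, flips=2000):
    """One-mutant hill climbing on a random NK landscape: n genes, each
    coupled to k others, with a random contribution for every combination
    of a gene's state and its neighbors' states. Returns the best fitness
    the walk reaches -- in practice, a local peak."""
    neighbors = [rng.sample([j for j in range(n) if j != i], k)
                 for i in range(n)]
    tables = [dict() for _ in range(n)]       # contributions, filled lazily

    def fitness(genome):
        total = 0.0
        for i in range(n):
            key = (genome[i],) + tuple(genome[j] for j in neighbors[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()
            total += tables[i][key]
        return total / n

    genome = [rng.randrange(2) for _ in range(n)]
    best = fitness(genome)
    for _ in range(flips):
        trial = list(genome)
        trial[rng.randrange(n)] ^= 1          # flip one randomly chosen gene
        f = fitness(trial)
        if f > best:
            genome, best = trial, f
    return best

if __name__ == "__main__":
    rng = random.Random(2)
    n = 24
    for k in (0, 2, 4, 8, 16, 23):
        peaks = [adaptive_walk(n, k, rng) for _ in range(10)]
        print(f"k = {k:2d}: mean fitness of local peaks = "
              f"{sum(peaks) / len(peaks):.3f}")
```

The particular n and k values, the number of flips, and the ten-walk average are arbitrary knobs; the shape of the curve is the point.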
Kauffman's second unexpected finding was that this low optimal value
didn't seem to fluctuate much, no matter how many members made up a
specific network. In other words, as more members were added to the
network, it didn't pay (in terms of systemwide adaptability) to increase
the number of links to each node. To evolve most rapidly, add members
but don't increase average link rates. This result confirmed what Craig
Reynolds had found in his synthetic flocks: you could load a flock up
with more and more members without having to reconfigure its
structure.
Kauffman found that at the low end, with fewer than two connections per
agent or organism, the whole system wasn't nimble enough to keep up with
change. If the community of agents lacked sufficient internal
communication, it could not solve a problem as a group. More exactly,
the agents fell into isolated patches of cooperative feedback, and those
patches didn't interact with each other.
At the ideal number of connections, the ideal amount of information
flowed between agents, and the system as a whole found the optimal
solutions consistently. Even if the environment was changing rapidly,
the network remained stable -- persisting as a whole over time.
Kauffman's Law states that above a certain point, increasing the
richness of connections between agents freezes adaptation. Nothing gets
done because too many actions hinge on too many other contradictory
actions. In the landscape metaphor, ultra-connectance produces
ultra-ruggedness, making any move a likely fall off a peak of adaptation
into a valley of nonadaptation. Put another way, too many
agents have a say in each other's work, and bureaucratic rigor mortis
sets in. Adaptability conks out into grid-lock. For a contemporary
culture primed to the virtues of connecting up, this low ceiling of
connectivity comes as unexpected news.
We postmodern communication addicts might want to pay attention to this.
In our networked society we are pumping up both the total number of
people connected (in 1993, the global network of networks was expanding
at the rate of 15 percent additional users per month!), and the number
of people and places to whom each member is connected. Faxes, phones,
direct junk mail, and large cross-referenced databases in business and
government in effect increase the number of links each person carries.
Neither expansion particularly increases the adaptability of our system
(society) as a whole.