The final section in my book is a short course in what we, or at
least I, don't know about complex adaptive systems and the nature of
control. It's a list of questions, a catalogue of holes. A lot of the
questions may seem silly, obvious, trivial, or hardly worth worrying
about, even for nonscientists. Scientists in the pertinent fields may
say the same: these questions are distractions, the ravings of an amateur
science-groupie, the ill-informed musings of a techno-transcendentalist.
No matter. I am inspired to follow this unorthodox short course by a
wonderful paragraph written by Douglas Hofstadter in a foreword to
Pentti Kanerva's obscure technical monograph on sparse distributed
computer memory. Hofstadter writes:
I begin with the nearly trivial observation that members of a familiar
perceptual category automatically evoke the name of the category. Thus,
when we see a staircase (say), no matter how big or small it is, no
matter how twisted or straight, no matter how ornamented or plain,
modern or old, dirty or clean, the label "staircase" spontaneously jumps
to center stage without any conscious effort at all. Obviously, the same
goes for telephones, mailboxes, milkshakes, butterflies, model
airplanes, stretch pants, gossip magazines, women's shoes, musical
instruments, beachballs, station wagons, grocery stores, and so on.
This phenomenon, whereby an external physical stimulus indirectly
activates the proper part of our memory, permeates human life and
language so thoroughly that most people have a hard time working up any
interest in it, let alone astonishment; yet it is probably the most
important of all mental mechanisms.
To be astonished by a question no one else can get worked up about, or
to be astonished by a matter nobody considers a problem, is perhaps a
better paradigm for the progress of science.
This book is based on my astonishment that nature and machines work at
all. I wrote it by trying to explain my amazement to the reader. When I
came to something I didn't understand, I wrestled with it, researched,
or read until I did, and then started writing again until I came to the
next question I couldn't readily answer. Then I'd do the cycle again,
round and round. Eventually I would come to a question that stopped me
from writing further. Either no one had an answer, or those I asked
offered only the stock response and could not see my perplexity at all.
These halting
questions never seemed weighty at first encounter -- just a question that
seemed to lead nowhere for now. But in fact they are protoanomalies.
Like Hofstadter's unappreciated astonishment at our mind's ability to
categorize objects without conscious effort, out of these quiet riddles
will come the recognition that they demand explanation, then insight,
and perhaps revolutionary understanding.
Readers may be perplexed themselves when they see that most of these
questions appear to be the very ones I seemed to have answered in the
preceding chapters! But really all I did was drive around these
questions, surveying their girth, hill-climbing up them until I was
stuck on a false summit. In my experience most good questions come while
stuck on a partial answer somewhere else. This book has been an endeavor
to find interesting questions. But on the way, some of the rather
ordinary questions stopped me. They follow below.
I often use the word "emergent" in this book. As used by the
practitioners of complexity, it means something like: "that organization
which is generated out of parts acting in concert." But the meaning of
emergent begins to disappear when scrutinized, leaving behind a vague
impression that the word is, at bottom, meaningless. I tried
substituting the word "happened" for "emerged" in every instance, and
it seemed to work. Try it. Global order happens from local rules. What
do we mean by emergent?
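One reason "emergent" resists definition may be that the phenomenon is
easy to produce yet hard to characterize. Here is a minimal sketch, in
Python, of global order happening from local rules; the toy
one-dimensional cellular automaton (rule 90) is my own illustration,
not anything drawn from the complexity literature. Each cell consults
only its two immediate neighbors, yet a large-scale, self-similar
pattern appears:

    # Rule 90: each cell's next state is simply the XOR of its two
    # neighbors -- a purely local rule with no global coordinator.
    WIDTH, STEPS = 63, 32
    row = [0] * WIDTH
    row[WIDTH // 2] = 1  # start with a single live cell in the middle

    for _ in range(STEPS):
        print("".join("#" if cell else " " for cell in row))
        row = [row[i - 1] ^ row[(i + 1) % WIDTH] for i in range(WIDTH)]

Run it and a Sierpinski triangle scrolls down the screen. Did the
pattern emerge, or did it merely happen? The code leaves the question
exactly where the paragraph above left it.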
And what is "complexity" anyway? I looked forward to the two 1992
science books identically titled Complexity, one by Mitch Waldrop and
one by Roger Lewin, because I was hoping one or the other would provide
me with a practical measurement of complexity. But both authors wrote
books on the subject without hazarding a guess at a usable definition.
How do we know one thing or process is more complex than another? Is a
cucumber more complex than a Cadillac? Is a meadow more complex than a
mammalian brain? Is a zebra more complex than a national economy? I am
aware of three or four mathematical definitions for complexity, none of
them broadly useful in answering the type of questions I just asked. We
are so ignorant of complexity that we haven't yet asked the right
question about what it is.
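One of the mathematical definitions alluded to above is algorithmic
(Kolmogorov) complexity: the length of the shortest program that
reproduces a description of a thing. It is uncomputable in general, but
the size of a compressed file gives a crude upper bound. The sketch
below, which uses zlib merely as a convenient stand-in compressor, also
shows why the measure is not broadly useful: pure randomness scores
highest of all.

    import os
    import zlib

    def complexity_estimate(data: bytes) -> int:
        """Crude upper bound on Kolmogorov complexity: compressed length."""
        return len(zlib.compress(data, 9))

    ordered = b"ab" * 500     # 1,000 bytes of pure repetition
    noise = os.urandom(1000)  # 1,000 bytes of randomness

    print(complexity_estimate(ordered))  # tiny: the pattern compresses away
    print(complexity_estimate(noise))    # ~1,000: incompressible
    # The rub: by this measure, television static is "more complex" than
    # either a cucumber or a Cadillac.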
If evolution tends toward increasing complexity, why? And if it really
does not, why does it appear to? Is complexity in fact more efficient
than simplicity?
There seems to be a "requisite variety" -- a minimum complexity or
diversity of parts -- for such processes as self-organization, evolution,
learning, and life. How do we know for sure when enough variety is
enough? We don't even have a good measure for diversity. We have
intuitive feelings but we can't translate them into anything very
precise. What is variety?
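One candidate measure, offered here only as a sketch of what a precise
translation might look like, is the Shannon diversity index from
information theory: the entropy of the distribution of types, which
rises with both the number of types and the evenness of their
abundances.

    from collections import Counter
    from math import log

    def shannon_diversity(individuals):
        """Shannon entropy H = -sum(p * ln p) over the type frequencies."""
        counts = Counter(individuals)
        total = sum(counts.values())
        return -sum((n / total) * log(n / total) for n in counts.values())

    # Two communities of ten organisms each, same five species:
    even = ["a", "b", "c", "d", "e"] * 2       # evenly spread
    skewed = ["a"] * 6 + ["b", "c", "d", "e"]  # one species dominates

    print(shannon_diversity(even))    # ~1.609 (ln 5, the maximum for 5 types)
    print(shannon_diversity(skewed))  # ~1.228: dominance lowers diversity

Even this tidy number dodges the real question: it counts types and
frequencies but says nothing about how different the types are from one
another.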
The "edge of chaos" often sounds like "moderation in all things." Is
it merely playing Goldilocks to define the values at which systems are
maximally adaptable as "just right for adaptation"? Is this yet another
necessary tautology?
In computer science there is a famous conjecture called the
Church-Turing hypothesis, which undergirds much of the reasoning in
artificial intelligence and artificial life. The hypothesis says: a
universal computing machine can compute anything that another universal
computing machine can compute, given unlimited time and an infinite
tape. But my goodness! Unlimited time and space is the precise
difference between the living and the dead. The dead have infinite time
and space. The living live in finitude. So while, within a certain
range, computational processes are independent of the hardware they run
on (one machine can emulate anything another can), there are real limits
to the fungibility of processes. Artificial life is based on the premise
that life can be extracted from its carbon-based hardware and set to run
on a different matrix somewhere else. The experiments so far have borne
this out to a greater degree than anyone expected. But where are the
limits in real
time and real space?
What, if anything, cannot be simulated?
The quest for artificial intelligence and artificial life is wrapped
up (some say bogged down) in the important riddle of whether a
simulation of an extremely complex system is a fake or something real in
its own right. Maybe it is hyperreal, or maybe the term hyperreality
just ducks the question. No one doubts the ability of a model to imitate
an original thing. The questions are: What sort of reality do we assign
a simulation of a thing? What, if any, are the distinctions between a
simulation and a reality?
How far can you compress a meadow into seeds? This was the question
the prairie restorers inadvertently asked. Can you reduce the treasure
of information contained in an entire ecosystem into several bushels of
seeds, which, when watered, would reconstitute the awesome complexity of
prairie life? Are there important natural systems which simply cannot be
reduced and modeled accurately? Such a system would be its own smallest
expression, its own model. Are there any artificial large systems that
cannot be compressed or abstracted?
I'd like to know more about stability. If we build a "stable"
system, is there some way we can define that? What are the boundary
conditions, the requirements, for stable complexity? When does change
cease to be change?
Why do species ever go extinct? If all of nature is hourly working
to adapt, never resting in its effort to outwit competitors and exploit
its environment, why do certain classes of species fail? Perhaps some
organisms are simply better adapted than others. But why would the
universal mechanism of nature sometimes work and sometimes not for
entire types of organisms, allowing particular groups to lag and others
to advance? More precisely, why would the dynamics of adaptation work
for some organisms but not others? Why does nature allow some lineages
to be pushed into forms that are inherently inefficient? There is
a case of an oysterlike bivalve that evolved a more and more spiraled
shell until, just before extinction, the valves could barely open. Why
doesn't the organism return to the range of the workable? And why does
extinction run in families and groups, as if bad genes were
responsible? How could nature produce a group of bad genes? Perhaps
extinctions are caused by something outside, like comets and asteroids.
Paleontologist Dave Raup postulates that 75 percent of all extinction
events were caused by asteroid impacts. If there were no asteroids would
there be no extinctions? If there were no extinctions of species on
Earth, what would life look like now? Why, for that matter, do complex
systems of any sort fail or die?
On the other hand, why, in this coevolutionary world, is anything at
all stable?
Every figure I've heard for both natural and artificial
self-sustaining systems puts the self-stabilizing mutation rate between
0.01 percent and 1 percent. Are mutation rates universal?
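The question can at least be poked at in miniature. The following toy
experiment is purely illustrative, with made-up parameters: bit-string
genomes evolve toward an arbitrary target under different per-bit
mutation rates. Rates far below the quoted band stall adaptation for
lack of variation; rates far above it erode whatever selection builds.

    import random

    GENES = 64
    TARGET = [1] * GENES  # an arbitrary fitness peak: all ones

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def evolve(mutation_rate, generations=200, pop_size=50):
        pop = [[random.randint(0, 1) for _ in range(GENES)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]  # keep the fitter half
            # Each child copies a random parent, flipping each bit
            # with probability mutation_rate.
            pop = [[1 - g if random.random() < mutation_rate else g
                    for g in random.choice(parents)]
                   for _ in range(pop_size)]
        return max(fitness(g) for g in pop)

    for rate in (0.0001, 0.001, 0.01, 0.1):  # 0.01 percent up to 10 percent
        print(rate, evolve(rate))

Whether anything so crude says something about natural self-sustaining
systems is, of course, part of the question.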
What are the downsides of connecting everything to everything?
In the space of all possible forms of life, life on Earth is but a tiny
sliver -- one attempt at creativity. Is there a limit to how much life a
given quantity of matter can hold? Why isn't there more variety of life
on Earth? How come the universe is so small?
Are the laws of the universe evolvable? If the laws governing the
universe arose from within the universe, might they be susceptible to
the forces of self-adjustment? Perhaps the very foundational laws
upholding all sensible laws are in flux. Are we playing in a game where
all the rules are constantly being rewritten?
Can evolution evolve its own teleological purpose? If organisms,
which are but a federation of mindless agents, can originate goals, can
evolution itself, equally blind and dumb but in a way a very slow
organism, also evolve a goal?
And what about God? God gets no honor in the academic papers of
artificial lifers, evolutionary theorists, cosmologists, or
simulationists. But much to my surprise, in private conversations these
same researchers routinely speak of God. As used by scientists, God is a
coolly nonreligious technical concept, closer to god -- a local creator.
When talking of worlds, both real and modeled, God is an almost
algebraically precise notation standing for whatever "X" it is that,
operating outside a world, has created that world. "Okay, you're God..." says
one computer scientist during a demo when he means that I'm now setting
the rules for the world. God is a shorthand for the uncreated observer
making things real. God thus becomes a scientific term, and a scientific
concept. It doesn't have the philosophical subtleties of prime cause, or
the theological finery of Creator; it is merely a handy way to talk
about the necessary initial conditions to run a world. So what are the
requirements for godhood? What makes a good god?