In the 1970s, after thousands of years of telling tales about the
Earth's past and creation, the inhabitants of planet Earth began to tell
their first story of what might happen to the planet in the future.
Rapid communications of the day gave them their first comprehensive
real-time view of their home. The portrait from space was enchanting -- a
cloudy blue marble hanging delicately in the black deep. But down on the
ground the emerging tale wasn't so pretty. Reports from every quadrant
of the globe said the Earth was unraveling.
Tiny cameras in space brought back photographs of the whole Earth that
were awesome in the old-fashioned sense of the word: at once inspiring
and frightening. The cameras, together with reams of ground data pouring
in from every country, formed a distributed mirror reflecting a picture
of the whole system. The entire biosphere was becoming more transparent.
The global system began to look ahead -- as systems do -- wanting to know what
might come next, say, in the next 20 years.
The first impression arising from the data-collecting membrane around
the world was that the planet was wounded. No static world map could
verify (or refute) this picture. No globe could chart the ups and downs
of pollution and population over time, or decipher the interconnecting
influence of one factor upon another. No movie from space could play out
the question, what if this continues? What was needed was a planetary
prediction machine, a global what-if spreadsheet.
In the computer labs of MIT, an unpretentious engineer cobbled together
the first global spreadsheet. Jay Forrester had been dabbling in
feedback loops since 1939, perfecting machinery-steering
servomechanisms. Together with Norbert Wiener, his colleague at MIT,
Forrester followed the logical path of servomechanisms right into the
birth of computers. As he helped invent digital computers, Forrester
applied the first computing machines to an area outside of typical
engineering concerns. He created computer models to assist the
management of industrial firms and manufacturing processes. The
usefulness of these company models inspired Forrester to tackle a
simulation of a city, which he modeled with the help of a former mayor
of Boston. He intuitively, and quite correctly, felt that cascading
feedback loops -- impossible to track with paper and pencil, but child's
play for a computer -- were the only way to approach the web of influences
between wealth, population, and resources. Why couldn't the whole world
be modeled?
Sitting on an airplane on the way home from a conference on "The
Predicament of Mankind" held in Switzerland in 1970, Forrester began to
sketch out the first equations that would form a model he called "World
Dynamics."
It was rough. A thumbnail sketch. Forrester's crude model mirrored the
obvious loops and forces he intuitively felt governed large economies.
For data, he grabbed whatever was handy as a quick estimate. The Club of
Rome, the group that had sponsored the conference, came to MIT to
evaluate the prototype Forrester had tinkered up. They were encouraged
by what they saw. They secured funding from the Volkswagen Foundation to
hire Forrester's associate, Dennis Meadows, to develop the model to the
next stage. For the rest of 1970, Forrester and Meadows improved the
World Dynamics model, designing more sophisticated process loops and
scouring the world for current data.
Dennis Meadows, together with his wife Dana and two other coauthors,
published the souped-up model, now filled with real data, as the "Limits
to Growth." The simulation was wildly successful as the first global
spreadsheet. For the first time, the planetary system of life, earthly
resources, and human culture was abstracted, embodied in a simulation,
and set free to roam into the future. The Limits to Growth
also succeeded as a global air raid siren, alerting the world to the
conclusions of the authors: that almost every extension of humankind's
current path led to civilization's collapse.
The result of the Limits to Growth model ignited thousands of
editorials, policy debates, and newspaper articles around the world for
many years following its release. "A Computer Looks Ahead and Shudders"
screamed one headline. The gist of the model's discovery was this: "If
the present growth trends in world population, industrialization,
pollution, food production, and resource depletion continue unchanged,
the limits to growth on this planet will be reached sometime within the
next 100 years." The modelers ran the simulation hundreds of times in
hundreds of slightly different scenarios. But no matter how they made
tradeoffs, almost all the simulations predicted population and living
standards either withering away or bubbling up quickly to burst shortly
thereafter.
Primarily because the policy implications were stark, clear, and
unwelcome, the model was highly controversial and heavily scrutinized.
But it forever raised the discussion of resources and human activity to
the necessary planetary scale.
The Limits to Growth model was less successful in spawning better
predictive models, which the authors had hoped to spark with their
pioneer efforts. Instead, in the intervening 20 years, world models came
to be mistrusted, in large part because of the controversy of Limits to
Growth. Ironically, the only world model visible in the public eye now
(two decades later) is the Limits to Growth. The authors have reissued
it on its 20th anniversary, with only slight changes.
As currently implemented, the Limits to Growth model runs on a software
program called Stella. Stella takes the system dynamics approach worked
out by Jay Forrester on mainframe computers and ports it over to the
visual interface of a Macintosh. The Limits to Growth model is woven out
of an impressive web of "stocks" and "flows." Stocks (money, oil, food,
capital, etc.) flow into certain nodes (representing general processes
such as farming), where they trigger outflows of other stocks. For
instance money, land, fertilizer, and labor flow into farms to trigger
an outflow of raw food. Food, oil, and other stocks flow into factories
to produce fertilizer, to complete one feedback loop. A spaghetti maze
of loops, subloops, and cross-loops constitutes the entire world. The
leverage each loop has upon the others is adjustable and determined by
ratios found in real-world data: how much food is produced per hectare
per kilo of fertilizer and water, generating how much pollution and
waste. As is true in all complex systems, the impact of a single
adjustment cannot be calculated beforehand; it must be played out in the
whole system to be measured.
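To make the stock-and-flow bookkeeping concrete, here is a minimal
sketch in Python of the kind of loop Stella automates. Every stock,
coefficient, and time scale below is an invention of mine for
illustration; none of it is the actual Limits to Growth (World3) model
or its data.

    # A toy stock-and-flow world in the spirit of Stella / system dynamics.
    # All stocks and coefficients are invented for illustration; this is
    # not the World3 model or its data.

    def step(stocks, dt=1.0):
        """Advance the toy world by one time step."""
        # Farming: fertilizer and land flow in; food and pollution flow out.
        food_out = 0.8 * min(stocks["fertilizer"], 0.1 * stocks["land"]) * dt
        stocks["fertilizer"] -= 0.5 * food_out
        stocks["food"] += food_out
        stocks["pollution"] += 0.05 * food_out

        # Industry: food and oil flow in; fertilizer flows out, closing one
        # feedback loop, and pollution accumulates as a side effect.
        fert_out = 0.3 * min(stocks["food"], stocks["oil"]) * dt
        stocks["food"] -= 0.2 * fert_out
        stocks["oil"] -= 0.1 * fert_out
        stocks["fertilizer"] += fert_out
        stocks["pollution"] += 0.1 * fert_out
        return stocks

    world = {"food": 100.0, "oil": 500.0, "fertilizer": 50.0,
             "land": 1000.0, "pollution": 0.0}
    for year in range(20):
        world = step(world)
    print(world)

Changing any single ratio, say how much fertilizer a unit of food
requires, and rerunning the loop is the only way to learn its
system-wide effect.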
Vivisystems must anticipate to survive. Yet the complexity of the
prediction apparatus must not overwhelm the vivisystem itself. As an
example of the difficulties inherent in prediction machinery, we can
examine the Limits to Growth model in detail. There are four reasons to
choose this particular model. The first is that its reissue demands that
it be (re)considered as a reliable anticipatory apparatus for human
endeavor. Second, the model provides a handy 20-year period over which
to evaluate it. Do the patterns it detected 20 years ago still prevail?
Third, one of the virtues of the Limits to Growth model is that it is
critiqueable. It generates quantifiable results rather than vague
descriptions. It can be tested. Fourth, nothing could be more ambitious
than to model the future of human life on Earth. The success or failure
of this prominent attempt can teach much about using models to predict
extremely complex adaptive systems. Indeed one has to ask: Can such a
seemingly unpredictable process as the world be simulated or anticipated
with any confidence at all? Can feedback-driven models be reliable
predictors of complex phenomena?
The Limits to Growth model has many things going for it. Among them: It
is not overly complex; it is pumped by feedback loops; it runs
scenarios. But among the weaknesses I see in the model are the
following:
Narrow overall scenarios. Rather than explore possible futures of any
real diversity, Limits to Growth plays out a multitude of minor
variations upon one fairly narrow set of assumptions. Mostly the
"possible futures" it explores are those that seem plausible to the
authors. Twenty years ago they ignored scenarios not based on what they
felt were reasonable assumptions of expiring finite resources. But
resources (such as rare metals, oil, and fertilizer) didn't diminish.
Any genuinely predictive model must be equipped with the capability to
generate "unthinkable" scenarios. It is important that a system have
sufficient elbowroom in the space of possibilities to wander in places
we don't expect. There is an art to this, because a model with too many
degrees of freedom becomes unmanageable, while one too constrained
becomes unreliable.
Wrong assumptions. Even the best model can be sidetracked by false
premises. The original key assumption of the model was that the world
contains only a 250-year supply of nonrenewable resources, and that the
demands on that supply are exponential. Twenty years later we know both
those assumptions are wrong. Reserves of oil and minerals have grown;
their prices have not increased; and demand for materials like copper
is not exponential. In the 1992 reissue of the model, these assumptions
were adjusted. Now the foundational assumption is that pollution must
rise with growth. I can imagine that premise needing to be adjusted in
the next 20 years, if the last 20 are a guide. "Adjustments" of this
basic nature have to be made because the Limits to Growth model
has...
No room for learning. A group of early critics of the model once joked
that they ran the Limits to Growth simulation from the year 1800 and by
1900 found a "20-foot level of horse manure on the streets." At the rate
horse transportation was increasing then, this would have been a logical
extrapolation. The half-jesting critics felt that the model made no
provisions for learning technologies, increasing efficiencies, or the
ability of people to alter their behavior or invent solutions.
There is a type of adaptation wired into the model. As a crisis arises
(such as an increase in pollution), capital assets are shifted to cover
it (so the coefficient of pollution generated is lowered). But this
learning is neither decentralized nor open-ended. In truth, there's no
easy way to model either. Much of the research reported elsewhere in
this book is about the pioneering attempts to achieve distributed
learning and open-ended growth in manufactured settings, or to enhance
the same in natural settings. Without decentralized open-ended learning,
the real world will overtake the model in a matter of days.
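The flavor of that wired-in adjustment can be shown in a few lines of
code. The threshold, the reallocation rate, and the coefficients below
are assumptions made up for the sketch, not the model's actual values;
the point is that the one rule is fixed and global.

    # Sketch of a single, centralized adaptation rule (illustrative
    # numbers only): when pollution crosses a threshold, capital is
    # diverted to abatement, lowering the pollution-per-output coefficient.

    POLLUTION_LIMIT = 100.0      # hypothetical crisis threshold
    pollution_coeff = 0.10       # pollution generated per unit of output
    capital, pollution = 1000.0, 0.0

    for year in range(50):
        production = 0.05 * capital
        pollution += pollution_coeff * production

        if pollution > POLLUTION_LIMIT:
            capital -= 0.02 * capital      # shift 2% of capital to abatement
            pollution_coeff *= 0.95        # abatement shaves the coefficient
            # The rule itself never changes and no new rules can appear:
            # the learning is neither decentralized nor open-ended.

Contrast that with millions of local actors each revising their own
rules as they go, which is the kind of learning the model lacks.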
In real life, the populations of India, Africa, China, and South America
don't change their actions based upon the hypothetical projections of
the Limits to Growth model. They adapt because of their own immediate
learning cycle. For instance, the Limits to Growth model was caught
off-guard (like most other forecasts) by global birth rates that dropped
faster than anyone predicted. Was this due to the influence of doomsday
projections like Limits to Growth? The more plausible mechanism is that
educated women have fewer children and are more prosperous, and that
prosperous people are imitated. They don't know about, or care about,
global limits to growth. Government incentives assist local dynamics
already present. People anywhere act (and learn) out of immediate
self-interest. This holds true for other functions such as crop
productivity, arable land, transportation, and so on. The assumptions
for these fluctuating values are fixed in the Limits to Growth model, but in
reality the assumptions themselves have coevolutionary mechanisms that
flux over time. The point is that the learning must be modeled as an
internal loop residing within the model. In addition to the values, the
very structure of the assumptions in the simulation -- or in any simulation
that hopes to anticipate a vivisystem -- must be adaptable.
World averages. The Limits to Growth model treats the world as uniformly
polluted, uniformly populated, and uniformly endowed with resources.
This homogenization simplifies and uncomplicates the world enough to
model it sanely. But in the end it undermines the purpose of the model
because the locality and regionalism of the planet are some of its most
striking and important features. Furthermore, the hierarchy of dynamics
that arise out of differing local dynamics provides some of the key
phenomena of Earth. The Limits to Growth modelers recognize the power of
subloops -- which is, in fact, the chief virtue of Forrester's system
dynamics underpinning the software. But the model entirely ignores the
paramount subloop of a world: geography. A planetary model without
geography is...not the world. Not only must learning be distributed
throughout a simulation; all functions must be. It is the failure to
mirror the distributed nature -- the swarm nature -- of life on Earth that is
this model's greatest failure.
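One way to see the cost of averaging is to run the same nonlinear
growth rule on two hypothetical regions with different parameters and
compare the summed result against a single run driven by the averaged
parameters. The regions and numbers below are invented purely to
illustrate the aggregation error; they come from no real data set.

    # Nonlinear dynamics do not commute with averaging. Two hypothetical
    # regions with different growth rates and carrying capacities, versus
    # one homogenized "world" run on pooled, averaged parameters.

    def logistic_growth(pop, rate, capacity, years=50):
        for _ in range(years):
            pop += rate * pop * (1.0 - pop / capacity)
        return pop

    region_a = logistic_growth(pop=1.0, rate=0.08, capacity=10.0)
    region_b = logistic_growth(pop=5.0, rate=0.02, capacity=100.0)

    # One averaged "world": pooled population, summed capacity, mean rate.
    world = logistic_growth(pop=6.0, rate=0.05, capacity=110.0)

    print(region_a + region_b)   # sum of the two regional runs
    print(world)                 # the homogenized run; the totals diverge

The two totals disagree because the regional feedbacks saturate at
different times; averaging them away erases exactly the local structure
that matters.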
The inability to model open-ended growth of any kind. When I asked Dana
Meadows what happened when they ran the model from 1600, or even 1800,
she replied that they had never tried it. I found that astonishing, since
backcasting is a standard reality test for forecasting models. In this
case, the modelers suspected that the simulation would not cohere. That
should be a warning. Since 1600 the world has experienced long-term
growth. If a world model is reliable, it should be able to simulate four
centuries of growth -- at least as history. Ultimately, if we are to
believe Limits to Growth has anything to say about future growth, the
simulation must, in principle, be capable of generating long-term growth
through several periods of transitions. As it is, all that Limits to
Growth can prove is that it can simulate one century of collapse.
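Backcasting here means nothing fancier than starting the model at a
historical date and scoring it against the record we already have. A
generic sketch of such a test, using a placeholder model and
placeholder "observed" figures rather than anything from World3, might
look like this:

    # A generic backcasting check (not the World3 code): initialize at a
    # historical year, run forward to the present, and measure the error
    # against known history. Model and "observed" series are placeholders.

    def toy_model(pop, years, growth=0.009):
        """Stand-in world model: simple compounding growth."""
        series = []
        for _ in range(years):
            pop *= 1.0 + growth
            series.append(pop)
        return series

    # Placeholder stand-in for an observed series (say, population in
    # billions at decade intervals); a real test would use real data.
    observed = [1.7, 1.8, 1.9, 2.1, 2.3, 2.6, 3.0, 3.7, 4.5, 5.3]

    simulated = toy_model(pop=1.6, years=100)
    decades = simulated[9::10]        # sample every tenth simulated year

    error = sum(abs(s - o) for s, o in zip(decades, observed)) / len(observed)
    print(f"mean absolute backcast error: {error:.2f}")

A model that cannot pass such a test as history has a weak claim on the
future.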
"Our model is astonishingly 'robust,' " Meadows told me. "You have to do
all kinds of things to keep it from collapsing.... Always the same
behavior and basic dynamic emerges: overshoot and collapse." This is a
pretty dangerous model to rely on for predictions of society's future.
All the initial parameters of the system quickly converge upon
termination, while history tells us human society is a system that
displays marvelous continuing expansion.
Two years ago I spent an evening talking to programmer Ken Karakotsios,
who was building a tiny world of ecology and evolution. His world (which
eventually became the game of SimLife) provides tools to god-players who
can then create up to 32 virtual species of animals and 32 species of
plants. The artificial animals and plants interact, compete, prey upon
each other and evolve. "What's the longest you've had your world
running?" I asked him. "Oh," he moans, "only a day. You know it's really
hard to keep one of these complex worlds going. They do like to
collapse."
The scenarios in Limits to Growth collapse because that's what the
Limits to Growth simulation is good at. Nearly every initial condition
in the model leads either to apocalypse or (very rarely) to
stability -- but never to a new structure -- because the model is inherently
incapable of generating open-ended growth. The Limits to Growth cannot
mimic the emergence of the industrial revolution from the agrarian age.
"Nor," admits Meadows, "can it take the world from the Industrial
Revolution to whatever follows next beyond that." She explains, "What
the model shows is that the logic of the industrial revolution runs into
an inevitable wall of limits. The model does two things: either it
begins to collapse, or we intervene as modelers and make changes to save
it."
ME: "Wouldn't a better world model possess the dynamics to transform
itself to the next level on its own?"
DANA MEADOWS: "It strikes me as a little bit fatalistic to think that
this is designed in the system to happen and we just lean back and watch
it. Instead we modeled ourselves into it. Human intelligence comes in,
perceives the whole situation, and makes changes in the human societal
structure. So this reflects our mental picture of how the system
transcends to the next stage -- with intelligence that reaches in and
restructures the system."
That's Save-The-World mode, as well as inadequate modeling of how an
ever-complexifying world works. Meadows is right that intelligence
reaches in to human culture and restructures it. But that isn't done
just by modelers, and it doesn't happen only at cultural thresholds.
This restructuring happens in six billion minds around the world, every
day, in every era. Human culture is a decentralized evolutionary system
if there ever was one. Any predictive model that fails to incorporate
this distributed ongoing daily billion-headed microrevolution is doomed
to collapse, as civilization itself would without it.
Twenty years later, the Limits to Growth simulation needs not a mere
update, but a total redo. The best use for it is to stand as a challenge
and a departure point to make a better model. A real predictive model of
a planetary society would:
1) spin significantly varied scenarios,
2) start with more flexible and informed assumptions,
3) incorporate distributed learning,
4) contain local and regional variation, and
5) if possible, demonstrate increasing complexification.
I do not focus on the Limits to Growth world model because I want to
pick on its potent political implications (the first version did, after
all, inspire a generation of antigrowth activists). Rather, the model's
inadequacies precisely parallel several core points I hope to make in
this book. In bravely attempting to simulate an extremely complex
adapting system (the human infrastructure of living on Earth), in order
to feed-forward a scenario of this system into the future, the
Forrester/Meadows model highlights not the limits to growth but the
limits of certain simulations.
The dream of Meadows is the same as that of Forrester, the U.S. Command
Central wargamers, Farmer and the Prediction Company, and myself, for
that matter: to create a system (a machine) that sufficiently mirrors
the real evolving world so that this miniature can run faster than real
life and thus project its results into the future. We'd like prediction
machinery not for a sense of predestiny but for guidance. And ideally it
must be a Kauffman or von Neumann machine that can create things more
complex than itself.
To do that, the model must possess a "requisite complexity." This is a
term coined in the 1950s by the cybernetician Ross Ashby, who built some
of the first electronically adaptive models. Every model must distill a
myriad of fine details about the real world into a compressed representation;
one of the most important traits it must condense is reality's
complexity. Ashby concluded from his own experiments in making minimal
models out of vacuum tubes that if a model simplifies the complexity too
steeply, it misses the mark. A simulation's complexity has to be within
the ballpark of the complexity of the thing modeled; otherwise the model
can't keep up with its zigs and zags. Another
cybernetician, Gerald Weinberg, supplies a fine metaphor for requisite
complexity in his book On the Design of Stable Systems. Imagine,
Weinberg suggests, a guided missile aimed at an enemy jet. The missile
does not have to be a jet itself, but it must embody a requisite degree
of complex flight behavior to parallel the behavior of the jet. If the
missile is not at least as fast and aerodynamically nimble as the
targeted jetfighter, then it cannot hit its target.