title: Thinking in Systems
tags: [Systems theory, Science, Philosophy]
author: Donella H. Meadows
date: '2008'


Thinking in Systems

Introduction

"Now once again. What made the Slinky bounce up and down?"

The answer clearly lies within the Slinky itself. The hands that manipulate it suppress or release some behavior that is latent within the structure of the spring.

That is a central insight of systems theory.

Systems theory seeks to understand the relationship between structure and behavior. It finds the root causes of problems and enables envisioning new opportunities.

A system is a set of things interconnected in such a way that they produce their own pattern of behavior over time. The system interacts with external forces, but the system's response to the forces is characteristic of itself.

We have an intuitive understanding of systems ("don't put all your eggs in one basket," "the rich get richer and the poor get poorer"), but that understanding has historically been displaced by simplified analyses (e.g. popular understandings of addiction, the economy, the immune system, etc.). We have a tendency to identify the "problem" as "external": the virus, the product, the pill, the Jews... So many of our problems and solutions seem to be reductive. Why?

Diagrams are used heavily in this book because words and sentences are linear but systems happen all at once. They are connected in many directions simultaneously.

Archetypes or system traps (and opportunities) are common structures that produce characteristic behaviors in systems. Think "tragedy of the commons," "addiction," "eroding goals," etc.

I don’t think the systems way of seeing is better than the reductionist way of thinking. I think it’s complementary, and therefore revealing. You can see some things through the lens of the human eye, other things through the lens of a microscope, others through the lens of a telescope, and still others through the lens of systems theory. Everything seen through each kind of lens is actually there. Each way of seeing allows our knowledge of the wondrous world in which we live to become a little more complete.

The behavior of a system cannot be known just by knowing the elements of which the system is made.

Chapter 1: The Basics

A system must consist of three kinds of things: elements, interconnections, and a function or purpose. Should it worry us that the last of those three is normative? (A normative question itself.) Maybe not. The function of the digestive system is to break down food into its basic nutrients and to transfer those nutrients into the bloodstream (another system), while discarding unusable wastes. Maybe "purpose" is the scary word. "Function" describes a system's natural activity. It is itself an abstraction, an interpretation of the activity of the system. Systems might have multiple functions, and one may be seen as the primary function. What is the "purpose" of capitalism? To produce surplus value? To create and allocate use values?

Systems can be nested (embedded in other systems). An animal is a system in the system of the forest in the system of the earth and so on.

There are "non-system things" like a pile of sand placed by happenstance on the side of the road. Adding sand or taking away sand -- you still just have sand. But what if you add enough sand so that the roadway collapses or a sandstorm occurs? Can a system be "created" out of a "non-system"? (Yes, according to the next paragraph.) Quantity becomes quality.

Systems can decay. Living creatures die and lose their "system-ness," although they still belong to the broader system of the food chain.

Systems can change, adapt, respond to events, seek goals, mend injuries, and attend to their own survival in lifelike ways, although they may contain or consist of nonliving things. Systems can be self-organizing, and often are self-repairing over at least some range of disruptions. They are resilient, and many of them are evolutionary. Out of one system other completely new, never-before-imagined systems can arise.

Elements do not have to be physical things. Intangibles are also elements of a system. In a university, school pride and academic prowess are two intangibles that can be very important elements of the system.

Can't two analyses identify completely different elements that make up the system? As if the lines of demarcation between "elements" are quite fuzzy. What makes the commodity the cell of bourgeois society rather than money or the individual worker or capitalist or seller or buyer or water or air or language or whatever else?

Bunch of stuff vs. system: Can you identify parts? Do the parts affect each other? Do the parts together produce an effect that is different from the effect of each part on its own? Does the effect, the behavior over time, persist in a variety of circumstances?

Interconnections are the relationships that hold the elements together. The interconnections of the tree system are the physical flows and chemical reactions that govern the tree's metabolic processes. If the roots find dry soil, the loss of water pressure signals the leaves to close their pores to preserve water. Interconnections in systems often operate through the flow of information, i.e. signals.

No one understands all the relationships that allow a tree to do what it does. That lack of knowledge is not surprising. It’s easier to learn about a system’s elements than about its interconnections.

Is it? Perhaps it's easy to select some set of elements. But the correct set?

The best way to deduce the system’s purpose is to watch for a while to see how the system behaves. [...] Purposes are deduced from behavior, not from rhetoric or stated goals.

The word function is generally used for a nonhuman system, the word purpose for a human one, but the distinction is not absolute, since so many systems have both human and nonhuman elements.

One purpose of a national economy is, judging from its behavior, to keep growing larger. An important function of almost every system is to ensure its own perpetuation.

The purpose of a university is to discover and preserve knowledge and pass it on to new generations. Within the university, the purpose of a student may be to get good grades, the purpose of a professor may be to get tenure, the purpose of an administrator may be to balance the budget. Any of those sub-purposes could come into conflict with the overall purpose—the student could cheat, the professor could ignore the students in order to publish papers, the administrator could balance the budget by firing professors. Keeping sub-purposes and overall system purposes in harmony is an essential function of successful systems. I’ll get back to this point later when we come to hierarchies.

Emergent behaviors. Class conflict.

Replacing elements of a system often has very little effect. Your body replaces most of its cells every few weeks, but the system of your body remains basically unscathed. A corporation is replaced with a cooperative. The cooperative remains exceptional or gets out-competed. The interconnections and purpose of the system remains.

Is function not just an emergent behavior of the elements and their interconnections? Where does the "function" come from? The function is the most crucial determinant of the system's behavior, but how could it be any other way?

If the interconnections change, the system may be greatly altered. It may even become unrecognizable, even though the same players are on the team. Change the rules from those of football to those of basketball, and you’ve got, as they say, a whole new ball game. If you change the interconnections in the tree—say that instead of taking in carbon dioxide and emitting oxygen, it does the reverse—it would no longer be a tree.

The latter example seems weak. Wouldn't the reversal of the carbon-dioxide-to-oxygen property require completely different elements, i.e. new sub-systems? In fact, all of the replaced cells in the body are qualitatively the same as the old cells. They are of the same sub-system. If you were to replace some of the elements in the body system with dirt, for example, the system would indeed drastically change. The interconnections would no longer work if the elements were swapped out for qualitatively different ones. Dirt does not have the same capacity to be interconnected with other sorts of materials. In this way, interconnections depend on the elements as much as the other way around.

To ask whether elements, interconnections, or purposes are most important in a system is to ask an unsystemic question. All are essential. All interact. All have their roles. [...] The elements, the parts of systems we are most likely to notice, are often (not always) least important in defining the unique characteristics of the system—unless changing an element also results in changing relationships or purpose.

[...] because land, factories, and people are long-lived, slowly changing, physical elements of the system, there is a limit to the rate at which any leader can turn the direction of a nation.

A stock is the foundation of any system. Stocks are the elements of the system that you can see, feel, count, or measure at any given time. A system stock is just what it sounds like: a store, a quantity, an accumulation of material or information that has built up over time.

Stocks change over time through the actions of a flow. Stocks are a "memory" of the history of changing flows within a system. A present snapshot of behavioral history.

                                                     +------------------+  lumber sales   +------+
                         +-----------+ logging       | lumber inventory | --------------> | SINK |
+--------+  tree growth  |  wood in  |-------------> +------------------+                 +------+
| SOURCE |-------------->|  living   | tree deaths
+--------+               |  trees    |-------------> +------+
                         +-----------+               | SINK |
                                                     +------+

The arrows in the diagram above are the flows. The sources and sinks are abstracted away, inlets and outlets connected to endpoints of other systems (or cyclically back to the system itself?). The other nodes are stocks.
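
A minimal stock-and-flow simulation of the diagram, as a sketch (the function name and every rate and initial value here are invented for illustration, not taken from the book):

    # Hypothetical stock-and-flow model of the diagram above. Each stock
    # changes only through its inflows and outflows.
    def simulate(years=50, growth=100.0, death_rate=0.02,
                 logging_rate=0.05, sales_rate=0.5):
        wood_in_trees = 1000.0    # stock: wood in living trees
        lumber_inventory = 50.0   # stock: lumber inventory
        for year in range(years):
            logging = logging_rate * wood_in_trees  # flow: trees -> inventory
            deaths = death_rate * wood_in_trees     # flow: trees -> sink
            sales = sales_rate * lumber_inventory   # flow: inventory -> sink
            wood_in_trees += growth - logging - deaths  # growth flows in from the source
            lumber_inventory += logging - sales
            print(f"year {year:2d}: trees = {wood_in_trees:7.1f}, "
                  f"inventory = {lumber_inventory:6.1f}")

    simulate()

With constant fractional rates, both stocks climb until inflows equal outflows, which is exactly the dynamic equilibrium described next.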

Dynamic equilibrium occurs when stocks stay at a constant level, when inflows are equal to outflows. A bathtub is in dynamic equilibrium when the faucet is filling the tub at the same rate that water is being drained.

All models, whether mental models or mathematical models, are simplifications of the real world.

Evaporation of the bathtub's water was abstracted away.

A stock can be increased by decreasing its outflow rate as well as by increasing its inflow rate. There’s more than one way to fill a bathtub!

A stock takes time to change, because flows take time to flow. [...] Stocks generally change slowly, even when the flows into or out of them change suddenly. Therefore, stocks act as delays or buffers or shock absorbers in systems. [...] Once an economy has a lot of oil-burning furnaces and automobile engines, it cannot change quickly to furnaces and engines that burn a different fuel, even if the price of oil suddenly changes. [...] A population that has learned many skills doesn’t forget them immediately.

Stocks allow inflows and outflows to be decoupled and to be independent and temporarily out of balance with each other. [...] Human beings have invented hundreds of stock-maintaining mechanisms to make inflows and outflows independent and stable. Reservoirs enable residents and farmers downriver to live without constantly adjusting their lives and work to a river’s varying flow, especially its droughts and floods. Banks enable you temporarily to earn money at a rate different from how you spend. Inventories of products along a chain from distributors to wholesalers to retailers allow production to proceed smoothly although customer demand varies, and allow customer demand to be filled even though production rates vary.

Those decisions add up to the ebbs and flows, successes and problems, of all sorts of systems. Systems thinkers see the world as a collection of stocks along with the mechanisms for regulating the levels in the stocks by manipulating flows. [...] That means system thinkers see the world as a collection of “feedback processes.”

A feedback loop is a mechanism creating a consistent behavior in which outputs are rerouted back into a system as inputs. Interest rates are an instance of a feedback loop. Your bank statement alerting you to work more or less, growing or shrinking your checking account stock to the acceptable amount, is a feedback loop. This is an example of a stabilizing feedback loop. "Balancing feedback loops are goal-seeking or stability-seeking."

Whoever or whatever is monitoring the stock’s level begins a corrective process, adjusting rates of inflow or outflow (or both) and so changing the stock’s level. The stock level feeds back through a chain of signals and actions to control itself.

The function of the system that cools a hot cup of coffee (or an iced cup of coffee) to room temperature is to bring the discrepancy between the two temperatures to zero, no matter the direction of the discrepancy.
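
A sketch of that loop as a difference equation (the cooling constant k is an arbitrary assumption): the corrective flow is proportional to the discrepancy, so it works in either direction.

    # Balancing loop for the coffee cup: flow proportional to the gap
    # between cup and room temperature, correcting in either direction.
    def settle(temp, room=20.0, k=0.2, steps=30):
        for _ in range(steps):
            discrepancy = room - temp
            temp += k * discrepancy  # corrective flow shrinks the gap
        return temp

    print(settle(90.0))  # hot coffee cools toward 20.0
    print(settle(5.0))   # iced coffee warms toward 20.0

(A k above 1 would overshoot room temperature on every step, and above 2 it would diverge: a first taste of how loop strength changes behavior.)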

Feedback mechanisms are not immune to failure. They can be too weak or delayed or resource-constrained or otherwise ineffective (e.g. overcome by other forces) to achieve balance of the system.

The second kind of feedback loop is amplifying, reinforcing, self-multiplying, snowballing—a vicious or virtuous circle that can cause healthy growth or runaway destruction. It is called a reinforcing feedback loop [...].

These can be vicious or virtuous in effect; in the jargon, reinforcing loops are "positive feedback" either way, as opposed to balancing loops ("negative feedback").

My brother pushes me, so I push him back, so he pushes me harder, so I push him back harder... Until the system breaks down because of its context and we're both grounded.

Reinforcing loops are found wherever a system element has the ability to reproduce itself or to grow as a constant fraction of itself. [...] the more machines and factories (collectively called “capital”) you have, the more goods and services (“output”) you can produce. The more output you can produce, the more you can invest in new machines and factories. The more you make, the more capacity you have to make even more. This reinforcing feedback loop is the central engine of growth in an economy.
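
A sketch of that engine (the output and reinvestment fractions are invented): capital grows as a constant fraction of itself each period, i.e. exponentially.

    # Reinforcing loop of capital accumulation: more capital -> more
    # output -> more investment -> more capital. Fractions are invented.
    capital = 100.0
    for year in range(10):
        output = 0.5 * capital     # goods and services produced from capital
        investment = 0.2 * output  # fraction of output reinvested
        capital += investment      # net 10% compound growth per year
        print(f"year {year}: capital = {capital:6.1f}")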

Watch out! If you see feedback loops everywhere, you’re already in danger of becoming a systems thinker! Instead of seeing only how A causes B, you’ll begin to wonder how B may also influence A—and how A might reinforce or reverse itself. When you hear in the nightly news that the Federal Reserve Bank has done something to control the economy, you’ll also see that the economy must have done something to affect the Federal Reserve Bank. When someone tells you that population growth causes poverty, you’ll ask yourself how poverty may cause population growth.

Systems are not limited to only one instance of one kind of feedback loop at a time. Real systems are full of many feedback loops at once, so simple exponential growth or equilibrium or logarithmic die-off are not the common case.

Chapter 2: A Brief Visit to the Systems Zoo

For [some systems with the structure of competing balancing loops], the fact that the stock goes on changing while you’re trying to control it can create real problems. For example, suppose you’re trying to maintain a store inventory at a certain level. You can’t instantly order new stock to make up an immediately apparent shortfall.

The information delivered by a feedback loop—even nonphysical feedback—can only affect future behavior; it can’t deliver a signal fast enough to correct behavior that drove the current feedback. Even nonphysical information takes time to feed back into the system.

Many economic models make a mistake in this matter by assuming that consumption or production can respond immediately, say, to a change in price. That’s one of the reasons why real economies tend not to behave exactly like many economic models.

If you do not account for all the flows in a system, you will not be able to predict its outcome.

If shifting dominance of feedback loops occurs (in a system with more than one loop), the behavior of the stock over time will change.

At first, when fertility is higher than mortality, the reinforcing growth loop dominates the system and the resulting behavior is exponential growth. But that loop is gradually weakened as fertility falls. Finally, it exactly equals the strength of the balancing loop of mortality. At that point neither loop dominates, and we have dynamic equilibrium.

Complex behaviors of systems often arise as the relative strengths of feedback loops shift, causing first one loop and then another to dominate behavior.
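
A sketch of that shift (all rates invented): fertility starts above mortality and slowly erodes toward it, so the reinforcing birth loop weakens until the two loops exactly balance.

    # Shifting dominance: exponential growth while fertility > mortality,
    # then dynamic equilibrium once the loops are equal in strength.
    population = 1000.0
    fertility, mortality = 0.06, 0.02
    for year in range(120):
        population += (fertility - mortality) * population
        fertility = max(mortality, fertility - 0.0005)  # fertility slowly falls
        if year % 20 == 0:
            print(f"year {year:3d}: population = {population:12.0f}")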

Dynamic systems studies usually are not designed to predict what will happen. Rather, they’re designed to explore what would happen, if a number of driving factors unfold in a range of different ways.

Isn't your ability to predict the future dependent on how accurate your systems model is, and how accurate your models are of those systems that are interacting with the main system?

One of the central insights of systems theory, as central as the observation that systems largely cause their own behavior, is that systems with similar feedback structures produce similar dynamic behaviors, even if the outward appearance of these systems is completely dissimilar.

Delays in a balancing feedback loop often cause oscillations. The stock will go up and down around a gravitational point. If the delays are not "right," the oscillations can spiral out of control. Imagine if a speculator were too reactive to the market. Like the Irish potato blight situation.

Changing the length of a delay may (or may not, depending on the type of delay and the relative lengths of other delays) make a large change in the behavior of a system.
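
A sketch of delay-induced oscillation (every number invented): a store sells 20 units per step and orders to close the gap to a 200-unit target, but deliveries arrive three steps after being ordered. The stock overshoots, undershoots, and gradually settles around the target instead of converging smoothly; lengthen the pipeline and the swings widen.

    from collections import deque

    # Balancing loop with a delivery delay: orders respond to the current
    # gap, but arrive three steps later, so the correction keeps missing.
    inventory, target, sales = 100.0, 200.0, 20.0
    pipeline = deque([0.0, 0.0, 0.0])  # orders in transit: 3-step delay
    for step in range(40):
        inventory += pipeline.popleft() - sales  # delivery arrives, sales drain
        order = max(0.0, sales + 0.5 * (target - inventory))
        pipeline.append(order)
        print(f"step {step:2d}: inventory = {inventory:6.1f}")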

The anarchy of the market makes it impossible to control all of these levers entirely. Conglomerates might form, and so production efficiency increases under monopoly capitalism as the systems come under more highly centralized control.

Up until now, all of the systems spoken about have been unconstrained by the environment.

In physical, exponentially growing systems, there must be at least one reinforcing loop driving the growth and at least one balancing loop constraining the growth, because no physical system can grow forever in a finite environment.

  • Nonrenewable resources are stock-limited. Example: Oil reserves. (Well... oil does conceivably renew, just very very slowly.)
  • Renewable resources are flow-limited. If they are extracted faster than they renew, they may eventually be driven below a threshold and become extinct (effectively nonrenewable). Example: overfishing. It is possible with renewable resources to (see the sketch after this list):
    • Overshoot and adjust to a sustainable equilibrium (fishing restricted by laws, balancing feedback loop of capital).
    • Overshoot beyond the equilibrium followed by oscillation around it (fish are able to grow in population in protected environment).
    • Overshoot followed by collapse of the resource and thus the system.
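
A deliberately bare sketch of the flow limit (logistic regrowth; every parameter is invented): extraction below the renewal capacity settles at a sustainable equilibrium, extraction above it collapses the stock. The oscillating middle case needs a response delay, as in the inventory sketch above.

    # Renewable resource with flow-limited regrowth and a fixed harvest
    # effort. Effort below the regrowth rate finds an equilibrium; effort
    # above it drives the stock toward zero.
    def fishery(effort, years=80):
        fish = 800.0  # stock; carrying capacity 1000
        for _ in range(years):
            regrowth = 0.3 * fish * (1 - fish / 1000.0)  # renewal is flow-limited
            catch = effort * fish
            fish = max(0.0, fish + regrowth - catch)
        return fish

    for effort in (0.15, 0.25, 0.40):
        print(f"effort {effort:.2f}: stock after 80 years = {fishery(effort):.1f}")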

Chapter 3: Why Systems Work So Well

  • Resilience
  • Self-organization
  • Hierarchy

Resilience is the ability of something to "bounce or spring back into shape, position, etc., after being pressed or stretched." Brittleness is the opposite. Resilience arises from a structure of many feedback loops that can work in different ways to restore a system even after a large perturbation. "Meta-resilience" is granted by feedback loops that can restore feedback loops. "Meta-meta-resilience" comes from feedback loops that can evolve more complex, restorative, meta-resilient feedback loops. And so on. Systems that can do this are "self-organizing." The human being is a great example of a meta-meta-resilient system. And yet, there are always limits to resilience. The human always dies. Resilience is not the same thing as stasis. Short-term oscillations, periodic instability, or long waves may all be the "normal" that resilience acts to restore. Static systems can also be non-resilient, persisting only because of convenient conditions.

Just-in-time deliveries of products to retailers or parts to manufacturers have reduced inventory instabilities and brought down costs in many industries. The just-in-time model also has made the production system more vulnerable, however, to perturbations in fuel supply, traffic flow, computer breakdown, labor availability, and other possible glitches.

A resilient system has a big plateau, a lot of space over which it can wander, with gentle, elastic walls that will bounce it back, if it comes near a dangerous edge. As a system loses its resilience, its plateau shrinks, and its protective walls become lower and more rigid, until the system is operating on a knife edge, likely to fall off in one direction or another whenever it makes a move.

Reminds me of the fragility of imperialist technological monopolies where research labs in the core compete on extremely thin margins.

Awareness of resilience enables one to see many ways to enhance a system's own restorative powers: "Give a man a fish, he'll eat for a day. Teach a man to fish..."

A system's ability to make its own structure more complex is called self-organization. A seed sprouts, a crystal forms (fractal geometry), a baby learns to speak, a class struggles for its interests. Emergent properties.

Science knows now that self-organizing systems can arise from simple rules. Science, itself a self-organizing system, likes to think that all the complexity of the world must arise, ultimately, from simple rules. Whether that actually happens is something that science does not yet know.

Self-organizing systems often generate hierarchy. Smaller subsystems serve the needs of larger systems (the cell serves the function of the liver serves the function of the human being). The larger system coordinates and enhances the functioning of the subsystems (the human being eats food to nourish the cell). This sort of coordination creates stable, resilient, and efficient structures. Hierarchy reduces the amount of information that any part of the system has to keep track of. Hierarchy delegates functions. Liver cells are in closer coordination with each other than with those cells of the heart.

Hierarchies are partially decomposable. Subsystems can often be extracted and analyzed on their own. Their important interrelationships should not be forgotten. Indeed liver disease can possibly be treated at the level of the liver itself. But what if the problem is at the cellular level? What if you're diseased because of the food you're consuming? Understanding all of the systems and their interrelations is necessary to make the best moves.

We must also be diligent in understanding the evolution of systems over time. The increasing globalization and specialization in the value chain completely disrupts a historically limited view of capitalism.

When a subsystem’s goals dominate at the expense of the total system’s goals, the resulting behavior is called suboptimization.

Cancer, "poor incentive structures" in economics, etc.

It's strange that "goals" are moral interpretations and this is being glossed over.

Hierarchical systems evolve from the bottom up. The purpose of the upper layers of the hierarchy is to serve the purposes of the lower layers.

Wait what about the other way around? The purpose of the liver is served by the cell. You just said that like 4 pages ago...

Chapter 4: Why Systems Surprise Us

All of our knowledge, all of our language, etc., are models. Our models can be insanely complex and congruent with the real world, but they are never exactly representative of it. They often fall short. So we make mistakes, and we are regularly surprised by the real world.

The bad news, or the good news, depending on your need to control the world and your willingness to be delighted by its surprises, is that even if you do understand all these system characteristics, you may be surprised less often, but you will still be surprised.

The world seems to present itself as a series of events. Events (market crashes, natural disasters, presidential elections) are the outputs, moment by moment, from the black box of the world system. We are less likely to be surprised by the events if we muster up some predictive power using models. In this way, events accumulate into dynamic patterns of behavior. The rise and fall of the tide, the regular pattern of elections occurring every four years, the destruction of rainforests occurring at an ever-increasing rate, the steady depletion of oil reserves. These behaviors are described as performance over time: growth, stagnation, decline, oscillation, randomness, and evolution.

Long term behavior provides clues to the underlying system structure. The question a systems thinker asks is not what is happening, but why? (What is the story of development?)

In fact, much analysis in the world goes no deeper than events. Listen to every night’s explanation of why the stock market did what it did. Stocks went up (down) because the U.S. dollar fell (rose), or the prime interest rate rose (fell), or the Democrats won (lost), or one country invaded another (or didn’t). Event-event analysis.

These explanations give you no ability to predict what will happen tomorrow. They give you no ability to change the behavior of the system— to make the stock market less volatile or a more reliable indicator of the health of corporations or a better vehicle to encourage investment, for instance.

Most economic analysis goes one level deeper, to behavior over time. Econometric models strive to find the statistical links among past trends in income, savings, investment, government spending, interest rates, output, or whatever, often in complicated equations.

These behavior-based models are more useful than event-based ones, but they still have fundamental problems. First, they typically overemphasize system flows and underemphasize stocks. [...] Second, and more seriously, in trying to find statistical links that relate flows to each other, econometricians are searching for something that does not exist. There’s no reason to expect any flow to bear a stable relationship to any other flow. Flows go up and down, on and off, in all sorts of combinations, in response to stocks, not to other flows.

Econometric models are good at predicting short-term economic behavior but bad at predicting long-term behavior and terrible at advising how to improve economic performance. This is because the system's structure changes over time. Econometricians could do well predicting the oscillations in the temperature of a thermostat-regulated room, until someone comes along and opens a window or forgets to refill the heater's oil.

And that’s one reason why systems of all kinds surprise us. We are too fascinated by the events they generate. We pay too little attention to their history. And we are insufficiently skilled at seeing in their history clues to the structures from which behavior and events flow.

Linear relationships between two elements in a system are easy to predict and understand. But the world is filled with nonlinear relationships as a result of complex systems. "Too much of a good thing" is an example of an adage that resulted from a surprise after a failure of linear thinking. Nonlinearities can change the relative strengths of feedback loops and effectively "flip" a system from one mode of behavior to another.

The budworm/spruce/fir system oscillates over decades, but it is ecologically stable within bounds. It can go on forever. The main effect of the budworm is to allow tree species other than fir to persist. But in this case what is ecologically stable is economically unstable. In eastern Canada, the economy is almost completely dependent on the logging industry, which is dependent on a steady supply of fir and spruce.

"Side-effects" are simply "effects I hadn't foreseen or didn't want to think about." The sources and sinks in the system diagrams in this book (the "clouds") are boundaries of the system diagram. But they are not boundaries in real life, since in the real world there are no real boundaries between systems. We may abstract from e.g. the source of raw materials in production for the purpose of analysis of the reproduction of a single capital. But if there is a disruption in the flow from raw materials manufacturer to the capital, the system model breaks down.

There are no separate systems. The world is a continuum. Where to draw a boundary around a system depends on the purpose of the discussion—the questions we want to ask.

Systems analysts often fall into the opposite trap: making boundaries too large. They have a habit of producing diagrams that cover several pages with small print and many arrows connecting everything with everything.

Ideally, we would have the mental flexibility to find the appropriate boundary for thinking about each new problem. We are rarely that flexible. We get attached to the boundaries our minds happen to be accustomed to.

It’s a great art to remember that boundaries are of our own making, and that they can and should be reconsidered for each new discussion, problem, or purpose. It’s a challenge to stay creative enough to drop the boundaries that worked for the last problem and to find the most appropriate set of boundaries for the next question. It’s also a necessity, if problems are to be solved well.

On limiting factors:

At any given time, the input that is most important to a system is the one that is most limiting.

Rich countries transfer capital or technology to poor ones and wonder why the economies of the receiving countries still don’t develop, never thinking that capital or technology may not be the most limiting factors.

(Or maybe they don't want them to be developed at all.)

Limits can shift over time. Mao's "primary contradiction?"

Any physical entity with multiple inputs and outputs—a population, a production process, an economy—is surrounded by layers of limits. As the system develops, it interacts with and affects its own limits. The growing entity and its limited environment together form a coevolving dynamic system.

There always will be limits to growth. They can be self-imposed. If they aren’t, they will be system-imposed.

Understanding delays helps one understand why Mikhail Gorbachev could transform the information system of the Soviet Union virtually overnight, but not the physical economy. (That takes decades.) It helps one see why the absorption of East Germany by West Germany produced more hardship over a longer time than the politicians foresaw. Because of long delays in building new power plants, the electricity industry is plagued with cycles of overcapacity and then undercapacity leading to brownouts. Because of decades-long delays as the earth’s oceans respond to warmer temperatures, human fossil-fuel emissions have already induced changes in climate that will not be fully revealed for a generation or two.

When there are long delays in feedback loops, some sort of foresight is essential. To act only when a problem becomes obvious is to miss an important opportunity to solve the problem.

On the "invisible foot," or bounded rationality:

Unfortunately, the world presents us with multiple examples of people acting rationally in their short-term best interests and producing aggregate results that no one likes.

Bounded rationality means that people make quite reasonable decisions based on the information they have. But they don’t have perfect information, especially about more distant parts of the system. Fishermen don’t know how many fish there are, much less how many fish will be caught by other fishermen that same day.

On perspective (class interest):

If you become a manager, you probably will stop seeing labor as a deserving partner in production, and start seeing it as a cost to be minimized. If you become a financier, you probably will overinvest during booms and underinvest during busts, along with all the other financiers. If you become very poor, you will see the short-term rationality, the hope, the opportunity, the necessity of having many children. If you are now a fisherman with a mortgage on your boat, a family to support, and imperfect knowledge of the state of the fish population, you will overfish.

Mention of Stanford prison experiment.

Blaming the individual rarely helps create a more desirable outcome.

Some systems are structured to function well despite bounded rationality. The right feedback gets to the right place at the right time. Under ordinary circumstances, your liver gets just the information it needs to do its job. In undisturbed ecosystems and traditional cultures, the average individual, species, or population, left to its own devices, behaves in ways that serve and stabilize the whole. These systems and others are self-regulatory. They do not cause problems. We don’t have government agencies and dozens of failed policies about them.

Since Adam Smith, it has been widely believed that the free, competitive market is one of these properly structured self-regulating systems. In some ways, it is. In other ways, obvious to anyone who is willing to look, it isn’t.

Chapter 5: System Traps... and Opportunities

Understanding archetypal problem-generating structures is not enough. Putting up with them is impossible. They need to be changed. The destruction they cause is often blamed on particular actors or events, although it is actually a consequence of system structure.

These archetypes are referred to as "traps" because they often inspire idealist "solutions" ("bad apple" thinking, "tinkering at the margins," imagining and willing into existence a more favorable sequence of events). Traps can be escaped or avoided by prefiguring them and responding to them by altering the system (reformulating goals, restructuring existing feedback loops, adding new feedback loops, etc.). This sort of acting turns a trap into an opportunity. Mao:

We must learn to look at problems all-sidedly, seeing the reverse as well as the obverse side of things. In given conditions, a bad thing can lead to good results and a good thing to bad results.

Heightening contradictions between forces:

In fact, this system structure can operate in a ratchet mode: Intensification of anyone’s effort leads to intensification of everyone else’s. It’s hard to reduce intensification. It takes a lot of mutual trust to say, OK, why don’t we all just back off for a while?

The alternative to overpowering policy resistance is so counterintuitive that it’s usually unthinkable. Let go. Give up ineffective policies. Let the resources and energy spent on both enforcing and resisting be used for more constructive purposes. You won’t get your way with the system, but it won’t go as far in a bad direction as you think, because much of the action you were trying to correct was in response to your own action. If you calm down, those who are pulling against you will calm down too. This is what happened in 1933 when Prohibition ended in the United States; the alcohol-driven chaos also largely ended.

Backing off from such brute-force policies yields the opportunity to understand and analyze the system as it actually operates. (Although it seems to me that heightening the conflict between forces exposes the individual forces quite distinctly. When is it right to push and when is it right to wait and see?)

On creating incentives to coordinate different interests towards a unified goal:

The most effective way of dealing with policy resistance is to find a way of aligning the various goals of the subsystems, usually by providing an overarching goal that allows all actors to break out of their bounded rationality. If everyone can work harmoniously toward the same outcome (if all feedback loops are serving the same goal), the results can be amazing. The most familiar examples of this harmonization of goals are mobilizations of economies during wartime, or recovery after war or natural disaster.

Harmonization of goals in a system is not always possible, but it’s an option worth looking for. It can be found only by letting go of more narrow goals and considering the long-term welfare of the entire system.

Now there's a section on the tragedy of the commons as an archetype of bounded rationality. Reinforcing feedback loops running downhill. Conceivable in the abstract, even if the tragedy of the commons (in the 19th century, in the real world) never happened. In fact, the author here spells out why the tragedy of the commons never happened, in the bullet points labeled "educate and exhort" and "regulate the commons." (Privatizing the commons is the option hailed by bourgeois economists, of course.)

Regarding the system of privatized commons, refer to footnote 27 in chapter 37 of Capital Vol. III:

[...] the whole spirit of capitalist production, which is directed toward the immediate gain of money, stands in contradiction to agriculture, which has to minister to the entire range of permanent necessities of life required by the chain of successive generations. A striking illustration of this is furnished by the forests, which are only rarely managed in a way more or less corresponding to the interests of society as a whole, i.e., when they are not private property, but subject to the control of the state.

Then there's a discussion about the drift to low performance archetype, wherein a system has a negative reinforcing feedback loop that overpowers a balancing feedback loop. The negative feedback loop takes the form of an actor who perceives the system state to be different or worse than the real state, which leads to a worsened desired state, which leads to less corrective action. Perhaps this can be understood as a balancing feedback loop that has an internal mechanism that turns it more and more into a negative reinforcing feedback loop as time passes. See: "boiled frog syndrome" and "eroding goals." If the system state were to degrade rapidly, the change would be noticed. But since it is gradual, the feedback loop mechanism is not triggered effectively. If the feedback loop mechanism were changed to measure absolute values instead of relative values, or if current values were measured in terms of the best past values (instead of the most recent values or the worst past values), the degradation would be fixed.
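
A sketch of the erosion mechanism (parameters invented): corrective action pulls the state toward the goal, but the goal itself drifts toward recent perceived performance, so both ratchet downward together. Freezing the goal turns the structure back into an ordinary balancing loop.

    # Drift to low performance. With erode=True the desired state follows
    # perceived performance downward; with erode=False the balancing loop
    # holds the state near a floor set by the decay and the gain.
    def run(erode, months=60):
        state, goal = 100.0, 100.0
        for _ in range(months):
            state += 0.1 * (goal - state) - 2.0  # corrective action minus steady decay
            if erode:
                goal += 0.2 * (state - goal)     # goal sinks toward recent performance
        return state, goal

    print(run(erode=False))  # state stabilizes around 80 against the fixed goal
    print(run(erode=True))   # state and goal sink together, without limit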

Another archetype is escalation, which is simply a runaway reinforcing feedback loop. "I'll raise you one." Price cutting. Wars. Advertising campaigns. The process ends with one or both of the competitors breaking down. Unilateral disarmament or negotiation of a new system can end the growth.

Another archetypal reinforcing feedback loop situation is success to the successful, also known as competitive exclusion. Monopoly. Winners receive the means to compete even more effectively in the future.

Another expression of this trap was part of the critique of capitalism by Karl Marx [sic; what the hell is a "critique of capitalism?"]. Two firms competing in the same market will exhibit the same behavior as two species competing in a niche. One will gain a slight advantage, through greater efficiency or smarter investment or better technology or bigger bribes, or whatever. With that advantage, the firm will have more income to invest in productive facilities or newer technologies or advertising or bribes. Its reinforcing feedback loop of capital accumulation will be able to turn faster than that of the other firm, enabling it to produce still more and earn still more.
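
A sketch of the compounding edge (return rates invented): two firms reinvest their returns, one with a marginally higher rate, and the gap in their shares only ever widens.

    # Success to the successful: a one-point advantage in the reinvestment
    # loop ratchets firm A's share of combined capital upward every year.
    a = b = 100.0
    for year in range(1, 51):
        a *= 1.06  # slight edge in efficiency, technology, or bribes
        b *= 1.05
        if year % 10 == 0:
            print(f"year {year}: firm A holds {a / (a + b):.0%} of combined capital")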

Diversification can be a way out of the system, whereby losing parties exit the system and start their own. (Of course, when they're part of a larger ecosystem, they are bound to those same rules that led to their demise.) According to the author, anti-trust laws are effective. Introducing policies that devise rewards for success that do not bias the next round of competition stops the feedback loop.

The next trap is called shifting the burden to the intervenor, or addiction. The intervenor in a system brings a system to a desirable state in the short-term by some means that have long-term negative effects. The process weakens the capability of the original system. This can be caused by lack of foresight. "Addictive policies are insidious, because they are so easy to sell, so simple to fall for."

Are insects threatening the crops? Rather than examine the farming methods, the monocultures, the destruction of natural ecosystem controls that have led to the pest outbreak, just apply pesticides. That will make the bugs go away, and allow more monocultures, more destruction of ecosystems. That will bring back the bugs in greater outbursts, requiring more pesticides in the future.

Then there's rule beating. This trap is where actors in a system technically abide by policy but in ways unanticipated by those who set up the system.

The U.S. Endangered Species Act restricts development wherever an endangered species has its habitat. Some landowners, on discovering that their property harbors an endangered species, purposely hunt or poison it, so the land can be developed.

The way out of the trap is to understand the reality of rule beating and redesign the rules or the system(s) entirely.

Finally there's seeking the wrong goal. This is like when you set a quota on production quantity instead of quality. The way out of this one is to specify incentives that reflect "the real welfare of the system." This one is straight out of an economics textbook.

Throughout this chapter it's striking how unabashedly the preordained moral whims of the author are embedded into the analysis. This seriously reads like an intro-to-microeconomics textbook. Like... The system is "failing?" On whose terms is the system failing? I feel like class struggle can be brought into the picture in the form of a "meta-system" but, at least for this author, it is necessarily abstracted away. In other words, that system which decides what "failure" or "success" means remains unaccounted for. This for obvious reasons, given the author's class.

Chapter 6: Leverage Points—Places to Intervene in a System

Finding the most powerful leverage points in a system is difficult. Ways to intervene in a system, from alleged least important (ineffective or difficult to perform) to most important (effective or easy to perform):

  • Adjusting numbers (sizes of flows)
  • Adjusting buffers (sizes of stabilizing stocks relative to their flows)
  • Adjusting stock-and-flow structures (the overall design of the system)
  • Adjusting feedback loop delays (the lengths of time relative to the rates of system changes)
  • Adjusting balancing feedback loop strengths (relative to the impacts they are trying to correct)
  • Adjusting reinforcing feedback loop strengths (the gains of driving loops)
  • Adjusting information flows (which components do and do not have access to information)
  • Adjusting rules (incentives, punishments, constraints). It just dawned on me how silly it is to anthropomorphize systems with "incentives/punishments". Certainly constraints can be formulated in terms of thresholds, flows, and so on. I'm sure mathematically rigorous expressions of systems theory do this sort of thing.
  • Adjusting self-organization (the system's potential to add, change, or evolve its structure)
  • Adjusting goals (the purpose or function of the system)
  • Adjusting paradigms (the "mind-set" out of which the system arises). So... Intervening in a higher system?
  • Transcending paradigms.

Chapter 7: Living in a World of Systems

A chapter about the "human element" of being a "systems thinker." Moral guidebook for a bourgeois "dialectician." Conclusion: "Systems theory" is a subset of dialectics for bourgeois interests.
