Thinking in Systems

A Primer

by Donella H. Meadows

  • On Amazon
  • ISBN: 978-1603580557
  • My Rating: 7/10

Thinking in Systems is an introduction to systems thinking.

I found Thinking in Systems an interesting and informative book with many examples, albeit too basic at times and with some unnecessary repetition. The third and last part of the book felt too vague to me: it touches on many things without going in depth.

My notes

Introduction: The System Lens

A system is a set of things – people, cells, molecules, or whatever – interconnected in such a way that they produce their own pattern of behavior over time. The system may be buffeted, constricted, triggered, or driven by outside forces. But the system's response to these forces is characteristic of itself, and that response is seldom simple in the real world.

The system, to a large extent, causes its own behavior! An outside event may unleash that behavior, but the same outside event applied to a different system is likely to produce a different result.

Modern systems theory, bound up with computers and equations, hides the fact that it traffics in truths known at some level by everyone. It is often possible, therefore, to make a direct translation from systems jargon to traditional wisdom.

The behavior of a system cannot be known just by knowing the elements of which the system is made.

System Structure and Behavior

The Basics

I have yet to see any problem, however complicated, which, when looked at in the right way, did not become still more complicated.

Poul Anderson

A system is an interconnected set of elements that is coherently organized in a way that achieves something. If you look at that definition closely for a minute, you can see that a system must consist of three kinds of things: elements, interconnections, and a function or purpose.

A system is more than the sum of its parts. It may exhibit adaptive, dynamic, goal-seeking, self-preserving, and sometimes evolutionary behavior.

Many of the interconnections in systems operate through the flow of information. Information holds systems together and plays a great role in determining how they operate.

A system's function or purpose is not necessarily spoken, written, or expressed explicitly, except through the operation of the system. The best way to deduce the system's purpose is to watch for a while to see how the system behaves.

Purposes are deduced from behavior, not from rhetoric or stated goals.

An important function of almost every system is to ensure its own perpetuation.

[...] one of the most frustrating aspects of systems is that the purposes of subunits may add up to an overall behavior that no one wants.

[...] the least obvious part of the system, its function or purpose, is often the most crucial determinant of the system's behavior. Interconnections are also critically important. Changing relationships usually changes system behavior. The elements, the parts of systems we are most likely to notice, are often (not always) least important in defining the unique characteristics of the system – unless changing an element also results in changing relationships or purpose.

A stock is the foundation of any system. Stocks are the elements of the system that you can see, feel, count, or measure at any given time.

Stocks change over time through the actions of a flow. [...] A stock, then, is the present memory of the history of changing flows within the system.

If you understand the dynamics of stocks and flows – their behavior over time – you understand a good deal about the behavior of complex systems.

All models, whether mental models or mathematical models, are simplifications of the real world.

A stock can be increased by decreasing its outflow rate as well as by increasing its inflow rate.

A stock takes time to change, because flows take time to flow.

Stocks generally change slowly, even when the flows into or out of them change suddenly. Therefore, stocks act as delays or buffers or shock absorbers in systems.

Stocks allow inflows and outflows to be decoupled and to be independent and temporarily out of balance with each other.

People monitor stocks constantly and make decisions and take actions designed to raise or lower stocks or to keep them within acceptable ranges. Those decisions add up to the ebbs and flows, successes and problems, of all sorts of systems. Systems thinkers see the world as a collection of stocks along with the mechanisms for regulating the levels in the stocks by manipulating flows. That means systems thinkers see the world as a collection of "feedback processes".

A feedback loop is formed when changes in a stock affect the flows into or out of that same stock.

A feedback loop is a closed chain of causal connections from a stock, through a set of decisions or rules or physical laws or actions that are dependent on the level of the stock, and back again through a flow to change the stock.

Balancing feedback loops are goal-seeking or stability-seeking. Each tries to keep a stock at a given value or within a range of values. A balancing feedback loop opposes whatever direction of change is imposed on the system. If you push a stock too far up, a balancing loop will try to pull it back down. If you shove it too far down, a balancing loop will try to bring it back up.

Balancing feedback loops are equilibrating or goal-seeking structures in systems and are both sources of stability and sources of resistance to change.

Reinforcing feedback loops are self-enhancing, leading to exponential growth or to runaway collapses over time. They are found whenever a stock has the capacity to reinforce or reproduce itself.
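To make the two loop types concrete for myself, here is a small Python sketch (mine, not from the book; the savings/coffee framing and all numbers are just illustrative): a stock driven by a reinforcing loop grows exponentially, while a stock driven by a balancing loop homes in on a goal.

```python
# Minimal sketch (not from the book): two stocks updated step by step,
# one driven by a reinforcing loop, one by a balancing loop.

def reinforcing_loop(stock, rate, steps):
    """Stock grows in proportion to itself (e.g. interest on savings)."""
    history = [stock]
    for _ in range(steps):
        inflow = rate * stock           # the bigger the stock, the bigger the inflow
        stock += inflow
        history.append(round(stock, 1))
    return history

def balancing_loop(stock, goal, adjustment_rate, steps):
    """Stock is pulled toward a goal (e.g. coffee cooling to room temperature)."""
    history = [stock]
    for _ in range(steps):
        gap = goal - stock              # discrepancy drives the corrective flow
        stock += adjustment_rate * gap  # flow shrinks as the gap closes
        history.append(round(stock, 1))
    return history

print(reinforcing_loop(100, 0.07, 10))  # exponential growth
print(balancing_loop(90, 20, 0.3, 10))  # decays toward the goal of 20
```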

Because we bump into reinforcing loops so often, it is handy to know this shortcut: The time it takes for an exponentially growing stock to double in size, the "doubling time", equals approximately 70 divided by the growth rate (expressed as a percentage).
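A quick check of that shortcut against the exact doubling-time formula for compound growth, ln(2) / ln(1 + r) (the snippet is mine, not from the book):

```python
import math

def doubling_time_rule_of_70(rate_percent):
    """The book's rule-of-70 shortcut."""
    return 70 / rate_percent

def doubling_time_exact(rate_percent):
    """Exact doubling time for compound growth: ln(2) / ln(1 + r)."""
    return math.log(2) / math.log(1 + rate_percent / 100)

for rate in (1, 2, 5, 7, 10):
    print(f"{rate}% growth: rule of 70 -> {doubling_time_rule_of_70(rate):.1f} periods, "
          f"exact -> {doubling_time_exact(rate):.1f} periods")
```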

A Brief Visit to the Systems Zoo

The information delivered by a feedback loop – even nonphysical feedback – can only affect future behavior; it can't deliver a signal fast enough to correct behavior that drove the current feedback. Even nonphysical information takes time to feed back into the system.

A stock-maintaining balancing feedback loop must have its goal set appropriately to compensate for draining or inflowing processes that affect that stock. Otherwise, the feedback process will fall short of or exceed the target for the stock.

Dominance is an important concept in systems thinking. When one loop dominates another, it has a stronger impact on behavior. Because systems often have several competing feedback loops operating simultaneously, those loops that dominate the system will determine the behavior.

Complex behaviors of systems often arise as the relative strengths of feedback loops shift, causing first one loop and then another to dominate behavior.

Whenever you are confronted with a scenario (and you are, every time you hear about an economic prediction, a corporate budget, a weather forecast, future climate change, a stockbroker saying what is going to happen to a particular holding), there are questions you need to ask that will help you decide how good a representation of reality the underlying model is.

  • Are the driving factors likely to unfold this way?
  • If they did, would the system react this way?
  • What is driving the driving factors?

Model utility depends not on whether its driving scenarios are realistic (since no one can know that for sure), but on whether it responds with a realistic pattern of behavior.

One of the central insights of systems theory [...] is that systems with similar feedback structures produce similar dynamic behaviors, even if the outward appearance of these systems is completely dissimilar.

A delay in a balancing feedback loop makes a system likely to oscillate.

Delays are pervasive in systems and they are strong determinants of behavior. Changing the length of a delay may (or may not, depending on the type of delay and the relative lengths of other delays) make a large change in the behavior of a system.
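A tiny sketch of that effect (mine, not from the book; parameters are arbitrary): the same kind of balancing loop as above, but the corrective flow reacts to a stale reading of the stock. With no delay it settles smoothly; with a delay it overshoots and oscillates.

```python
def delayed_balancing_loop(stock, goal, adjustment_rate, delay, steps):
    """Balancing loop whose corrective flow reacts to an old reading of the stock."""
    history = [stock] * (delay + 1)      # memory of past stock levels
    for _ in range(steps):
        perceived = history[-1 - delay]  # the actor sees a value from `delay` steps ago
        stock += adjustment_rate * (goal - perceived)
        history.append(stock)
    return [round(x, 1) for x in history[delay:]]

print(delayed_balancing_loop(stock=90, goal=20, adjustment_rate=0.5, delay=0, steps=12))  # smooth approach
print(delayed_balancing_loop(stock=90, goal=20, adjustment_rate=0.5, delay=3, steps=12))  # overshoots and oscillates
```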

[...] any physical, growing system is going to run into some kind of constraint, sooner or later. That constraint will take the form of a balancing loop that in some way shifts the dominance of the reinforcing loop driving the growth behavior, either by strengthening the outflow or by weakening the inflow. Growth in a constrained environment is very common, so common that systems thinkers call it the "limits to growth" archetype.

Nonrenewable resources are stock-limited. The entire stock is available at once, and can be extracted at any rate (limited mainly by extraction capital). But since the stock is not renewed, the faster the extraction rate, the shorter the lifetime of the resource.

Renewable resources are flow-limited. They can support extraction or harvest indefinitely, but only at a finite flow rate equal to their regeneration rate. If they are extracted faster than they regenerate, they may eventually be driven below a critical threshold and become, for all practical purposes, nonrenewable.
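A sketch of the renewable-resource case (my own toy model, not the book's: logistic regrowth minus a constant harvest flow, with made-up numbers). Maximum regeneration here is regen_rate * capacity / 4 = 25 per step, so a harvest of 20 settles at a sustainable level, while a harvest of 40 drives the stock toward collapse.

```python
def renewable_resource(stock, capacity, regen_rate, harvest, steps):
    """Logistic regrowth minus a constant harvest flow."""
    history = [stock]
    for _ in range(steps):
        regeneration = regen_rate * stock * (1 - stock / capacity)
        stock = stock + regeneration - min(harvest, stock)  # can't harvest more than exists
        history.append(round(stock, 1))
    return history[::10]                                    # sample every 10th step

print(renewable_resource(stock=800, capacity=1000, regen_rate=0.1, harvest=20, steps=60))  # levels off near ~724
print(renewable_resource(stock=800, capacity=1000, regen_rate=0.1, harvest=40, steps=60))  # collapses toward zero
```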

Systems and Us

Why Systems Work So Well

Resilience is a measure of a system's ability to survive and persist within a variable environment. The opposite of resilience is brittleness or rigidity.

Resilience arises from a rich structure of many feedback loops that can work in different ways to restore a system even after a large perturbation.

Resilience is not the same thing as being static or constant over time. Resilient systems can be very dynamic. Short-term oscillations, or periodic outbreaks, or long cycles of succession, climax, and collapse may in fact be the normal condition, which resilience acts to restore!

Systems need to be managed not only for productivity or stability, they also need to be managed for resilience – the ability to recover from perturbation, the ability to restore or repair themselves.

Systems often have the property of self-organization – the ability to structure themselves, to create new structure, to learn, diversify, and complexify. Even complex forms of self-organization may arise from relatively simple organizing rules – or may not.

In the process of creating new structures and increasing complexity, one thing that a self-organizing system often generates is hierarchy.

In hierarchical systems relationships within each subsystem are denser and stronger than relationships between subsystems. Everything is still connected to everything else, but not equally strongly.

The original purpose of a hierarchy is always to help its originating subsystems do their jobs better. This is something, unfortunately, that both the higher and the lower levels of a greatly articulated hierarchy easily can forget. Therefore, many systems are not meeting our goals because of malfunctioning hierarchies.

When a subsystem's goals dominate at the expense of the total system's goals, the resulting behavior is called suboptimization.

To be a highly functional system, hierarchy must balance the welfare, freedoms, and responsibilities of the subsystems and total system – there must be enough central control to achieve coordination toward the large-system goal, and enough autonomy to keep all subsystems flourishing, functioning, and self-organizing.

Why Systems Surprise Us

Everything we think we know about the world is a model. Our models do have a strong congruence with the world. Our models fall far short of representing the real world fully.

When a systems thinker encounters a problem, the first thing he or she does is look for data, time graphs, the history of the system. That's because long-term behavior provides clues to the underlying system structure. And structure is the key to understanding not just what is happening, but why.

System structure is the source of system behavior. System behavior reveals itself as a series of events over time.

[...] the world often surprises our linear-thinking minds. If we've learned that a small push produces a small response, we think that twice as big a push will produce twice as big a response. But in a nonlinear system, twice the push could produce one-sixth the response, or the response squared, or no response at all.

Many relationships in systems are nonlinear. Their relative strengths shift in disproportionate amounts as the stocks in the system shift. Nonlinearities in feedback systems produce shifting dominance of loops and many complexities in system behavior.

If we're to understand anything, we have to simplify, which means we have to make boundaries.

There are no separate systems. The world is a continuum. Where to draw a boundary around a system depends on the purpose of the discussion – the questions we want to ask.

It's a great art to remember that boundaries are of our own making, and that they can and should be reconsidered for each new discussion, problem, or purpose. It's a challenge to stay creative enough to drop the boundaries that worked for the last problem and to find the most appropriate set of boundaries for the next question. It's also a necessity, if problems are to be solved well.

At any given time, the input that is most important to a system is the one that is most limiting.

Insight comes not only from recognizing which factor is limiting, but from seeing that growth itself depletes or enhances limits and therefore changes what is limiting. [...] Whenever one factor ceases to be limiting, growth occurs, and the growth itself changes the relative scarcity of factors until another becomes limiting. To shift attention from the abundant factors to the next potential limiting factor is to gain real understanding of, and control over, the growth process.

Any physical entity with multiple inputs and outputs is surrounded by layers of limits.

There always will be limits to growth. They can be self-imposed. If they aren't, they will be system-imposed.

Delays are ubiquitous in systems. Every stock is a delay. Most flows have delays – shipping delays, perception delays, processing delays, maturation delays.

Delays determine how fast systems can react, how accurately they hit their targets, and how timely is the information passed around a system. Overshoots, oscillations, and collapses are always caused by delays.

When there are long delays in feedback loops, some sort of foresight is essential. To act only when a problem becomes obvious is to miss an important opportunity to solve the problem.

Bounded rationality means that people make quite reasonable decisions based on the information they have. But they don't have perfect information, especially about more distant parts of the system.

We do our best to further our own nearby interests in a rational way, but we can take into account only what we know. We don't know what others are planning to do, until they do it. We rarely see the full range of possibilities before us. We often don't foresee (or choose to ignore) the impacts of our actions on the whole system. So instead of finding a long-term optimum, we discover within our limited purview a choice we can live with for now, and we stick to it, changing our behavior only when forced to.

The bounded rationality of each actor in a system – determined by the information, incentives, disincentives, goals, stresses, and constraints impinging on that actor – may or may not lead to decisions that further the welfare of the system as a whole. If they do not, putting new actors into the same system will not improve the system's performance. What makes a difference is redesigning the system to improve the information, incentives, disincentives, goals, stresses, and constraints that have an effect on specific actors.

System Traps... and Opportunities

Despite efforts to invent technological or policy "fixes", the system seems to be intractably stuck, producing the same behavior every year. This is the systemic trap of "fixes that fail" or "policy resistance".

Policy resistance comes from the bounded rationalities of the actors in a system, each with his or her (or "its" in the case of an institution) own goals. Each actor monitors the state of the system with regard to some important variable [...] and compares that state with his, her, or its goal. If there is a discrepancy, each actor does something to correct the situation. Usually the greater the discrepancy between the goal and the actual situation, the more emphatic the action will be. Such resistance to change arises when goals of subsystems are different from and inconsistent with each other.

In a policy-resistant system with actors pulling in different directions, everyone has to put great effort into keeping the system where no one wants it to be. If any single actor lets up, the others will drag the system closer to their goals, and farther from the goal of the one who let go. In fact, this system structure can operate in a ratchet mode: Intensification of anyone's effort leads to intensification of everyone else's. It's hard to reduce the intensification. It takes a lot of mutual trust to say, OK, why don't we all just back off for a while?

One way to deal with policy resistance is to try to overpower it. If you wield enough power and can keep wielding it, the power approach can work, at the cost of monumental resentment and the possibility of explosive consequences if the power is ever let up. [...] The alternative to overpowering policy resistance is so counterintuitive that it's usually unthinkable. Let go. Give up ineffective policies. Let the resources and energy spent on both enforcing and resisting be used for more constructive purposes. You won't get your way with the system, but it won't go as far in a bad direction as you think, because much of the action you were trying to correct was in response to your own action.

The most effective way of dealing with policy resistance is to find a way of aligning the various goals of the subsystems, usually by providing an overarching goal that allows all actors to break out of their bounded rationality.

The trap called the tragedy of the commons comes about when there is escalation, or just simple growth, in a commonly shared, erodable environment.

In any commons system there is, first of all, a resource that is commonly shared [...]. For the system to be subject to tragedy, the resource must be not only limited, but erodable when overused. That is, beyond some threshold, the less resource there is, the less it is able to regenerate itself, or the more likely it is to be destroyed.

A commons system also needs users of the resource [...], which have good reason to increase, and which increase at a rate that is not influenced by the condition of the commons.

The tragedy of the commons arises from missing (or too long delayed) feedback from the resource to the growth of the users of that resource. The more users there are, the more resource is used. The more resource is used, the less there is per user. If the users follow the bounded rationality of the commons ("There's no reason for me to be the one to limit my cows!"), there is no reason for any of them to decrease their use. Eventually, then, the harvest rate will exceed the capacity of the resource to bear the harvest. Because there is no feedback to the user, overharvesting will continue. The resource will decline. Finally, the erosion loop will kick in, the resource will be destroyed, and all the users will be ruined.
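A toy version of that loop (mine, not from the book; every name and number is made up): users grow at a fixed rate with no feedback from the state of the commons, and once their combined harvest passes the maximum regeneration rate the resource erodes away.

```python
def commons(resource, capacity, regen_rate, users, user_growth, use_per_user, steps):
    """Shared resource with logistic regrowth; user growth ignores the resource's state."""
    for step in range(steps + 1):
        if step % 5 == 0:
            print(f"step {step:2d}: users={users:6.1f}  resource={resource:7.1f}")
        regeneration = regen_rate * resource * (1 - resource / capacity)
        harvest = min(users * use_per_user, resource)  # each user takes a fixed amount
        resource += regeneration - harvest
        users *= 1 + user_growth                       # growth blind to the commons' condition

commons(resource=1000, capacity=1000, regen_rate=0.2,
        users=10, user_growth=0.05, use_per_user=2, steps=40)
```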

The structure of a commons system makes selfish behavior much more convenient and profitable than behavior that is responsible to the whole community and to the future.

There are three ways to avoid the tragedy of the commons.

  • Educate and exhort. Help people to see the consequences of unrestrained use of the commons. Appeal to their morality. Persuade them to be temperate. [...]
  • Privatize the commons. Divide it up, so that each person reaps the consequences of his or her own actions. If some people lack the self-control to stay below the carrying capacity of their own private resource, those people will harm only themselves and not others.
  • Regulate the commons. [...] Regulation can take many forms, from outright bans on certain behaviors to quotas, permits, taxes, incentives. To be effective, regulation must be enforced by policing and penalties.

Some systems not only resist policy and stay in a normal bad state, they keep getting worse. One name for this archetype is "drift to low performance".

The actor in this feedback loop [...] has, as usual, a performance goal or desired system state that is compared to the actual state. If there is a discrepancy, action is taken. So far, that is an ordinary balancing feedback loop that should keep performance at the desired level. But in this system, there is a distinction between the actual system state and the perceived state. The actor tends to believe bad news more than good news. As actual performance varies, the best results are dismissed as aberrations, the worst results stay in the memory. The actor thinks things are worse than they really are. And to complete this tragic archetype, the desired state of the system is influenced by the perceived state.

Drift to low performance is a gradual process. If the system state plunged quickly, there would be an agitated corrective process. But if it drifts down slowly enough to erase the memory of (or belief in) how much better things used to be, everyone is lulled into lower and lower expectations, lower effort, lower performance.

There are two antidotes to eroding goals. One is to keep standards absolute, regardless of performance. Another is to make goals sensitive to the best performances of the past, instead of the worst. If perceived performance has an upbeat bias instead of a downbeat one, if one takes the best results as a standard, and the worst results only as a temporary setback, then the same system structure can pull the system up to better and better performance.
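I tried the archetype as a small simulation (mine, not from the book; the weights and noise level are arbitrary): performance drifts toward the goal, the goal drifts toward perceived performance, and perception keeps either the worst or the best of the last few results.

```python
import random

def drift_to_low_performance(steps=100, pessimistic=True, seed=1):
    """Eroding-goals archetype: the goal itself adjusts to perceived performance."""
    random.seed(seed)
    goal = performance = 100.0
    recent = [performance]
    for _ in range(steps):
        performance += 0.2 * (goal - performance) + random.gauss(0, 5)  # effort plus noise
        recent = (recent + [performance])[-5:]
        perceived = min(recent) if pessimistic else max(recent)  # which results "count"
        goal += 0.1 * (perceived - goal)                         # the goal erodes (or rises)
    return round(performance, 1), round(goal, 1)

print(drift_to_low_performance(pessimistic=True))   # standards and results ratchet down
print(drift_to_low_performance(pessimistic=False))  # anchoring on best results pulls both up
```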

Escalation comes from a reinforcing loop set up by competing actors trying to get ahead of each other. The goal of one part of the system or one actor is not absolute, [...] but is related to the state of another part of the system, another actor.

The escalation is exponential and can lead to extremes surprisingly quickly. If nothing is done, the spiral will be stopped by someone's collapse – because exponential growth cannot go on forever.

If caught in an escalating system, one can refuse to compete (unilaterally disarm), thereby interrupting the reinforcing loop. Or one can negotiate a new system with balancing loops to control the escalation.

Using accumulated wealth, privilege, special access, or inside information to create more wealth, privilege, access or information are examples of the archetype called "success to the successful". This system trap is found whenever the winners of a competition receive, as part of the reward, the means to compete even more effectively in the future. That's a reinforcing feedback loop, which rapidly divides a system into winners who go on winning, and losers who go on losing.

Most people understand the addictive properties of alcohol, nicotine, caffeine, sugar, and heroin. Not everyone recognizes that addiction can appear in larger systems and in other guises – such as the dependence of industry on government subsidy, the reliance of farmers on fertilizers, the addiction of Western economies to cheap oil or weapons manufacturers to government contracts. This trap is known by many names: addiction, dependence, shifting the burden to the intervenor.

Shifting the burden, dependence, and addiction arise when a solution to a systemic problem reduces (or disguises) the symptoms, but does nothing to solve the underlying problem.

Wherever there are rules, there is likely to be rule beating. Rule beating means evasive action to get around the intent of a system's rules – abiding by the letter, but not the spirit, of the law. Rule beating becomes a problem only when it leads a system into large distortions, unnatural behaviors that would make no sense at all in the absence of the rules. If it gets out of hand, rule beating can cause systems to produce very damaging behavior indeed.

Rule beating is usually a response of the lower levels in a hierarchy to overrigid, deleterious, unworkable, or ill-defined rules from above. There are two generic responses to rule beating. One is to try to stamp out the self-organizing response by strengthening the rules or their enforcement – usually giving rise to still greater system distortion. That's the way further into the trap. The way out of the trap, the opportunity, is to understand rule beating as useful feedback, and to revise, improve, rescind, or better explain the rules.

If the goal is defined badly, if it doesn't measure what it's supposed to measure, if it doesn't reflect the real welfare of the system, then the system can't possibly produce a desirable result. Systems, like the three wishes in the traditional fairy tale, have a terrible tendency to produce exactly and only what you ask them to produce. Be careful what you ask them to produce.

You have the problem of wrong goals when you find something stupid happening "because it's the rule". You have the problem of rule beating when you find something stupid happening because it's the way around the rule. Both of these system perversions can be going on at the same time with regard to the same rule.

Creating Change – in Systems and in Our Philosophy

Leverage Points – Places to Intervene in a System

You can often stabilize a system by increasing the capacity of a buffer. But if a buffer is too big, the system gets inflexible. It reacts too slowly.

Physical structure is crucial in a system, but is rarely a leverage point, because changing it is rarely quick or simple. The leverage point is in proper design in the first place. After the structure is built, the leverage is in understanding its limitations and bottlenecks, using it with maximum efficiency, and refraining from fluctuations or expansions that strain its capacity.

Reducing the gain around a reinforcing loop – slowing the growth – is usually a more powerful leverage point in systems than strengthening balancing loops, and far preferable to letting the reinforcing loop run.

Missing information flows is one of the most common causes of system malfunction. Adding or restoring information can be a powerful intervention, usually much easier and cheaper than rebuilding physical infrastructure.

There is a systematic tendency on the part of human beings to avoid accountability for their own decisions. That's why there are so many missing feedback loops – and why this kind of leverage point is so often popular with the masses, unpopular with the powers that be, and effective, if you can get the powers that be to permit it to happen (or go around them and make it happen anyway).

If you want to understand the deepest malfunctions of systems, pay attention to the rules and to who has power over them.

Living in a World of Systems

Self-organizing, nonlinear, feedback systems are inherently unpredictable. They are not controllable. They are understandable only in the most general way. The goal of foreseeing the future exactly and preparing for it perfectly is unrealizable. The idea of making a complex system do just what you want it to do can be achieved only temporarily, at best.

Systems can't be controlled, but they can be designed and redesigned.

Before you disturb the system in any way, watch how it behaves.

Remember, always, that everything you know, and everything everyone knows, is only a model. Get your model out there where it can be viewed. Invite others to challenge your assumptions and add their own. Instead of becoming a champion for one possible explanation or hypothesis or model, collect as many as possible. Consider all of them to be plausible until you find some evidence that causes you to rule one out. That way you will be emotionally able to see evidence that rules out an assumption you might otherwise have entangled with your own identity.

Information is power. Anyone interested in power grasps that idea very quickly. The media, the public relations people, the politicians, and advertisers who regulate much of the public flow of information have far more power than most people realize. They filter and channel information. Often they do so for short-term, self-interested purposes.

Our information streams are composed primarily of language. Our mental models are mostly verbal. Honoring information means above all avoiding language pollution – making the cleanest possible use we can of language.

A society that talks incessantly about "productivity" but that hardly understands, much less uses, the word "resilience" is going to become productive and not resilient. A society that doesn't understand or use the term "carrying capacity" will exceed its carrying capacity. A society that talks about "creating jobs" as if that's something only companies can do will not inspire the great majority of its people to create jobs, for themselves or anyone else.

Pretending that something doesn't exist if it's hard to quantify leads to faulty models.

"Intrinsic responsibility" means that the system is designed to send feedback about the consequences of decision making directly and quickly and compellingly to the decision makers. Because the pilot of a plane rides in the front of the plane, that pilot is intrinsically responsible. He or she will experience directly the consequences of his or her decisions.