There are many ways to look at “systems,” and even within the arena of “systems thinking” there seem to be many approaches. One of the names that comes up often is Donella Meadows, with her connection to MIT, Jay Forrester, and System Dynamics. I finally took the time to read Meadows’ Thinking in Systems: A Primer (published posthumously, but drawn from many years of her work).
The book acknowledges that there are multiple approaches to “systems,” and in her discussion throughout, there are clearly a lot of similarities across the approaches. Most of my thinking on “systems” comes from the work I do with Theory of Constraints as a framework - the idea that there are very few places in a system where one can intervene and have an effect on the whole system. My read of this book aligns with that concept and adds the idea that the system can and will change, which means the leverage point may change. But the point is that we don’t have to “touch” everything in a system to change its output significantly - we just have to know where to touch.
In that light, I particularly liked the basic definitions and descriptions of a system. “A system is a set of things - people, cells, molecules, or whatever - interconnected in such a way that they produce their own pattern of behavior over time.” And a few pages later she adds that a system has a function or purpose, along with the elements and interconnections. I think she also gives us the familiar idea that this emergent behavior comes about because of the way the elements of the system are connected - you don’t see it in the individual elements. This means that while understanding the individual elements (which may be “systems” on their own) can generate a lot of useful information, it does not tell us how the whole system will operate. The system generates its own pattern of behavior - “the system causes its own behavior!” And if we want to “fix” systems - usually meaning we want to remove some undesirable behavior or effect of the system - we need to understand how the system works. Otherwise, we are just playing Whac-a-Mole by “fixing” the elements without a lasting effect on the whole system.
In the system dynamics world, there is a language for describing systems: stocks (elements) and flows (interconnections). These are best understood with the bathtub example - there is a stock of water in the tub, plus an inflow and an outflow. Then we can throw control loops into the mix. Stocks can be anything that can accumulate - and in some of the examples, even ephemeral “stuff” is treated as a stock, as it is sometimes a better representation. (As an engineer, I would model the “heat” of a house for a temperature control discussion, but it is easier to picture - and measure - temperature.) There isn’t much purpose behind a bathtub, but it gives some mental framing for the various diagrams and discussions in the first part of the book. The other thing that comes up quickly is the different types of loops that might exist - how does the state of the system affect the system: positive feedback or negative feedback? Systems, of course, are composed of multiple stocks, flows, and feedback loops that together generate their interesting behaviors.
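Since the bathtub is easiest to grasp in concrete terms, here is a minimal sketch of it as a stock-and-flow simulation - my own illustration, not from the book, and the parameter values are made up. The stock is the water level; a balancing feedback loop opens the faucet in proportion to the gap between a target level and the actual level, while the drain removes a fixed fraction of the stock each step.

```python
# Hypothetical bathtub model: one stock (water level), an inflow driven by a
# balancing feedback loop, and an outflow proportional to the stock.

def simulate_bathtub(target=50.0, gain=0.3, drain_rate=0.1, steps=40):
    level = 0.0          # the stock: water currently in the tub
    history = [level]
    for _ in range(steps):
        inflow = gain * (target - level)   # balancing loop: faucet reacts to the gap
        outflow = drain_rate * level       # drain removes a fraction of the stock
        level += inflow - outflow          # the stock accumulates the net flow
        history.append(level)
    return history

levels = simulate_bathtub()
```

Run it and the level climbs and then settles - not at the target, but at the point where inflow exactly balances outflow (37.5 with these numbers). Even this tiny model shows behavior that belongs to the loop structure, not to any single element.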
Interestingly, quite complex behaviors develop from fairly small models - only a few feedback loops and one or two stocks. And for an engineer, flipping one loop or another will create VERY different resulting system behaviors. This then leads to discussions of “typical” systems. Reading through this section, I was reminded that I’d seen these before in other materials. They are commonly seen behaviors and the underlying structures that produce them. I like the idea that these are common patterns - “systems with similar feedback structures produce similar dynamic behaviors, even if the outward appearance is dissimilar.” I’ve seen lists of archetypes before, but here are the ones in Meadows’ book. Notably, she talks about these archetypes as “system traps” and describes some ways out of the traps.
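To see how flipping a loop changes everything, here is a throwaway sketch (mine, not the book’s): one stock and one feedback loop where the flow depends on the stock itself. Changing only the sign of the gain turns a balancing loop into a reinforcing one.

```python
def run_loop(stock, gain, steps=20):
    # The flow is proportional to the stock itself, so the sign of `gain`
    # decides the loop type:
    #   gain < 0 -> balancing (negative) feedback: decay toward zero
    #   gain > 0 -> reinforcing (positive) feedback: exponential growth
    history = [stock]
    for _ in range(steps):
        stock += gain * stock
        history.append(stock)
    return history

balancing = run_loop(100.0, gain=-0.2)    # settles toward zero
reinforcing = run_loop(100.0, gain=0.2)   # runs away exponentially
```

Same structure, one sign flipped, and the resulting behaviors could hardly be more different - which is exactly the “similar feedback structures produce similar dynamic behaviors” observation run in reverse.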
Policy resistance. Any change we make seems to have a short-term effect on the system, but in the long term the system returns to the same behaviors we were trying to change. My thought is that this is a classic Whac-a-Mole situation. Meadows’ suggestion is that this may be a “let it go” moment - adding more control isn’t going to help; instead, find ways to understand and meet the goals of the actors in the system.
Tragedy of the commons. This is a classic: over-use of a common resource ends up destroying it. I don’t work in this world too often, but the suggestions are interesting, from the counter-intuitive splitting of the resource to improving the feedback loop to discourage damaging behavior.
Drift to low performance. This isn’t as familiar to me, but the idea is that we keep changing performance expectations based on what happened before - and that we see successes as outliers, pushing us to lower and lower targets. Of course, a counter to this is to find ways to use the successes as the reinforcing mechanisms. I wonder how much of this one has to do with people wanting to protect themselves from damage if they are held to their commitments.
Escalation. This sounds like the opposite of drift to low performance, but it is described in the context of two entities, whereas drift is about one entity. The classic examples are a price war or an arms race. As with drift, the best way out seems to be not to engage. Difficult to do when you are in the middle of it!
Success to the successful. The winner takes all - and then goes on winning. This is why countries have rules about monopolies. The curious thing is that in many situations the competition is a good thing for the larger system, even if the competitors are driving to “win.”
Addiction (shifting the burden to the intervenor). Another classic - if one is good, then one hundred must be better. This is a trap where those inside don’t see a way out, and one best avoided by not getting into it in the first place. But there are policies and practices that end up reinforcing it - that’s why it is such a common trap.
Rule beating. This felt like a repeat of some of the traps above. Or maybe an example of what happens in sub-systems when operating in these larger archetypes. We have all seen these kinds of situations: I have to do X to get around Y, which causes the system to respond with more Y, so I have to do more X. How many times have I heard clients talk about their favorite workarounds and then their frustration when the rules change under them and their workarounds no longer work - until they find new ones.
Seeking the wrong goal. This is a familiar trap - “show me how you measure me, I will show you how I will behave.” And the fix is to align the measures with the goals. Not always obvious, but much more valuable. “Be careful not to confuse effort with result or you will end up with a system that is producing effort, not result.”
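The “drift to low performance” trap above lends itself to a small model. This is my own sketch with invented parameters, not Meadows’ formulation: performance naturally decays unless effort props it up, effort is driven by the gap between goal and performance, and - here is the trap - the goal itself erodes toward whatever performance recently was.

```python
# Toy "eroding goals" model (hypothetical parameters). With erosion=0 the
# goal is held firm and performance stabilizes; with erosion>0 the goal
# chases actual performance and the whole system slides downward.

def simulate_drift(erosion, steps=200, pressure=0.1, decay=0.05):
    perf, goal = 100.0, 100.0
    for _ in range(steps):
        goal += erosion * (perf - goal)                  # goal erodes toward actuals
        perf += pressure * (goal - perf) - decay * perf  # effort minus natural decay
    return perf

firm = simulate_drift(erosion=0.0)     # goal held firm: performance holds up
eroded = simulate_drift(erosion=0.3)   # goal erodes: performance keeps sliding
```

Holding the goal firm is the counter mentioned above: use the successes as the reinforcing mechanism instead of letting yesterday’s performance set tomorrow’s target.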
There are great thoughts throughout the book on how to incorporate systems thinking and the value behind it. Even at the outset, Meadows says something to the effect that it is only useful to do this analysis if it will help solve real problems.
One of the challenges I’ve had in exploring the System Dynamics branch of systems thinking has been a heavy bent toward BIG systems - often full-world systems around ecology / environment or big political / social systems. And there are often moral choices described and embedded in the conversations about these systems. It gets in the way of my enjoyment and understanding of the material. From one perspective, I understand that these systems are useful as demonstrations of the principles and concepts, because they are familiar to most people. On the other hand, I often feel like the frame of reference of the writer is breathing down my neck: isn’t it obvious that we need to do X to fix the environment?!? Even if it is the right thing to do, it doesn’t help me understand thinking in systems any better.