
Wednesday, June 13, 2007

Causal Loop Diagrams, stories, and macrobes




One standard tool of Systems Dynamics is the Causal Loop Diagram. This tool is explained at great length in MIT Professor John Sterman's text "Business Dynamics", but a short explanation is given by Daniel Kim in "Guidelines for Drawing Causal Loop Diagrams."
(John Sterman had a paper in the March 2006 issue of AJPH on "Learning From Evidence in a Complex World", so he's finally been given "judicial notice" by Public Health. Always a good start.)

Kim begins:

The old adage "if the only tool you have is a hammer, everything begins to look like a nail" can also apply to language. If our language is linear and static, we will tend to view and interact with our world as if it were linear and static. Taking a complex, dynamic, and circular world and linearizing it into a set of snapshots may make things seem simpler, but we may totally misread the very reality we were seeking to understand. ...

Articulating Reality
Causal loop diagrams provide a language for articulating our understanding of the dynamic, interconnected nature of our world. We can think of them as sentences which are constructed by linking together key variables and indicating the causal relationships between them. By stringing together several loops, we can create a coherent story about a particular problem or issue. [emphasis added]
I haven't been able to get away for a few weeks for intensive training in Vensim or Causal Loop Diagrams, but the professional literature certainly describes them as a strong basis for bringing many different interest groups together and reaching a better common understanding -- even before the simulator is turned on.

Still, it appears to me, a relative newbie, that Causal Loop Diagrams still suffer from the notion that feedback comes in only two flavors -- "positive" and "negative" -- not the full multidimensional spectrum I described in recent posts for "self-aware, goal-seeking, feedback control loops." Thus, on web sites such as Pegasus Communications, we see the classic "two" kinds of loops: those labeled with an "R" for "REINFORCING" and those labeled with a "B" for "BALANCING" (i.e., "negative" feedback reducing the difference from some fixed goal state). See also Mindtools' description of CLDs, with somewhat clearer diagrams.
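Since the "R" and "B" labels will come up repeatedly, here is a minimal numeric sketch of the two textbook loop behaviors. The functions and constants are my own illustration, not taken from Sterman, Kim, or any CLD tool:

```python
def reinforcing(x0, gain, steps):
    """R loop: each pass around the loop amplifies the stock."""
    x = x0
    for _ in range(steps):
        x += gain * x           # feedback adds in proportion to x itself
    return x

def balancing(x0, goal, gain, steps):
    """B loop: each pass corrects a fraction of the gap to a fixed goal."""
    x = x0
    for _ in range(steps):
        x += gain * (goal - x)  # feedback shrinks the remaining gap
    return x

runaway = reinforcing(1.0, 0.1, 20)      # compounds upward, without limit
settled = balancing(1.0, 10.0, 0.5, 20)  # closes in on the goal of 10
```

Twenty passes of the reinforcing loop turn 1.0 into roughly 6.7 and keep climbing; twenty passes of the balancing loop leave x within about a hundred-thousandth of its goal. Standard causal-loop practice builds everything out of combinations of these two motifs -- which is exactly the limitation I'm pointing at.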

Here, I fear, the power of the ability to turn on the computer and have it crunch through ranges of estimated parameters short-circuits the process I would recommend -- namely, putting the CLD up on the wall, standing back a few paces, and looking amid all the N-factorial combinations of N "loops" for a few "self-aware, self-protective, self-repairing, goal-seeking, feedback-mediated control loops."

At the risk of hitting a lot of hot-buttons, let me say that these, in my mind, constitute a kind of proto-life, which is to say that they are active agents that "might as well be alive" because they satisfy the usual definitions of "life", which is to say that they:

  • consume energy
  • are self-repairing
  • adapt to their environments
  • are self-aware
  • seek something akin to homeostasis when disturbed
  • resist being shut down or shut off
  • are capable of learning and becoming smarter
BUT, because the entities I'm describing are creatures of the "control" domain, what flows in their equivalent of "veins" or "neurons" is control information and in particular real-time, real-world phase-lock signals. And, as I've emphasized, control information can easily jump from one medium to another, so it's tricky to track it down and "see" it the first time, although once you "see" it, like most visual patterns, you can keep on seeing it.

So, once again, I bring in the hand and water faucet picture, and the person-driving car picture, to illustrate the number of different stages that a single "control loop" can pass through and draw together into synchronous action.







Aside: The fact that the action is synchronized makes all the difference in the world -- it is the difference between a laser beam and incoherent light, where one cuts steel and the other is just a little bright to look at. Most of our real-world measurements, unless we are into Very Long Baseline Interferometry (VLBI), discard "phase" information and absolute time as "irrelevant." VLBI can work even if the telescopes in different countries are not connected physically, provided a very accurate record is kept of each signal and the records are synthetically reconnected in virtual space inside a computer. But it does require recording not just "amplitude" or "power" but also the phase component of the signal at each antenna -- information we normally discard.

The big Y-shaped array of "dishes" that Jodie Foster's character uses to listen to the stars in the movie "Contact" is the Very Large Array, a connected cousin of VLBI; the spacing between the dishes, which sit on railroad tracks, can be altered to change the array's resolution for different observations.
Another example, with some smaller feedback loops competing for our attention, is the "story telling feedback loop" picture I put up yesterday, repeated below. Don't try to dig into the details. Just notice that there is a big loop that covers most of the diagram, surrounding the light blue bar -- that is the main, persistent, "I am a person" kind of loop. Then there is a smaller loop with a shorter lifetime managing incoming visual input at the lower right, and two competing permanent-fixture loops at the left: one driven by higher levels reaching downward and trying to raise this person's goals; the other driven by the person's frustration limit -- protection against overheating, basically -- which tries to lower the goals again until they are achievable.


It requires reflection, and explicitly asking the question, to realize which loops are self-aware and self-repairing if damaged.

Consciousness certainly keeps shutting down every night, but it recovers the next morning, usually. The visual system has many small loops that leap into action when triggered, then go back to sleep. If we lost them we'd be essentially blind in a sea of unfiltered noise.
Aside: I'm not sure about the loop I drew in the upper right, where the person's actions are echoed back to them, with various lag times, by different parts of the environment. Maybe that's just the classic, passive "environment" that's envisioned by epidemiology -- with the same intelligence and adaptiveness as a canyon's walls. Or maybe the world develops its own set of ruts and habits around reacting to you, as an irritant is surrounded by pearl inside an oyster, and those prove so useful that they are endowed with self-aware, self-sustaining, independent status to keep an eye on you and provide very fast feedback -- as if you'd touched a stove -- when you try to harm the world. That's a sort of meta-sociological question touching on guardian processes, for some other day.

Cutting to the chase: Hypothesis: Because self-aware, self-repairing agents survive noise and damage that will disrupt other, dumb, passive, accidental "loops", they will tend to end up dominating the landscape -- even if they don't reproduce or form support alliances. But, they will tend to form support alliances too.

Similar hypothesis I put forward a month or so ago: Because organizations tend to find and fix small-scale, non-complex problems, if we assume problems arise due to noise at every level in some equal amount, then the large-scale, "complex" ones will end up dominating the landscape, because those are the ones that keep getting put off and not addressed or fixed.
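That hypothesis is easy to sketch as a toy simulation. All the numbers here are invented for illustration: five problems arrive per day across a range of sizes, and the organization only has capacity to fix the three smallest open ones.

```python
import random

def backlog_after(days=1000, arrivals=5, capacity=3, seed=42):
    """Problems of size 1-10 arrive uniformly; the easiest get fixed first."""
    random.seed(seed)
    backlog = []
    for _ in range(days):
        backlog.extend(random.choices(range(1, 11), k=arrivals))
        backlog.sort()
        del backlog[:capacity]   # small, non-complex problems get addressed
    return backlog

backlog = backlog_after()
average_size = sum(backlog) / len(backlog)
# The surviving problems skew heavily toward the large, complex end.
```

After 1,000 simulated days, everything small has been picked off, and the average size of what remains is well above the average size of what arrived. The big, complex problems are what's left dominating the landscape -- purely as a consequence of fix-the-easy-ones-first.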

Synthesis of those two: In any long-lived multi-level complex adaptive system, large-scale, complex, active, self-aware, self-repairing control loops will end up dominating the landscape and being the primary shaping force.

And, sad corollary: Until we build scientific tools that can glance at a picture like M.C. Escher's Waterfall, and "see" at a glance "where" it is "broken", we will continue to be plagued by these large scale active agents.

We are, in fact, most likely swimming in a sea of semi-alive "macrobes" -- a concept probably as distressing as Pasteur's "germ theory", which had a sea of "microbes" swimming inside us. They would certainly be as "alive" and as annoying as viruses, and if they were not well, we would feel it, being, as it were, inside the "whale".



Of course, before going into anaphylactic shock at the idea of macrobes, I should point out that you already are familiar with some of them, as a big "yawn." Those would include persistent, self-aware, self-repairing, energy consuming, possibly self-extending macro-agents known as "families", "corporations", "cultures", "religions", and "nation-states." If the Gaia theory is correct, it would also include the Earth as a whole. If religions are even partly correct about some big issues, it continues at scales much larger than the Earth. However, the larger such an agent would be, the more slowly changing it would be, and at some point we could locally treat it as "fixed" or a "constant" for planning daily activities.

So, if sociologists, and even untrained civilians recognize that corporations and countries exist, what's the big deal here? What contribution to our collective wisdom am I suggesting this framework brings to the table?

Again, the most important point I'm making has three parts:

Hey, everyone struggling with methodologies for feedback and multilevel systems in Public Health! Control System Engineering already solved that! Read the Literature!

and

Hey everyone in Control System Engineering! You have some potential new clients over here in Public Health!


Finally this one: Children! Stop fighting!

Public Health? Stop picking on corporations -- the healthy ones hold your planet together right now. And the diseased ones need your insight and techniques to be healed -- once you master multi-level organism healing techniques. And, hey, CEOs? Please stop kicking Public Health in the shins -- they're trying to keep your workforce alive, healthy, and productive, and besides, they're closer than anyone to understanding The Toyota Way in terms of a healthy multilevel organism. Religion, please stop picking on Science, and vice versa!

And, everyone, there's a big qualitative difference between a "distal factor" and the big toe on your other foot, so before you bite down .... oh, never mind. You'll find out soon enough!

By this model, there really is only one multi-level life form occupying this planet, and while it is the job of clinical medicine to heal people at the one-body level, it is the larger, distinct job of "Public Health" to deal with disharmony at any level -- between cells and cells, people and people, cultures and cultures, nations and nations, corporations and corporations, departments and departments, silo and silo within hospitals, etc.

Because all that will persist is actually connected through all those loose-couplings (amplified by compounding feedback loops over long times), in a "control" or "regulatory system" sense, it's all only ONE body. We share parts of it, or levels of it. But it's hard to have your own foot have gangrene and not be affected by it, sooner or later.

The biggest problem right now is that the healers of society cannot easily see the feedback loop connections and evaluate the strength of each link and of phase-locked groups of links. That's the missing toolset. It already exists, but it is indexed in a different literature where public health seldom treads. Now, with the new competency (2006) for MPH students from the ASPH, the focus on "systems thinking" will lead us there. The March 2006 AJPH is a start, but our work is cut out for us.

It puts a kind of different light and torque on things if we assume there is only one Body here, with many pieces and parts, that we're trying to heal and make right. It won't do to fix most of the body but leave a tooth or limb infected -- that'll turn around and bite "us".

If every level and pair of levels had different rules, this would be a huge problem, probably intractable. BUT, if every level and pair of levels has the SAME set of rules in "control space", then instead of many levels being harder to "solve", suddenly many levels becomes more hints and easier to fix. We have one equation and one unknown and 50 clues, not 50 equations with 50 unknowns and no clue.

That's WAY BETTER. Just align the fragmentary knowledge of the control structures of each level on a mental transparency, then put them on top of each other at the same scale and orientation, and look through the whole set, and all the clues will line up and reveal the full picture that applies at every level, even though we only have a little bit of it right now on each of those levels.

The prospect is compelling. It's a win-win-win solution, and we might just be able to get every field to give up 1% of its budget to work on this single problem that is relevant to working out more details in that field, for every field. It could be politically acceptable. It might fly.





Another gentle introduction to control loops



So, my favorite reader tells me the diagram of the loop with 432 boxes or whatever "left her cold."

So, here's a little more gentle ramp up to that diagram I did yesterday. The classic model of "causality" is that one thing, A, causes another thing, B. The "causality" goes only one way. B has no impact on A.

But, sometimes there is "feedback" and B does affect A.
Often, "feedback" is mistakenly described as "positive" or "negative". In control theory, feedback is just information. But since those terms are pretty common, let's review what users mean by them.

Positive feedback (above): things reinforce each other and we get an upward spiral of whatever it is. Both sides chase each other upwards. A "virtuous spiral".


Negative feedback (above): B keeps raining on A's parade, and after a while A gives up and stops trying. But feedback loops actually have many more behaviors than those two. Here's a third possible outcome -- an oscillating condition that keeps going in "cycles", like the economy or the number of birds in a given region.
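A toy version of that third behavior (the names and constants are mine, purely illustrative): let A push B up while B pushes A down -- the simplest loop whose net effect is neither an upward spiral nor a die-off, but a repeating cycle.

```python
def cycling_loop(steps, k=0.2):
    """A raises B; B lowers A. Around and around, we get a cycle."""
    a, b = 1.0, 0.0
    trace = []
    for _ in range(steps):
        a, b = a - k * b, b + k * a   # each variable reacts to the other
        trace.append(a)
    return trace

trace = cycling_loop(100)
# 'a' keeps swinging above and below zero rather than settling down
sign_changes = sum(1 for x, y in zip(trace, trace[1:]) if x * y < 0)
```

Over 100 passes, a crosses zero about a half-dozen times -- the bird-population / economy pattern, produced by nothing more than two one-way causal links joined into a loop.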

Actually, again, in "control theory" the thing called "feedback" is just information. By itself it has no "positive" or "negative" content. All that meaning is actually supplied by some active entity, maybe a person, who has to interpret what that news means.

Take a new driver trying to keep the car on the road. He has a "goal", to stay in the right hand lane (in the US at least). He sees what's going on and responds by turning the steering wheel one way or another. (This is much more dramatic if he is learning how to drive in reverse and steer going backwards for the first time.)




OK, now to get from THERE to the picture I used, or the format used in "control theory" textbooks, we need to identify a few more familiar parts and rearrange the pieces a little. Here goes. Let me add a "command" box with "turn left!", and a line called "visual feedback".

OK, so keep on unfolding, and add an explicit eyeball. (That's what that is supposed to be in the lower right.) In general the feedback is not always visual -- it could be sound or touch or whatever -- so we break out a "sensor" of some kind (in this case, an eyeball).

And, finally, we make the step of removing all my effort at drawing cars and people and roads, and get a very dry, abstract-looking diagram like the one below. The "loop" is the part in green. The "controller" is a general term; in this case, it's the driver of the car. Sticking off the loop, like some kind of side group in chemistry, are two boxes -- the "goal" of the loop, and some external conditions that make life harder (usually), such as the fact that the road bends sharply to the left ahead.
In real life, there are additional feedback paths if we look over a longer time window. For example, by steering appropriately, the driver can choose one road instead of another, which indirectly changes the "road" box.

And, the thing that is following the loop around over and over is a very abstract thing - it's "control". "Control" flows, and like electricity, it can move through solid objects like current flows through solid copper wire. Control can happily leap from one medium to another, now being in a steering wheel position, now in a car position, now in light travelling to the driver's eyeballs, now in neural impulses going to the arm, etc. Control doesn't care who it hitches a ride from.

As I've said in earlier posts on this same subject, this makes it hard to find all the parts of a control loop sometimes. I trace out the parts in a person getting a glass of water out of a faucet in this prior post. (See "controlled by the Blue Gozinta")

OK, but why go to all this effort, you may well ask. The answer is that this can "solve" many problems for us. If we can rotate and stretch the problem around until we can see it as a control loop, then there are software programs that can tell us what could or will happen next, if we push here versus there. We can tweak and tune them. We can design them and redesign them. We can draw on a 100 year deep literature on "what goes wrong and why" so we know a classic problem when we see it. We can realize "Oh this is because we have a lag time between when we turn the boat's helm and when the boat gets around to responding" so we know what needs to be fixed or accounted for.

Or, it can help us design feedback-based interventions, as the IOM has suggested in "Crossing the Quality Chasm" for "microsystems" -- small teams of health care providers. I used a feedback model in my "Capstone" to inform the suggestion of what factors the diabetes team should be considering routinely and to be sure it forms some kind of coherent and complete set of topics while being as compact as possible.
Also, from control systems textbooks we can see quickly why "dashboards" have a very dangerous downside if the data being used to steer an organization is two months old -- steering by stale data can produce wild oscillations.

And, we can design "regulatory processes" and "regulations" that have a snowball's chance in a hot place of actually working, because we did our homework and computed the necessary numbers in the underlying feedback control process we're trying to tweak. Or we can realize there's no point in pressing there, because that point won't budge, and we need to save our pennies and find a better intervention point. We can model "what if" behavior.

That's what this is all good for. It connects a huge problem domain of public health to a huge solution set already found and in place in "control system engineering" on the other side of campus. It makes the powerful analysis tools engineers already use daily to design jet planes suddenly useful for redesigning health care reimbursement policies. Etc.

As I've said before, to public health and control system engineers - you guys REALLY need to meet each other and form an alliance.

Monday, June 04, 2007

Controlled by the Blue Gozinta



For those who are following this discussion of feedback loops, we're most of the way through the basic description of the insides of such a loop.

I showed how a microphone and speaker, or getting a glass of water represented kinds of feedback loops, and made a distinction between dumb feedback loops and smart - goal seeking - feedback loops, also known as control loops. And we showed how control loops are everywhere in nature, made up of almost any substance - animal, mineral, vegetable, light, chemicals -- and they don't care because the principles work regardless. Control is to the loop as a song is to the instrument - you can play the "same" song on almost any instrument, or sing it, and the "sameness" is there.

So, I need to give a name to the four parts that I had in the upper left in this picture I drew yesterday:



The basic diagram that Professor Gene Franklin uses in the book "Feedback Control of Dynamic Systems" is similar to that block diagram, except for pulling the "GOAL" out and lumping the three other boxes "comparer", "model", and "decider" into a single blue box that is labelled "?" in his diagram of a car's cruise-control system for maintaining a constant speed.


So, the diagram is from that book, as quoted by me in slide 16 of my Capstone presentation on patient team management of diabetes control. I think you may need to click on the picture to make it zoom up large enough to read the words.



In any case, the only box on that diagram that is blue is the one that the feedback "goes into", so I'm calling it a "blue gozinta" as just a funny name that rhymes and that no one else is using.

Besides, the word "controller" rings all sorts of bells I didn't want to ring, echoing back to parents and school and bosses, etc.

Well, I guess I failed in that already, as I gave the example of "negative feedback" of a student getting "graded" by a teacher for performance on an "exam", and receiving a failing grade of zero percent, which could be quite discouraging and dampen enthusiasm for the subject.

Franklin's picture has two other minor differences from mine. First, he adds "sensor noise" to the bottom "speedometer" box, to emphasize that this loop is all built around a perception of reality, not reality, and the thing that does the perceiving may not be perfectly accurate. That's a pretty good model of human beings or any other regulatory agent or agency.

As John Gall would say in his book Systemantics -- inside a "system" the perception IS the reality. The medical chart IS the patient.

That effect is so strong that the patient can be dying in the bed while caregivers are so busy looking at monitors showing something else that they don't see the problem -- which is part of what went on in the tragic Josie King case, in which an 18-month-old child slowly died of thirst in the middle of one of the best hospitals in the world. So, yes, we had better remember on our diagram that what our senses tell us is going on may be very wrong. We'll come back to that in a big way when discussing how human vision and perception get distorted by all sorts of invisible and insidious pressures -- especially in groups with very strong beliefs.

The other difference between Franklin's diagram and mine is at the upper right, where he adds an incoming arrow labelled "road grade". This means the slope of the road, and how hilly it is, not what we think of the road. His point is that the behavior of the car, and the speed it ends up going after we have set the gas pedal where we think it should be, ALSO depends on factors outside the car -- such as whether it's going up a steep hill.

That will also be a universal pattern. The results of our actions are mixed into the impact of outside actions, which makes it hard to disentangle the two from just looking at the end result. The good news is that there are software programs that can disentangle those two for us.

Anyway, the whole point of this post is to get the "blue gozinta" identified.

This little blue box is the heart of the problem, because "feedback" is really just information, and is not intrinsically "positive" or "negative". In this diagram, the "feedback" is the speed of the car, as measured by the speedometer. That's just a number.

The number becomes "positive" or "negative", leading to "more gas!" or "more brake!" actions, only because the blue box, the controller, the blue-gozinta, compared that number to the desired speed, and saw that it was less than desired. Then the controller had to check a mental model and use some rule like "if we're going too slow, push on the pedal on the right!"
"If we're going too fast, push on the pedal on the left!'

As anyone who has ever taught someone else to drive knows, that turns out NOT to be the actual rule drivers use to control the gas pedal. The behavior those rules and that simplistic model of the world produce is this: hold down the gas until the car shoots past the correct speed, then slam on the brake until the car drops back below the desired speed, then slam on the gas until it overshoots again on the way up, and so on. If that's the only rule in use, the car jerks back and forth forever in an unstable and very unpleasant oscillation.
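That failure is easy to reproduce in a few lines (my own toy numbers, not from any textbook): give the naive two-rule controller a car whose engine and brakes respond with a little lag, and watch it hunt forever.

```python
def naive_cruise(target=60.0, steps=200):
    """Bang-bang rule: full gas below the target speed, full brake above it."""
    speed, accel = 30.0, 0.0
    trace = []
    for _ in range(steps):
        pedal = 2.0 if speed < target else -2.0   # the simplistic rule
        accel += 0.3 * (pedal - accel)            # engine/brake respond with lag
        speed += accel
        trace.append(speed)
    return trace

trace = naive_cruise()
# Even in the last 50 steps, the speed is still swinging back and
# forth across the 60 mph target instead of settling onto it.
```

Nothing decays: the lag between pedal and response guarantees a sustained overshoot-and-slam cycle, no matter how long the simulation runs.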

However, we can probably all think of organizational policies or laws that have exactly that behavior, and are either too harsh or too lenient, or something, and keep on going back and forth and never manage to get the right setting.

It has been hard to recognize those problems and go
  • Hey, I've seen that behavior before!
  • That's a "control loop" behavior.
  • The way to fix it is to change what goes on in the blue gozinta box.
  • What part of the process / law / policy I have corresponds to that box?
  • That's where the problem can be fixed.

It's really important to see that there is nothing wrong with the car. The gas pedal works fine, and does not need to be replaced. The brake pedal works fine. The speedometer (in this case) works fine. What is wrong is inside the blue box, and is subtle - it's the "mental model" or rule that is used to decide what action to take depending on what information is coming into the box from outside.
And, the realization is that a very simple rule, a dumb rule, doesn't accomplish what we want, but a slightly better rule will make the very same parts behave correctly together.
The better rule requires a little more brains inside the box. We have to track more than just how fast we are going and how fast we want to go -- we have to figure out how fast we are converging on the goal, and start letting up on the gas as we get near the target speed, before we even get there.
The controller needs to "plan ahead" or "look ahead" and react to something that hasn't happened yet.
This seems to fly in the face of science and logic. How can a dumb box react to something that hasn't happened yet? We can't afford the "glimpse the future!" add-on module, at $53 trillion.

Ahh, but here's another wonderful property of feedback loops. What goes around comes around. We've been here before. Nothing is new under the sun. The past is a guide to the future.

Either putting out the garbage can causes the garbage trucks to come, or we can learn the routine well enough that we can predict when the trucks will come based on past experience. It turns out, in a loop, the past and future become very blurred together.
Being able to recall the past IS being able to predict the future, in a control loop.
We don't just go around a control loop once or twice -- we go around a control loop thousands or millions of times. So, if we have any rudimentary learning capacity at all, we can start to notice certain patterns keep happening. We can detect what always seems to be happening JUST BEFORE the bad thing happens, and use THAT as the trigger event to react to instead.

So, we have a second rule that gets added by experience -- "When you get near the target goal, start easing up on the pressure to change and start increasing the pressure to stay right there and keep on doing exactly what you're doing."
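Here is the same toy car with that second rule added -- reacting not just to the gap but to how fast the gap is closing. This is a crude proportional-plus-anticipation sketch of my own, not Franklin's controller:

```python
def smarter_cruise(target=60.0, steps=200):
    """Ease up on the pedal as the gap closes, before reaching the target."""
    speed, accel = 30.0, 0.0
    prev_gap = target - speed
    trace = []
    for _ in range(steps):
        gap = target - speed
        closing = prev_gap - gap           # how fast we're converging
        pedal = 0.2 * gap - 1.5 * closing  # back off as convergence speeds up
        accel += 0.3 * (pedal - accel)     # same laggy engine as before
        speed += accel
        prev_gap = gap
        trace.append(speed)
    return trace

trace = smarter_cruise()
# Same car, same lag, same parts -- but now the speed glides up to 60
# and stays there, with no overshoot-and-slam oscillation.
```

Nothing in the car changed; only the rule inside the blue gozinta did. Tracking "how fast are we converging" is the cheap, legal substitute for the $53 trillion glimpse-the-future module.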

This basic ability to learn from experience is the simplest definition of "intelligence" we can come up with. Do you recall the joke about Sven and Ollie that Garrison Keillor told?

Sven comes by Ollie's house and sees that Ollie has both ears bandaged.
"What happened?" he asks.
"Well", Ollie replies, "I was ironing and the phone rang and I picked up the iron by mistake and held it to my ear!"
"Oh.... So, what happened to your other ear?"
"Ahh.... once I was hurt, I tried to call an ambulance."
So, the moral of this post is that the key to the behavior of a system being managed by a feedback control loop is the blue box, the "blue gozinta."

Very simple changes to that box can change a horrible experience into a pleasant ride.

The heart of "Control System Engineering" is figuring out what to put in that box.

For human beings, a second major problem is that little tiny addition of "sensor noise", and figuring out how to prevent, reduce, or account for distortions in perception that can cause the system to be responding to a perception, not a reality.

And, for both, there's another very subtle but very well understood problem, and that is "lag time." I didn't draw "lag time" on the picture but I will in the future.

If we're trying to drive based on the speedometer reading from 5 minutes ago, things will not go well for us. In fact, the more we try to "control" things, the worse they can get.

This is a huge problem. A perfectly stable, perfectly controllable system can become an unstable nightmare and fly out of control simply because there is too much lag between collecting the sensor data and presenting the picture to the controller.

Or, in hospitals and businesses, it's popular now to have a "dashboard" that shows indicators for everything, often in exactly "speedometer"-type displays.

The problem is, the data shown may be two months old. We are trying to drive the car using yesterday's speedometer reading at this time of day. When I state it that way, the problem is obvious. But, I can't find any references at all in the Hospital Organization and Management literature about the risks caused by lag times in dashboard-based "control".
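The dashboard-lag danger is simple enough to demonstrate numerically (invented numbers again): a manager nudging a metric toward a goal of 100, using a dashboard reading that is several reporting periods old.

```python
from collections import deque

def steer(delay, steps=60, goal=100.0, gain=0.6):
    """A balancing loop that corrects toward goal using a stale reading."""
    x = 0.0
    dashboard = deque([x] * (delay + 1))   # readings queue up; oldest is shown
    trace = []
    for _ in range(steps):
        reading = dashboard.popleft()      # the number on the dashboard
        x += gain * (goal - reading)       # manage to that (old) number
        dashboard.append(x)
        trace.append(x)
    return trace

fresh = steer(delay=0)   # settles smoothly onto 100
stale = steer(delay=3)   # the same loop swings wildly past 100 and back
```

With fresh data the metric locks onto the goal; with a three-period lag, the identical rule with the identical gain produces exactly the wild oscillation the control textbooks warn about.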

At this point, even with just this much understanding of control loops, you, dear reader, should be starting to realize how many places around you these loops are being managed incorrectly.

We're spending a huge amount of effort trying to improve the brakes and gas pedals, when the actual problem is a lag time in the messages to upper management, or that sort of problem.

None of these problems need to be in our face. These are all "Sven and Ollie" problems that we can fix with what we know today.

But that will only work if we're really sure about how control loops work, and how they fail, and can make that case to the right people in the right way at the right time.

Take home message -
Even a very basic understanding of control loops can help us ask the right questions, and realize where the problems may be lurking instead of where they appear to be at first glance, so we don't waste our time barking up the wrong tree.

Especially in complex organizations, the generator of failure is usually not that labor failed or management failed, or that any one person did something "wrong." What is killing us now is that we have a huge collection of "system problems" that are due to things like "lag time" and "feedback". Every piece of the system is correct, but the way they behave when connected is broken. There is a "second level" of existence, above the pieces, in the "emergent" world. Things can break THERE. Most of the systems humans built are broken there, or at least seriously in need of an engine mechanic, because we didn't even realize there WAS a THERE.

Worse, "management" still thinks that discussion of "higher level" problems means that someone is pointing the finger at THEM, and that leads to bad responses.

The problems are subtle. We won't see them unless we spend a little time studying how control systems work, and how they fail. Then, the patterns will be much more obvious, and our efforts will be much more likely to be successful. And, then we can stop blaming innocent people for problems that aren't their fault.

It is, however, in my mind, the fault of the whole enterprise of Public Health if this kind of insight is not taken advantage of when designing regulatory interventions or in helping individuals try to "control" behavior. That, in my mind, would be a clear failure of due diligence.

Or - it would be, if these concepts had been published in the peer-reviewed literature, which is the only thing they read and pay attention to.

Which says, it's my fault for not publishing this and your fault, dear reader, if you don't get after me to do so.

After all - I depend on feedback from my readers to control my behavior. So, what I do depends on what you do.

Wow, doesn't that sound familiar?

Sunday, May 13, 2007

The Sixth Discipline of Learning Organizations - part B

Yesterday, in my post The Sixth Discipline of Learning Organizations, I reviewed a few of the lessons Peter Senge's book The Fifth Discipline teaches that we can learn from thinking in circles, not in lines.

There are other properties of loops that are critical, but as subtle as the difference between a spinning bicycle wheel (a gyroscope) and a stationary one, or between throwing a plate or playing card that is spinning rapidly and one that is not. At first glance you might say - it's just spinning, so what? But the thrown plate may go 20 feet and the spinning Frisbee 100 yards.

Spinning rapidly in a circle matters. Not all feedback is the same: the rate of a feedback loop (loops per second, or per day, or per year) matters just as much as its presence.

But this morning I want to start looking at vertically oriented loops in hierarchically structured organizations - for which a triangle or pyramid shape is more helpful than a circle for discussion.
(Imagine the pyramid shown on the back of every US dollar bill.)

Say that the "boss" is the eye on the top of the pyramid, and that the boss's orders come down the right side, through the "chain of command" (which is actually a branching tree shape).

At the bottom of the organizational pyramid, where it actually touches the reality and "ground truth", employees attempt to carry out those orders, and imagine that activity moving us from right to left across the bottom of the pyramid. Finally, status reports ("mission accomplished!") move back up the chain of command being consolidated at each level all the way back to the boss at the top. So, we have a vertically oriented loop, or cycle, because now new orders come down the chain and that loop pattern repeats.

So far, so good.

In a static, simple world, if all employees except one named "Joe" report success, and Joe keeps reporting failure, the classic model would say that the action management needs to take is to replace Joe. The model says all employees are interchangeable machine parts, and if a part fails to do its job, the part is broken and should be replaced. This is a simplified version of McGregor's "Theory X" of management, very popular in the machine age, from 1850 to 1950.
Another implicit assumption is that the boss completely understands the tasks to be performed, and is the resident expert. If people don't "perform," it must be because they are "lazy," and what is needed is a "bigger whip." Employees are told to "jump" and they don't need to understand why or agree -- they just need to reply "Yes sir, how high, sir?"

That model worked for early industrial models, such as workers in textile mills, or slaves picking cotton.

But, in a dynamic, complex world, that model breaks down. Actions and responses that worked yesterday suddenly no longer work. The "cheese has moved." The organization has to learn new responses to the same old inputs. The response of the outside world to an action is no longer predictable, and has to be judged by rapid feedback - a quick poke to see what happens, and learning from the result. We move into McGregor's "Theory Y" of management, where the expertise is now at the bottom of the pyramid, and front-line troops are as likely to reply "What bridge? The bridge is gone!" as "OK, yes, we crossed the bridge." Now an ever-changing set of facts, or dots of information, has to be aggregated upwards, and "reporting" has to change into continuous "sense-making" of shifting patterns and images of the battlefield truth.

Again, this model is not that strange. It's the basic model we use when we have to move a bit of food from the table to our mouth on a very windy day - we move the hand a little, see where it is now, move it a little more, see where it is now, and so on, in a very rapid sequence that automatically adjusts for the wind. If we don't adjust for the wind, the hand and food will miss the mouth on the downwind side. We don't "compute" wind velocity and use Newton's laws to figure out what to do - we just do it and watch while it's happening. It's no big deal. It's the basic "cybernetic loop" of tiny intent, tiny action, tiny perception, repeated rapidly over and over. It's a loop we can use to cross an unfamiliar room in the dark. Move slowly, stay alert and aware, and adjust as you run into things. It works. It doesn't require quantitative analysis or calculus or a computer or a PhD in robotics. It just requires using a very basic action and sensory loop over and over.
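That food-on-a-windy-day loop can be sketched in a few lines. This is a toy model of my own, with made-up numbers for the step size and the gusts: each tick, the hand perceives the remaining gap to the mouth, moves a fraction of it, and then gets shoved by an unpredictable "wind." No wind velocity is ever computed.

```python
import random

def bring_food_to_mouth(mouth=100.0, step_fraction=0.3,
                        max_gust=5.0, ticks=60, seed=1):
    """Tiny perception -> tiny action, repeated rapidly.
    The hand never models the wind; it just keeps re-aiming."""
    rng = random.Random(seed)
    hand = 0.0
    for _ in range(ticks):
        perceived_gap = mouth - hand              # tiny perception
        hand += step_fraction * perceived_gap     # tiny action toward the goal
        hand += rng.uniform(-max_gust, max_gust)  # the wind shoves the hand
    return hand

print(round(bring_food_to_mouth(), 1))  # lands near 100 despite the gusts
```

The robustness comes entirely from repetition: each correction only has to be roughly right, because a fresh perception follows immediately and the next tiny action cleans up whatever the wind just did.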

And, like any feedback loop, causality in the normal sense disappears. Motion alters perception and perception alters motion, and the two become, in a very real sense, a single motion-perception act - with the loop itself as the actor.

Again, no big deal. So why is this important?

The big deal is that our society is in the middle of adjusting to this change from "Theory X," with a stable, static world and expertise at the top, to "Theory Y," with a very dynamic, unknown world and the expertise at the bottom. In fact, because of the properties of loops, there really is no longer much of a "top" and "bottom" in the classical Theory X sense of the terms.

Just as the level of the water could be seen to control the hand on the faucet, the staff at the bottom of the chain of command can be seen to be controlling the General at the top of the pyramid -- and both those models are wrong, because it's actually the shape of the feedback loop that now has taken on a life of its own, on a whole different scale, and is controlling both of them.

Senge's point, and mine, is that most of the organizational problems we see around us are because we haven't managed to get that much right. In some health care organizations - an extreme case of the expertise being on the "bottom" of the pyramid - top management still thinks in "Theory X" terms, sees itself as the expert in everything, and "gives orders" to move in a certain way. The bottom reports back "No -- what bridge?" and the boss sees this as stubbornness, stupidity, or hostility, and things just get worse from there.

Arguably one of the best "learning organizations" around is the US Army. I've mentioned many times before the role of Doctrine in FM22-100, the US Army Leadership Field Manual. The pyramid model I just described is the theoretical basis for the doctrine, and every field action is supposed to be followed with a "lessons learned" session. News, particularly surprising news about a misfit between upper management's concept of where the battle or bridge should be and what actual boots on the ground see in front of them, is supposed to be free to travel upwards. Management, as it were, is supposed to listen to the staff and learn what's actually going on, not what management imagined yesterday was going on. It's not insubordination to say "Sir, what bridge, sir?"

By simple trial and error experience, repeated millions of times, the Army has finally figured out what works and what doesn't and come to some conclusions that are startling to the Theory X old guard, but not at all surprising to the Theory Y thinkers. For one thing, listening has to go upwards, at every level. It's as important that superior officers listen to junior officers as vice versa. If new conditions at the bottom don't result in a new picture of what's going on at the top, the whole pyramid will simply drive off a cliff or otherwise carry out actions that bear no resemblance to reality.

And, because the picture of reality is not perceived directly, but has to come up the chain of command and be re-filtered and consolidated at a dozen different levels, that process has to be incredibly accurate, frank, honest, and unbiased. Even a 10% "adjustment" in facts, repeated over and over at each level of consolidation, can result in a reported "reality" at the top that is 180 degrees out of whack.
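The arithmetic behind that claim is easy to check. In this toy calculation of my own (the 10% per level and the dozen levels are the illustrative numbers above), each level passes along only 90% of the inconvenient facts, and the shading compounds:

```python
levels = 12      # "a dozen different levels" of consolidation
honesty = 0.90   # each level passes along 90% of the facts (a 10% "adjustment")

signal = 1.0     # ground truth as reported at the bottom
for _ in range(levels):
    signal *= honesty

print(f"after {levels} levels: {signal:.2f} of the truth remains")  # about 0.28
```

Barely a quarter of the original signal survives the climb. Even at 95% honesty per level, the top hears only about half the truth; small habitual shading compounds mercilessly.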

In a profound sense, the key word is integrity, and not just integrity when the going is easy, but integrity when the going is tough - not because of enemy action but because of "friendly fire from above". That kind of integrity is also part of the other key word in the doctrine - character.
If the information flows freely and rapidly and can spin up to a high rate of rotation, as with a bicycle wheel or gyroscope, this whole design pattern becomes very stable, agile, nimble, and capable of navigating the most bizarre terrain as events unfold in surprising and unexpected ways. BUT, if there are pockets of resistance to the flow of information, such as cover-ups, that model breaks down. Or, if there are superiors who think "superior" means they know everything and they don't need to learn from their men, the model breaks down. So, another few important words are honesty and humility.

See US Army Leadership Field Manual FM22-100
and What relates Public Health and the US Army?
and the whole posting from my Capstone slide 7, Theories are Changing, which has twenty more references to the literature on high-reliability organizations in nuclear power plants, chemical plants, aircraft cockpits, and hospital intensive care units, and what makes them actually work in practice. It just keeps coming back to the same thing: the same model that's right in front of us, but that we haven't finished mastering.

And, again we have a place where our religious heritage has been observing what makes society work for thousands of years and has more wisdom to offer on this than scientists, although the science is beginning to catch up at last. Our religions have been stressing virtues - integrity, honesty, compassion, humility, etc. - for centuries but we haven't really been listening or haven't thought that "mattered any more in the modern age." Actually, the basic cybernetic model is ageless, and true at any size and scale. It's going to be something we have in common with aliens from other worlds when we meet. It's a universal truth every bit as solid as other physical "laws" we rely on.

These are truths that are seen by Hindus, by Muslims, by Christians, by Jews, by atheists, and by learning organizations like the US Army. They can serve as a basis for unity among even such diverse groups and cultures. They can link science and religion without either side having to admit they were wrong about something and lose face.

Grasping and implementing that truth certainly looks like it could give us far more "bang per buck" than investing in new technology, new weapon systems, new gizmos and gadgets, and other ways to shift the detail complexity around.

Also, see my early post Virtue drives the bottom line with many links at the end to such literature. (excuse the formatting near the top of that post - I'm technically challenged by the html editor.)

Another author's take on this subject is "Spirituality in the Workplace - The Sixth Discipline of a Learning Organization," by Harish Midha at the University of Toronto.

Peter Senge's latest book is Presence: Human Purpose and the Field of the Future and readers interested in that book might also be interested in Stephen Covey's book The Eighth Habit. All these books teach the same gospel - that we are going to have to come to grips with the nature of community to "make it" through our social problems of this century, and that community requires us to realize the power and impact of "virtues" when amplified by the feedback properties of complex systems.

Another post I wrote exploring the role of community, virtues, and organizational learning and agility is The Importance of Social Relationships (short)

I also recommend: Pathways to Peace - beautiful slides and reflections to music on the value of virtues

A general summary of what I think are my best dozen posts on related subjects is here.

This is also relevant:

Spiritual solutions for technical problems

Enjoy, and please, for reasons this whole post embraces, send me feedback! A human can't sustain a thought without some measure of social support! Criticisms and objections are welcome. Use the comment box below, or send to my email in my "profile" box above.

Wade