
Friday, September 19, 2008

We need an improved "invisible hand", Adam

David Brooks wrote a piece in the NY Times this morning on regulation of the financial industry.

Incidentally, there is essentially no engine in any product today that does not have a "controller" as part of its design, there to improve stability, response time, and so on. Without a controller, no elevator could stop at a floor without an abrupt "jerk." The design of such controllers is the subject of the field called "Control System Engineering."

A sample textbook is this one: Feedback Control of Dynamic Systems, by Franklin, Powell, and Emami-Naeini. These are the concepts we need for a "governance" or "regulatory" system that actually works as advertised.

Control system engineering is to complex systems what "civil engineering" is to automobile bridges across rivers -- it is completely general and non-political, it won't tell you where to build or what to build with, but it WILL tell you the required properties of the materials and that some things will simply not work. You can't build the Brooklyn Bridge out of plastic, for example, regardless how cheap it is. You can't design a regulatory system that depends on feedback, for another example, and then blind the sensors that are supposed to determine the feedback.

The advantage of such engineering is that it focuses on issues such as "stability" (a big one right now) and gives power to insights, such as the fact that blinding the eyes of a system will make it drive off the road for sure.

Search "feedback" or "system thinking" in this weblog for other posts on such matters!
====================================================


One obstacle to a good solution is the incorrect assumption that a process "under control" equates to a small group of people doing "the controlling." Let's keep those separate.
The question of whether we need more "governance" should be distinct from who, or what, should be the active agent. For much of US history, many have favored Adam Smith's "invisible hand" of the marketplace to do this controlling.
The classic debate over more or less "government" desperately needs this distinction.
The question should be whether there is an improvement on the class of "invisible controllers" that (a) do a better job and (b) are even less corruptible by those who would hijack the process.
There is no question that we have very complex processes running out of control, and that this is not the preferred state. Fine.
The question is how to achieve the "under control" part. The institution called "government" has typically decayed to "a few people" who, regardless of wisdom and intent, have been unable to grasp the complexity of the beast or improve on its operation and results.

The deep cynicism resulting from such failure seems related to the abandonment of the goal of prosperity for all and its replacement with a goal of "prosperity for me and my friends at everyone else's expense," which turns out to be a short-term illusion, given how interconnected everything is.
These are problems in the area of "control system engineering" and "complex adaptive systems" and the necessary insights are probably in those fields.

Tuesday, June 05, 2007

Gentle primer on feedback control loops

Here's yet another pass at the basic concepts using mostly pictures. Let me know if this works better for you or your students! I can adjust what I'm putting here to your needs and interests, but only if I get feedback!

The first picture shows rising and falling output. This is often what people mean or think of when they talk about "positive" and "negative" feedback.

Unfortunately, it's also where they think the "feedback" concept stops, so they miss all the good stuff.

The next picture shows converging output as a result of a simple control ("goal seeking") feedback loop.

The output rises or falls to some preset value or "goal".

Then, the system can be "tweaked" a little so it converges faster on the goal, but that often will result in overshooting and coming back with a little bit (or a lot) of bouncing.
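
If you want to poke at this yourself, here is a minimal sketch in Python. The numbers and gains are my own toy choices, not anything taken from the pictures: one "gain" knob decides how hard the loop corrects the remaining error each step. A small gain converges gently; a large one gives exactly that overshoot-and-bounce behavior.

def run_loop(gain, goal=10.0, start=0.0, steps=20):
    # simple goal-seeking loop: each step, correct some fraction of the remaining error
    x = start
    history = [round(x, 2)]
    for _ in range(steps):
        error = goal - x            # how far we still are from the goal
        x = x + gain * error        # the "tweak": bigger gain, harder correction
        history.append(round(x, 2))
    return history

print(run_loop(0.3))   # gentle, smooth convergence toward 10
print(run_loop(1.8))   # overshoots 10, then bounces above and below while settling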

The next picture, of the car getting to a hill from the flatland below, is supposed to show how a speed control system should do a good job of maintaining the same speed, even when the outside world changes a lot.

Then the picture of the car going up and down the mountain explains more about that. Without speed "control", the car would slow down going up the hill, and speed up a lot going down the hill. Instead, the speed is almost constant.

But, this whole effect of locking down or "latching" or "clamping" a value, such as speed, to some predetermined value is really confusing to statistical analysis. The effect is that a variation that is expected to be there is not there. There's no trace of it. So far as statistical analysis shows, there is absolutely no relationship between the slope of the hill and the speed of the car. Well, that's true and false. The speed may not be changing, but the speed of the engine has changed a lot.
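
Here is a toy version of that car in Python, with made-up physics and controller gains (a sketch of the idea, not the actual pictures): a small proportional-plus-integral speed control holds the speed while the road tilts underneath it, and the hill shows up in the throttle record instead of the speed record.

def simulate(slope_profile, set_speed=30.0, dt=0.1, kp=2.0, ki=0.5):
    v = set_speed
    integral = 0.1 * set_speed / ki   # pre-load the integrator so the throttle already holds speed on flat ground
    speeds, throttles = [], []
    for slope in slope_profile:
        error = set_speed - v
        integral += error * dt
        throttle = kp * error + ki * integral      # the controller's output: engine effort
        accel = throttle - 0.1 * v - 9.8 * slope   # toy physics: drive force, drag, gravity along the slope
        v += accel * dt
        speeds.append(v)
        throttles.append(throttle)
    return speeds, throttles

profile = [0.0] * 200 + [0.05] * 200 + [-0.05] * 200   # flat road, then a climb, then a descent
speeds, throttles = simulate(profile)
print(round(min(speeds), 2), round(max(speeds), 2))         # the speed barely moves off 30
print(round(min(throttles), 2), round(max(throttles), 2))   # the throttle, not the speed, absorbs the hill

The hill is almost invisible in the speed record (a fraction of a unit of variation) but obvious in the throttle record, which is the "clamping" effect in miniature.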

The same kind of effect could be seen in an anti-smoking campaign. The level of smoking in a region is constant, and then you spend $10,000 to try to reduce smoking. The tobacco companies notice a slight drop and counter by spending $200,000 to increase advertising. The net result is zero change in the smoking rate. Did your intervention have no effect? Well, yes and no.

The output (cigarette sales) has been "clamped" to a set value by a feedback control loop, so it varies much less than you'd expect. Again, this is hard to "see" with statistics that assume there is no feedback loop involved in the process.

For that matter, the fact that the "usual" statistical tests should ONLY be used if there is no feedback loop is often either unknown or dismissed casually, when it's the most important fact on the table.

(The "General Linear Model" only gives you reliable results if the world is, well, "linear" -- and feedback loop relationships are NEVER linear, unless they're FLAT, which also confuses the statistical tests, and sometimes the statisticians or policy makers.

The good news is that there is a transformation of the data that makes it go back to "linear" again, which involves "Laplace Transforms", which I'm not going to get into today. But, stay tuned, we can make this circular world "linear" again so it can be analyzed and you guys can compute your "p-values" and statistical tests of significance and hypothesis testing, etc.)
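
For the curious, the key textbook identity is short enough to show here. Write the goal as R(s), the output as Y(s), the forward path as G(s), and the feedback path as H(s), all in the Laplace domain, and the whole loop collapses to plain algebra:

\[
Y(s) = G(s)\,E(s), \qquad E(s) = R(s) - H(s)\,Y(s)
\quad\Longrightarrow\quad
\frac{Y(s)}{R(s)} = \frac{G(s)}{1 + G(s)\,H(s)}
\]

That closed-loop expression is an ordinary linear relationship between the goal and the output, which is what makes the usual analytical machinery usable again.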






OK, then, I illustrate INSTABILITY caused by a "control loop". In this case, a new driver with a poor set of rules ("If slow, hit the gas. If fast, hit the brake pedal.") produces a very jerky ride, alternating between going too fast and too slow.

Note, however, that the CAR is not broken. The pedals are not broken. The only problem is that the mental rules used to transform the news about the speed into pedal action are a poor choice of rules - in this case, they have no "look ahead" built into them.
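
To make that concrete, here is the new driver's rule set reduced to a few lines of Python (my own toy numbers): full gas below the target speed, full brake above it, no look-ahead at all.

def jerky_driver(target=60.0, v=50.0, dt=0.5, steps=40):
    # bang-bang "rules": slam the gas if slow, slam the brake if fast
    speeds = []
    for _ in range(steps):
        pedal = 8.0 if v < target else -8.0   # full pedal either way, nothing in between
        v += pedal * dt
        speeds.append(round(v, 1))
    return speeds

print(jerky_driver())
# the speed never settles at 60; it keeps flipping above and below the target

The car and the pedals in this little model work perfectly; the jerky ride comes entirely from the rules.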


Then I have a rather noisy picture that's really three pictures in one.

The top left side has a red line showing how some variable, say the position of a ship in a river, varies over time. The ship stays mostly mid-stream until the boss decides to "help". Say the boss is up in the fog, and needs to get news from the deckhands, who can actually see the river and the river banks.

Unfortunately, the boss gets position reports by a runner, who takes 5 minutes to get up to the cabin.
As a result, using perfectly good RULES, the captain sees that the ship is heading too far to the right. (Well, yes, that's PORT or STARBOARD or some nautical term. For now, call it "right".)

So, she uses a good rule - if the ship is heading too far to the right, turn it more to the LEFT, and issues that command.

The problem is that the crew had already adjusted for the too-far-to-the-right problem, but too recently for the captain to know about it, given the 5-minute delay. So, the captain tells them to turn even MORE to the left, which only makes the problem worse.

The resulting control loop has become unstable, and the ship will crash onto one or the other shores - not because any person is doing the wrong thing, but because the wrongness is extremely subtle. There is a LAG TIME between where the ship WAS and where the captain thinks it is NOW, based on her "dashboard".

That "little" change makes a stable system suddenly become unstable and deadly.

People who are familiar with the ways of control systems will be on the lookout for such effects, and take steps to counteract them. People who skipped this lesson are more likely to drive the ship onto the rocks, while complaining about baffling incompetency, either above or below their own level in the organization.



The last picture shows some of the things that "control system engineers" think about.

These are terms such as "rise time", "overshoot", "settling time", and "stability". And Cost.

These terms deal with how the system will respond to an external change, if one happens.

But a lot of the effort and tools are dedicated to being sure that the system, as built, will be STABLE, and won't cause reasonable components, doing reasonable things, to crash into something.

This kind of stability is a "system variable" in a very real sense that is lost when any heap of interacting parts is called "a system." It is something that has a very real physical meaning. It is something that can be measured, directly or indirectly. It is something that can be managed and controlled by very small changes, such as reducing the lag time for data to get from person A to person B.
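
If you have a recorded step response -- the output of a system after you change its input once -- those terms can be read straight off the data. Here is a small Python sketch using common conventions (a 10%-to-90% rise time and a 2% settling band); the sample numbers are made up for illustration.

def step_response_metrics(times, values, final_value):
    # overshoot: how far past the goal the output swings, as a percentage of the goal
    overshoot = (max(values) - final_value) / final_value * 100.0

    # rise time: how long the output takes to climb from 10% to 90% of the goal
    t10 = next(t for t, v in zip(times, values) if v >= 0.1 * final_value)
    t90 = next(t for t, v in zip(times, values) if v >= 0.9 * final_value)
    rise_time = t90 - t10

    # settling time: the last moment the output was still outside a 2% band around the goal
    settling_time = max((t for t, v in zip(times, values)
                         if abs(v - final_value) > 0.02 * final_value), default=0.0)

    return rise_time, overshoot, settling_time

times  = [0, 1, 2, 3, 4, 5, 6, 7]
values = [0.0, 0.4, 0.9, 1.2, 1.05, 0.98, 1.0, 1.0]
print(step_response_metrics(times, values, final_value=1.0))   # rise time, % overshoot, settling time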

And my whole point is that this is something people analyzing and designing organizational behavior and public health regulatory interventions should understand and use on a daily basis.

Maybe we need a simulator, or game, that is fun to play and gets people into situations where they have to understand these concepts, on a gut level, in order to "win" the game.

These are not "alien" concepts. Most of our lives we are in one or another kind of feedback control loop, and we have LOTS of experience with what goes right and wrong in them -- we just haven't categorized it into these buckets and recognized what's going on yet.

One thing I will confidently assert, is that once you understand what a feedback control loop looks like, and how to spot them, your eyes will open and the entire world around you will be transformed. Suddenly, you'll be surrounded by feedback loops that weren't there before.

The difficulty in seeing them may be due to the fact that what is flowing around this loop is "control information", and it can ride on any carrier, as I showed yesterday with the person getting a glass of water. The information can travel in liquids, solids, nerve cells, telephone wires, the internet, light rays, etc., and is pretty indifferent as to what it hitches a ride on.

The instruments keep changing, but the song is what matters.
You have to stop focusing on the instruments and listen to the song.
Control System Engineering is about the songs that everything around us is singing. Once we learn to hear them, they're everywhere. Life at every level is dense with them. And they seem to be a little bit aware of each other, because sometimes they get into echoes and harmonies across levels and seem to entrain each other.

It's beautiful to behold. I recommend it!

W.

Thursday, May 31, 2007

On Pyramid Schemes

The reading today is from the gospel of John Gall, "Systemantics - The underground text of systems lore - How systems really work and how they fail," page 79.

Principle: THE CRUCIAL VARIABLES ARE DISCOVERED BY ACCIDENT

On the edge of the desert, a few miles south of the Great Pyramids of Egypt, stands a ruined tower of masonry some two hundred feet high, surrounded by great mounds of rubble. It is the remains of a gigantic Pyramid. Its ruined state has variously been attributed to time, weather, earthquake, or vandalism, despite the obvious fact that none of these factors has been able to affect the other Great Pyramids to the same degree.

Only in our own time has the correct solution to this enigma been advanced. In conformity with basic Systems Principles ... the answer was provided by an outsider, a physicist unaware that there was any problem, who, after a vacation in Egypt, realized that the Pyramid of Snofru had fallen down. ... It is clear that the thing was almost complete when it fell.

... Unknown to Snofru, [his] achievement hung by a thread. It was at the limit of stability for such a structure. Snofru, in expanding the scale, unwittingly exceeded the engineering limits. It fell down.

Example 2. The pyramid of Cheops.

Cheops, son of Snofru, vowed not to make the same mistake. With great care he constructed his pyramid of finely dressed limestone blocks, carefully arranged to distribute the stresses. His pyramid did not fall down, nor did those of his immediate successors, which were built in the same way. But the Egyptian State, subjected to unbearable stresses by the building of those monsters of pride, collapsed into anarchy. Egypt fell down.
Thus ends the reading for today.
Let us pray.


further reading -
Failure is our most taboo subject