Monday, April 23, 2007

capstone slide 16


This diagram from Franklin's textbook shows the basic parts and connections of a "cruise-control" system, which keeps a car moving at a pre-set speed.

It's relevant because the parts and the flow are universal patterns that you can find in almost any system, whether it's man-made or biological or chemical.

The discussion below will start to get way more complicated than is typical for a public health model, but still far less complicated than a typical "control system" problem that engineers solve routinely when, say, designing a new fighter jet.

It doesn't matter that the system has many parts and connections -- so do televisions and cell-phones. It only matters that their number is small enough that the data fit into the computer tool and generate an answer that can then be tested independently. Even three loops, as in the Beer Distribution problem Senge describes in "The Fifth Discipline", is beyond normal human intuition, so past that point it is pretty much all the same whether the model has 4 loops or 14.

Obviously, for parameter fitting, we want to have many more data points than unknowns, a constraint that is often easily met in practice with time-series data. For example, a year of daily blood glucose readings gives 365 data points, which may be rich enough to nail down a 12-parameter model with data to spare.
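
(To make the data-versus-unknowns arithmetic concrete, here is a minimal sketch in Python. The "glucose" series and the 12-parameter model are both invented for the example; the only point is that 365 points pin down 12 numbers with room to spare.)

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365)

# Invented "glucose-like" series: a baseline plus weekly and seasonal
# rhythms plus meter noise.  Real data would come from an actual log.
truth = 100 + 12*np.cos(2*np.pi*days/365) + 8*np.sin(2*np.pi*52*days/365)
readings = truth + rng.normal(0, 5, size=days.size)

# A 12-parameter linear model: intercept, trend, and five harmonic pairs.
cols = [np.ones(days.size), days / 365.0]
for k in (1, 2, 52, 104, 156):             # yearly down to twice-weekly rhythms
    cols.append(np.sin(2*np.pi*k*days/365))
    cols.append(np.cos(2*np.pi*k*days/365))
X = np.column_stack(cols)                   # 365 rows, only 12 unknowns

params, *_ = np.linalg.lstsq(X, readings, rcond=None)
print("12 parameters pinned down by 365 points:")
print(np.round(params, 1))                  # intercept near 100, etc.
```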

So, bear with the complications. They turn out not to matter very much.

The picture above shows a goal, which in the case of a car is the desired speed, shown at the far left of the diagram. This goal goes into the blue box, labelled "controller", which we'll discuss much more shortly. For now, that function is performed by either the computer or the person driving. The controller has something it can control; in the case of a car, this is the gas-pedal (throttle). Pushing the pedal down asks the engine to produce more power, which may take some lag time to arrive. The power flows into the body of the car, tending to make it go faster, but outside influences also affect the car, such as whether it is climbing a hill or descending one. The two forces combine to produce one outcome - the actual speed of the car.

The actual speed is perceived by some sensor, such as a speedometer, which also has some distortion and noise affecting it, and possibly some additional lag time. Then the perceived or "Measured" speed is conveyed back to the blue box, the "controller".

At this point, the cycle starts again, but this time with a difference. The controller "knows" what speed it wanted, and can "see" what speed it has achieved, so it can measure whether it has succeeded in getting the car to go fast enough. The "feedback", by itself, is not positive or negative - it is just information about the car. What the controller does with that information is positive or negative, and is based on an analysis of (a) what the difference is from what was wanted, and (b) how fast that difference is changing - how quickly the gap is closing.

Part B is really important. If the decision was simply to hold down the gas pedal to the floor until the desired speed was reached, and then release it, the car would overshoot the right speed and be going too fast. Then, if the decision was to slam on the brake until the car slowed down to the right speed, the car would overshoot again, and end up going too slowly. The result would be a rapid cycle of going from too fast to too slow that would never stop.
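
Here is a minimal simulation sketch of that runaway cycle, in Python. The car model - an engine that answers the pedal with a couple of seconds of lag, a speedometer that trails the true speed, and some drag - is invented for illustration, as is every number in it; nothing here comes from Franklin.

```python
TARGET = 60.0   # desired speed, mph
DT     = 0.1    # simulation step, seconds

def simulate(controller, steps=1800):
    """Crude invented car: engine power lags the pedal, the speedometer
    trails the true speed, and drag resists motion."""
    speed = power = shown = 0.0           # true speed, engine power, speedometer
    history = []
    for _ in range(steps):
        pedal = controller(shown)                   # decide from the speedometer
        power += (pedal - power) * DT / 2.0         # engine answers in about 2 s
        speed  = max(speed + (power - 0.03 * speed) * DT, 0.0)
        shown += (speed - shown) * DT / 1.5         # speedometer trails ~1.5 s
        history.append(speed)
    return history

def bang_bang(shown):
    """Floor it below the target, slam the brake above it."""
    return 3.0 if shown < TARGET else -3.0

trace = simulate(bang_bang)
print("speeds over the last 40 s:", [round(s) for s in trace[-400::50]])
# The speed keeps swinging above and below 60 indefinitely -- the rapid
# too-fast/too-slow cycle described above, which never settles.
```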

Not only does the controller have to have some wisdom, it has to have some foresight. If a baseball outfielder's rule was "run towards the ball", as soon as the ball was hit by a batter the outfielder would run towards home plate, where the batter is or just was. Instead, the right thing to do is to run towards "where the ball will come back down, not where it is now."

So, the controller has to decide several things. How far off from the goal is the current outcome? How fast is it catching up? Should something be changed and, if so, which direction?
(For example, as it comes up near the correct speed, the gas pedal will have to be let up slightly, even though the car is still going too slowly!)
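
Here, in the same spirit, is the invented car again, driven by a rule that watches both the gap and how fast the gap is closing - a simple proportional-derivative ("PD") rule. For simplicity it assumes a perfect, instant speedometer, and the gains kp and kd are made-up numbers chosen only so the example behaves:

```python
TARGET = 60.0   # desired speed, mph
DT     = 0.1    # simulation step, seconds

def simulate(controller, steps=1200):
    speed, power, prev_err, history = 0.0, 0.0, TARGET, []
    for _ in range(steps):
        err = TARGET - speed
        pedal = controller(err, (err - prev_err) / DT)   # gap and its rate of change
        prev_err = err
        power += (pedal - power) * DT / 2.0              # same 2 s engine lag as before
        speed  = max(speed + (power - 0.03 * speed) * DT, 0.0)
        history.append(speed)
    return history

def pd_rule(err, err_rate, kp=2.0, kd=2.0):
    """Push harder the bigger the gap, but ease off while it closes fast."""
    return kp * err + kd * err_rate

trace = simulate(pd_rule)
print("speed at 5 s, 15 s, 120 s:",
      [round(trace[int(t / DT) - 1], 1) for t in (5, 15, 120)])
# Because the derivative term opposes a rapidly closing gap, the pedal
# eases off BEFORE the car reaches 60, just as the parenthetical says.
# It settles just below 60 -- that steady gap is one of the trade-offs
# listed further down.
```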

It's even worse if the controller has no idea to start with what each of the pedals does, as with a student driver, and has to learn that "pushing the one on the right often makes the car go faster, except going up a steep hill, when the car still slows down" and "pushing the one on the left makes the car go slower, except going down a steep hill, when the car may speed up anyway."

Now, add the additional problem that maybe the controller cannot actually see what the ground is doing, and has to guess at that as well, based on the response of the car. Finally, we have a situation typical for a person learning how to control their blood sugar -- SOMETIMES eating more carbohydrates helps, SOMETIMES it doesn't seem to matter, and SOMETIMES it really makes things worse.

Control System Engineering (CSE) is the study of how such control systems behave, although the cruise control is about as simple as they get, with only one loop. Real systems, as studied in "Systems Thinking" or "System Dynamics", have multiple loops that intersect each other, possibly in multiple places. To predict the behavior of those systems, or to CHANGE their behavior in a desired way without "unintended side-effects", intuition is nearly useless, and some more powerful tool is required.

Fortunately, CSE is well over 100 years old, and has already developed full tool-boxes that do the computational heavy lifting for you, just as products like STATA and SAS and SPSS do the heavy math of statistics for you, so you can just use the results.


The issues in designing a control system come down to figuring out what should go into the blue box, the "controller", which, unhelpfully, is left out entirely of most "feedback diagrams" in the public health or health literature.

Some of the issues that must be solved involve trade-offs among these factors (a small simulation sketch right after the list shows how a few of them can be measured):

Stability: will the overshoots and oscillations calm down over time and go away, or will they actually get worse and worse until something breaks?
Steady state value: if left alone, where will it settle?
Rise time: how fast does the system close the gap between the actual and desired outcomes?
Cost: how much does it cost to make such a system?
Overshoot: how much does the system overshoot the desired value? (Sometimes overshoot is very bad and has to be avoided, as with too high a dose of medicine, in which case the system should come up to the right value slowly from below.)
Disturbance rejection: this is a fancy name for how well the system can maintain a steady value despite changes in the outside world. For a car going 60 miles per hour, for example, it measures how much the speed will change if the car goes down or up a hill.
Response time: How long does it take the controller to figure out that something external has changed and it needs to apply some sort of corrective action?
Lead time and lag time: How long does it take, from the time the gas pedal is pushed, before the engine starts to produce more power? In a small airplane for example, it takes about 4 seconds from the time the throttle is changed until the engine starts delivering more power.
Sensitivity: what happens if the engine gets older, or some days the "oomph" just isn't there, even when the "gas pedal" is pushed? Can the controller adjust for that?
Dynamic tracking: if the goal itself is changing, how well can the system "keep up" with the ever-changing goal? Can the system cope "if the cheese is moved", or did it only learn one pattern, so that when the rules change it just keeps trying to use the old way to cope with the new problem?
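
Here is the sketch promised above: the invented car and PD rule again, with three of the listed factors - steady-state value, rise time, and overshoot - read straight off a simulated step response. Everything in it is made up for illustration:

```python
import numpy as np

TARGET, DT = 60.0, 0.1

def step_response(kp=2.0, kd=2.0, steps=1200):
    """Same invented car-plus-PD-rule as the earlier sketches."""
    speed, power, prev_err, trace = 0.0, 0.0, TARGET, []
    for _ in range(steps):
        err = TARGET - speed
        pedal = kp * err + kd * (err - prev_err) / DT
        prev_err = err
        power += (pedal - power) * DT / 2.0
        speed  = max(speed + (power - 0.03 * speed) * DT, 0.0)
        trace.append(speed)
    return np.array(trace)

v = step_response()
steady    = v[-100:].mean()                      # where it settles if left alone
rise_time = np.argmax(v >= 0.9 * TARGET) * DT    # time to close 90% of the gap
overshoot = 100.0 * max(v.max() - steady, 0.0) / steady

print(f"steady-state value: {steady:.1f} mph (goal {TARGET:.0f})")
print(f"rise time to 90%:   {rise_time:.1f} s")
print(f"overshoot:          {overshoot:.1f} %")
# Raising kp speeds up the rise but worsens overshoot; raising kd damps
# the overshoot but slows things down -- the trade-offs in the list above.
```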


With human beings involved, there are some additional variables that are not quite so prevalent in hardware.
For one thing, there is a second "motivation" loop that can sag if too little "success" occurs, so it may be necessary to "lower the goal" temporarily to get motivation interested in action again, before raising the goal back up slowly enough not to lose that sweet relation between goal and success.

Also, humans have a third loop that can reduce the frustration of conflict between a goal and the actual outcome by changing the sensor - that is, altering their perception of how well they are doing, so that it better matches the goal.

Too strong a demand, and the pain of the resulting conflict, can end in altered perception rather than altered action or actual outcomes.


Finally, humans have a fourth loop that can reduce the gap - simply shoot the messenger, or stop going to the doctor. Eliminate the thing that is making that annoying goal show up at all.

Of course, exactly the same relief from pressure can be had by letting the feedback loop simply fall apart. Hospitals tend to do that with JCAHO requirements once the JCAHO team leaves. And patients tend to do that with medical advice.

So, fifth and sixth loops are needed to capture the external world's pressure and impact - not just on the "body" in question, but directly on the goals (in response to the actions taken, the equivalent of pushing the gas pedal) and on the person's ability to perceive what is going on - that is, on their "sensors."

Brief digression for two stories:

A crowd or audience can dramatically shift what can be perceived, something I have first-hand knowledge of from doing stage magic in a crowd. Interestingly, a crowd of young children is way more perceptive than one child, but a crowd of adults is way less perceptive than any one adult alone, at least when it comes to seeing how a magic trick is "done". In my own experience with such deceptions, a person can see what you have done, then try to tell a neighbor, and if the neighbors all put him down and say "No", he will actually forget entirely that he ever saw it in the first place. It's remarkable.

Research on the impact of crowds on individuals includes one dramatic piece of film that may have been shot at Cornell in the late '60s, and was certainly presented by Allen Funt on the TV show "Candid Camera." An unwitting subject gets on an elevator on floor 1 of a building, going up, and the elevator has only a front door. At each successive floor, an investigator gets on, walks to the back of the elevator, and faces the blank back wall - which clearly cannot possibly open. When the first one does this, our victim glances over and ignores it. When the second one does it, he looks somewhat anxiously to see if there is a back door, but decides against it. When the third one does it, the victim simply pivots in place and faces the back wall along with everyone else. The magic number at which people spin around seemed always to be 3.


Returning to the main discussion:

So, we have a tangle of loops that leaves the one person and goes up to the person's family and friends. Is the problem now hopeless? No, because there is another natural "break point." After the single person, there is the person plus his or her "posse" or "gang" or small group of reference people - a cluster of people who are far more interactive with each other than with the outside world, and in some ways a "unit."

In a hospital or health care system, as the IOM report "Crossing the Quality Chasm" points out, there are natural breaks and natural edges to "small care teams" or "microsystems." These are groups of people who collectively deliver care, and who interact far more with each other than they do with the outside world. They are, in a very real sense, a "unit" or "a system" - not just a heap or list of people who communicate, but something far deeper than that. They are directly tied into each other's goal setting, reward systems, norm setting, etc. They are directly dependent on each other in a very real way, many times a day. They can't get their jobs done if the other people don't do theirs.

So, the IOM conclusion, demonstrated in many examples, is that this next-higher-level unit of tangled control loops, the "small team" or "microsystem", is an even better place to intervene to change behavior and perception than the individual level.


It's easier to change a dozen tangled people at once than one person. In fact, there is no way to change "just one person" in such a distributed control system, because the others will restabilize them right back to where they were as soon as you let go.

This is a deep and profound insight. It totally changes how to proceed.

Taken a little further, this suggests that the concept of "a patient" as "an individual" is a broken model. This is certainly true of primates, where there is a saying that "there is no such thing as one chimpanzee." The reason is that a solitary chimp doesn't behave at all the way it behaves in its normal group. In fact, given the choice between food and a look out a window to see what its troop is doing, even a hungry chimp will choose the window. Without belonging, there is no point in eating or being alive. It's the chimp-level equivalent of cellular apoptosis - where a perfectly healthy cell, removed from a human body and subjected to no other stresses, will basically lose the will to live and commit suicide.

Connectivity seems to be some kind of critical factor for humans. Infants that are not touched can simply die. Adults who lose social connectivity have far worse outcomes than those who keep it.

The point for us, however, is that the simplest feedback loop is, in reality, a tangle of maybe a dozen or so loops.
This is OK, because no one needs to have intuition about the tangle directly; it just has to be small enough that the data can be put into the computer, so the computer tools can figure out the simplest feedback-loop model that fits the data.
This process of "model discovery" is also well known and there are tools for it as well.
However, to my knowledge, no one has ever tried to fit a multi-person dataset using such tools from control system engineering, let alone used the model to design an intervention and deduce, as it were, "where to push" so that the desired outcome will emerge after all the echoes die down.
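
To make "model discovery" a little more concrete, here is a toy version of system identification in Python. The "recorded" pedal positions and speedometer log are synthetic, and the hidden model is deliberately simple; real identification tools do far more, but the shape of the process is the same - guess a model form, then let least squares recover its parameters from the data:

```python
import numpy as np

rng = np.random.default_rng(1)
DT, N = 0.1, 2000

# Hidden "true" car: speed change = a*speed + b*pedal (a = drag, b = gain).
a_true, b_true = -0.03, 1.0
pedal = rng.uniform(0.0, 3.0, N)                  # recorded pedal positions
speed = np.zeros(N)
for t in range(N - 1):
    speed[t + 1] = speed[t] + (a_true * speed[t] + b_true * pedal[t]) * DT
speed_meas = speed + rng.normal(0.0, 0.1, N)      # noisy speedometer log

# Regress the observed speed changes on speed and pedal to recover a and b.
dv = (speed_meas[1:] - speed_meas[:-1]) / DT
X = np.column_stack([speed_meas[:-1], pedal[:-1]])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, dv, rcond=None)
print(f"recovered a = {a_hat:.3f} (true {a_true}), b = {b_hat:.2f} (true {b_true})")
```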

One more sidebar. Because the purpose of this whole system may be to produce a "clamp" - that is, to lock an outcome to a particular value (say, the speed of a car) despite many external changes (hilly terrain) - the use of classical statistical reasoning about "causality" breaks down, and process-control measures have to be used instead. If the inputs are varying all over the place and the output is constant, classical statistics will say "NOT ASSOCIATED", yet it is precisely the role of the control system to BREAK the association between external events and some outcome. The challenge is to spot that two things are unexpectedly NOT associated.
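
That claim is easy to demonstrate. In the toy run below, an invented, slowly wandering hill grade pushes on the same made-up car while a controller - given an integral term here, so it can hold the clamp exactly - works the pedal. Everything is fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
DT, N, TARGET = 0.1, 20000, 60.0

speed, power, integ, hill = TARGET, 1.8, 0.0, 0.0
hills, speeds, pedals = [], [], []
for _ in range(N):
    hill = 0.999 * hill + rng.normal(0.0, 0.02)     # slowly wandering grade
    err = TARGET - speed
    integ += err * DT
    pedal = 2.0 * err + 0.5 * integ + 1.8           # PI rule works the clamp
    power += (pedal - power) * DT / 2.0             # engine lag, as before
    speed += (power - 0.03 * speed - hill) * DT     # hills push the car around
    hills.append(hill); speeds.append(speed); pedals.append(pedal)

print("speed std dev:    ", round(float(np.std(speeds)), 2))
print("corr(hill, speed):", round(float(np.corrcoef(hills, speeds)[0, 1]), 2))
print("corr(hill, pedal):", round(float(np.corrcoef(hills, pedals)[0, 1]), 2))
# The pedal tracks the terrain; the speed barely moves.  A naive
# association test of hill vs. speed would find little or nothing --
# the controller has deliberately broken exactly that association.
```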

That's more than enough text for one slide.

References will go here, when I have time.

"Systems Thinking" is now part of the 2006 ASPH MPH Curriculum, and was featured in the March, 2006 AJPH issue. Stedman's book "System Dynamics" is certainly enough to intimidate anyone, at 998 pages. See the links in "the law of unintended consequences"
to the MPH curriculum, Stedman, Jay Forrester's classic paper, etc.







