Monday, June 04, 2007

Controlled by the Blue Gozinta



For those who are following this discussion of feedback loops, we're most of the way through the basic description of the insides of such a loop.

I showed how a microphone and speaker, or the act of getting a glass of water, represent kinds of feedback loops, and made a distinction between dumb feedback loops and smart - goal-seeking - feedback loops, also known as control loops. And I showed how control loops are everywhere in nature, made up of almost any substance - animal, mineral, vegetable, light, chemicals -- and the loops don't care, because the principles work regardless. Control is to the loop as a song is to the instrument - you can play the "same" song on almost any instrument, or sing it, and the "sameness" is there.

So, I need to give a name to the four parts that I had in the upper left in this picture I drew yesterday:



The basic diagram that Professor Gene Franklin uses in the book "Feedback Control of Dynamic Systems" is similar to that block diagram, except that it pulls the "GOAL" out and lumps the other three boxes - "comparer", "model", and "decider" - into a single blue box, labelled "?" in his diagram of a car's cruise-control system for maintaining a constant speed.


So, the diagram is from that book, as quoted by me in slide 16 of my Capstone presentation on patient team management of diabetes control. I think you may need to click on the picture to make it zoom up large enough to read the words.



In any case, the only box on that diagram that is blue is the one that the feedback "goes into", so I'm calling it a "blue gozinta" as just a funny name that rhymes and that no one else is using.

Besides, the word "controller" rings all sorts of bells I didn't want to ring, echoing back to parents and school and bosses, etc.

Well, I guess I have failed at that already, since I gave the example of "negative feedback" of a student getting "graded" by a teacher for performance on an "exam" and receiving a failing grade of zero percent, which could be quite discouraging and dampen enthusiasm for the subject.

Franklin's picture has two other minor differences from mine. First, he adds "sensor noise" to the bottom "speedometer" box, to emphasize that this loop is all built around a perception of reality, not reality, and the thing that does the perceiving may not be perfectly accurate. That's a pretty good model of human beings or any other regulatory agent or agency.

As John Gall would say in his book Systemantics -- inside a "system" the perception IS the reality. The medical chart IS the patient.

That effect is so strong that the patient can be dying in the bed while caregivers are so busy looking at monitors showing something else that they don't see the problem -- which is part of what went on in the tragic Josie King case, where an 18-month-old child slowly died of thirst in the middle of one of the best hospitals in the world. So, yes, we had better remember on our diagram that what our senses tell us is going on may be very wrong. We'll come back to that in a big way when discussing how human vision and perception get distorted by all sorts of invisible and insidious pressures - especially in groups with very strong beliefs.

The other difference between Franklin's diagram and mine is on the upper right, where he adds an incoming arrow labelled "road grade". This means the slope of the road - how hilly it is - not what we think of the road. His point is that the behavior of a car, and the speed it ends up going after we have set our goal and put the gas pedal where we think it should be, ALSO depends on factors that are outside the car - such as whether it's going up a steep hill.

That will also be a universal pattern. The results of our actions are mixed in with the impact of outside actions, which makes it hard to disentangle the two just by looking at the end result. The good news is that there are software programs that can disentangle those two for us.
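
(For the curious, here is a minimal sketch of that disentangling idea - nothing more than ordinary least-squares fitting on made-up numbers, not any particular package - showing that even when we only ever see the combined result, the separate effects of the pedal and of the hill can be recovered afterward.)

    # A minimal sketch (not any particular software package) of how the pedal's
    # effect and the hill's effect can be separated after the fact, using plain
    # least-squares fitting on made-up numbers.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    throttle = rng.uniform(0.0, 1.0, n)        # what the driver did
    grade = rng.uniform(-0.10, 0.10, n)        # slope of the road - the outside factor
    # Pretend reality: each unit of throttle adds 2.0 mph, each unit of grade
    # costs 15.0 mph, plus a little measurement noise.
    speed_change = 2.0 * throttle - 15.0 * grade + rng.normal(0.0, 0.1, n)

    # Least squares recovers the two separate influences even though we only
    # ever observed their combined result.
    X = np.column_stack([throttle, grade])
    coef, *_ = np.linalg.lstsq(X, speed_change, rcond=None)
    print("estimated effect of throttle:  ", round(coef[0], 2))   # close to  2.0
    print("estimated effect of road grade:", round(coef[1], 2))   # close to -15.0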

Anyway, the whole point of this post is to get the "blue gozinta" identified.

This little blue box is the heart of the problem, because "feedback" is really just information, and is not intrinsically "positive" or "negative". In this diagram, the "feedback" is the speed of the car, as measured by the speedometer. That's just a number.

The number becomes "positive" or "negative", leading to "more gas!" or "more brake!" actions, only because the blue box, the controller, the blue gozinta, compared that number to the desired speed and saw that it was less than desired. Then the controller had to check a mental model and use some rule like "If we're going too slow, push on the pedal on the right! If we're going too fast, push on the pedal on the left!"

As anyone who has ever taught someone else to drive knows, that turns out NOT to be the actual rule that drivers use to control the gas pedal. Those rules, and that simplistic model of the world, produce a driver who holds down the gas until the car shoots past the target speed, then slams on the brake until the car drops back below it, then slams on the gas again, and so on. The car jerks back and forth in an unstable and very unpleasant oscillation forever if that's the only rule in use.
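
Don't take my word for it. Here is a toy simulation of exactly that naive rule - all the numbers are made up and the physics is crude, but the pattern is the point:

    # Toy cruise-control simulation of the naive rule: floor the gas when slow,
    # hit the brake when fast.  All numbers here are made up for illustration.
    target = 60.0     # desired speed
    speed = 40.0      # starting speed
    for step in range(15):
        if speed < target:
            action = +8.0             # "push on the pedal on the right!"
        else:
            action = -8.0             # "push on the pedal on the left!"
        speed += action               # crude physics: the action changes the speed
        print(f"step {step:2d}: speed = {speed:5.1f}")
    # The printout never settles on 60: it climbs to 64, drops to 56,
    # and bounces back and forth between them forever.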

However, we can probably all think of organizational policies or laws that have exactly that behavior - swinging between too harsh and too lenient, going back and forth and never managing to settle on the right setting.

It has been hard to recognize those problems and go:
  • Hey, I've seen that behavior before!
  • That's a "control loop" behavior.
  • The way to fix it is to change what goes on in the blue gozinta box.
  • What part of the process / law / policy I have corresponds to that box?
  • That's where the problem can be fixed.

It's really important to see that there is nothing wrong with the car. The gas pedal works fine, and does not need to be replaced. The brake pedal works fine. The speedometer (in this case) works fine. What is wrong is inside the blue box, and is subtle - it's the "mental model" or rule that is used to decide what action to take depending on what information is coming into the box from outside.
And, the realization is that a very simple rule, a dumb rule, doesn't accomplish what we want, but a slightly better rule will make the very same parts behave correctly together.
The better rule requires a little more brains inside the box. We have to track more than just how fast we are going and how fast we want to go -- we have to figure out how fast we are converging on the goal, and start letting up on the gas as we get near the target speed, before we even get there.
The controller needs to "plan ahead" or "look ahead" and react to something that hasn't happened yet.
This seems to fly in the face of science and logic. How can a dumb box react to something that hasn't happened yet? We can't afford the "glimpse the future!" add-on module, at $53 trillion.

Ahh, but here's another wonderful property of feedback loops. What goes around comes around. We've been here before. Nothing is new under the sun. The past is a guide to the future.

Either putting out the garbage can causes the garbage trucks to come, or we can learn the routine well enough that we can predict when the trucks will come based on past experience. It turns out, in a loop, the past and future become very blurred together.
Being able to recall the past IS being able to predict the future, in a control loop.
We don't just go around a control loop once or twice -- we go around a control loop thousands or millions of times. So, if we have any rudimentary learning capacity at all, we can start to notice certain patterns keep happening. We can detect what always seems to be happening JUST BEFORE the bad thing happens, and use THAT as the trigger event to react to instead.

So, we have a second rule that gets added by experience -- "When you get near the target goal, start easing up on the pressure to change and start increasing the pressure to stay right there and keep on doing exactly what you're doing."
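
Here is the same toy car with that kind of rule inside the box - one simple version of it, anyway, in which the push is proportional to how far we still are from the target, so it eases up automatically as the gap closes (control engineers would call this a proportional controller; fancier versions also watch the rate of change, which is exactly the "how fast are we converging" idea above):

    # Same toy car, but the blue box now eases up as the error shrinks:
    # the push is proportional to how far we still are from the target.
    # Illustrative numbers only.
    target = 60.0
    speed = 40.0
    gain = 0.5                        # how strongly we respond to the remaining gap
    for step in range(15):
        error = target - speed        # compare: how far off are we?
        action = gain * error         # decide: push less and less as we get close
        speed += action               # act: the same crude physics as before
        print(f"step {step:2d}: speed = {speed:5.1f}")
    # The speed glides up toward 60 and settles there, with no overshoot --
    # the very same parts behaving correctly because the rule got smarter.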

This basic ability to learn from experience is the simplest definition of "intelligence" we can come up with. Do you recall the joke about Sven and Ollie that Garrison Keillor told?

Sven comes by Ollie's house and sees that Ollie has both ears bandaged.
"What happened?" he asks.
"Well", Ollie replies, "I was ironing and the phone rang and I picked up the iron by mistake and held it to my ear!"
"Oh.... So, what happened to your other ear?"
" Ahh.... once I was hurt, I tried to call an ambulance. "

So, the moral of this post is that the key to the behavior of a system being managed by a feedback control loop is the blue box, the "blue gozinta."

Very simple changes to that box can change a horrible experience into a pleasant ride.

The heart of "Control System Engineering" is figuring out what to put in that box.

For human beings, a second major problem is that little tiny addition of "sensor noise", and figuring out how to prevent, reduce, or account for distortions in perception that can cause the system to be responding to a perception, not a reality.

And, for both, there's another very subtle but very well understood problem, and that is "lag time." I didn't draw "lag time" on the picture but I will in the future.

If we're trying to drive based on the speedometer reading from 5 minutes ago, things will not go well for us. In fact, the more we try to "control" things, the worse they can get.

This is a huge problem. A perfectly stable, perfectly controllable system can turn into an unstable nightmare and fly out of control simply because there is too much lag between collecting the sensor data and presenting the picture to the controller.
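
To see how little it takes, here is that same proportional-style toy controller again, except that the speedometer reading it acts on is now one step old (again, the numbers are invented purely for illustration):

    # Same proportional-style rule as before, but the speedometer reading the
    # controller sees is now 'delay' steps old.  Invented numbers, for illustration.
    target = 60.0
    delay = 1                          # the reading arrives one step late
    gain = 1.2                         # with delay = 0, this gain settles within a few steps
    history = [40.0] * (delay + 1)     # past speeds; the newest value is last
    for step in range(25):
        stale_reading = history[-1 - delay]        # what the dashboard shows right now
        action = gain * (target - stale_reading)   # react to the old news
        history.append(history[-1] + action)       # the same crude physics
        print(f"step {step:2d}: speed = {history[-1]:7.1f}")
    # With delay = 0 the speed settles at 60 almost immediately.  With delay = 1
    # the very same rule keeps overcorrecting based on stale information, and the
    # swings grow larger every cycle instead of dying out.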

Or, in hospitals and businesses, it's popular now to have a "dashboard" that shows indicators for everything, often in exactly these "speedometer"-type displays.

The problem is, the data shown may be two months old. We are trying to drive the car using yesterday's speedometer reading at this time of day. When I state it that way, the problem is obvious. But, I can't find any references at all in the Hospital Organization and Management literature about the risks caused by lag times in dashboard-based "control".

At this point, even with just this much understanding of control loops, you, dear reader, should be starting to realize how many places around you these loops are being managed incorrectly.

We're spending a huge amount of effort trying to improve the brakes and gas pedals, when the actual problem is a lag time in the messages to upper management, or that sort of problem.

None of these problems need to be in our face. These are all "Sven and Ollie" problems that we can fix with what we know today.

But that will only work if we're really sure about how control loops work, and how they fail, and can make that case to the right people in the right way at the right time.

Take home message -
Even a very basic understanding of control loops can help us ask the right questions, and realize where the problems may be lurking instead of where they appear to be at first glance, so we don't waste our time barking up the wrong tree.

Especially in complex organizations, the generator of failure is usually not that labor failed or management failed, or that any one person did something "wrong." What is killing us now is that we have a huge collection of "system problems" that are due to things like "lag time" and "feedback". Every piece of the system is correct, but the way they behave when connected is broken. There is a "second level" of existence, above the pieces, in the "emergent" world. Things can break THERE. Most of the systems humans built are broken there, or at least seriously in need of an engine mechanic, because we didn't even realize there WAS a THERE.

Worse, "management" still thinks that discussion of "higher level" problems means that someone is pointing the finger at THEM, and that leads to bad responses.

The problems are subtle. We won't see them unless we spend a little time studying how control systems work, and how they fail. Then, the patterns will be much more obvious, and our efforts will be much more likely to be successful. And, then we can stop blaming innocent people for problems that aren't their fault.

It is, however, in my mind, the fault of the whole enterprise of Public Health if this kind of insight is not taken advantage of when designing regulatory interventions or in helping individuals try to "control" behavior. That, in my mind, would be a clear failure of due diligence.

Or - it would be, if these concepts had been published in the peer-reviewed literature, which is the only thing they read and pay attention to.

Which says, it's my fault for not publishing this and your fault, dear reader, if you don't get after me to do so.

After all - I depend on feedback from my readers to control my behavior. So, what I do depends on what you do.

Wow, doesn't that sound familiar?
