Showing posts with label feedback. Show all posts

Friday, September 19, 2008

We need an improved "invisible hand", Adam

David Brooks wrote a piece in the NY Times this morning on regulation of the financial industry.

Incidentally, there is essentially no engine in any product today that does not include a "controller" in its design, to improve stability, response time, and so on. Without a controller, no elevator would stop at a floor without an abrupt jerk. The design of such controllers belongs to the field called "Control System Engineering."

A sample text book is this one: Feedback Control of Dynamic Systems, by Franklin, Powell, and Emami-Naeini. These are the concepts we need for a "governance" or "regulatory" system that actually works as advertised.

Control system engineering is to complex systems what "civil engineering" is to automobile bridges across rivers -- it is completely general and non-political, it won't tell you where to build or what to build with, but it WILL tell you the required properties of the materials and that some things will simply not work. You can't build the Brooklyn Bridge out of plastic, for example, regardless how cheap it is. You can't design a regulatory system that depends on feedback, for another example, and then blind the sensors that are supposed to determine the feedback.

The advantage of such engineering is that it focuses on issues such as "stability" (a big one right now) and lends rigor to insights such as this: blinding the eyes of a system will make it drive off the road for sure.
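As a toy model (mine, not the columnist's) of that last point: a simple proportional controller can hold a drifting quantity near a setpoint, but only while its sensor works. Blind the sensor, and the very same corrective machinery pushes on garbage and ends up far from the goal. All numbers here are made up for illustration.

```python
def regulate(steps, setpoint, gain, sensor_works):
    """Proportional feedback against a constant external drift."""
    state = 0.0
    for _ in range(steps):
        state += 1.0                                # constant external drift
        reading = state if sensor_works else 0.0    # a blinded sensor reads nothing useful
        state += gain * (setpoint - reading)        # the corrective feedback push
    return state

seeing  = regulate(100, setpoint=10.0, gain=0.5, sensor_works=True)   # holds near the setpoint
blinded = regulate(100, setpoint=10.0, gain=0.5, sensor_works=False)  # runs far past it
```

With a working sensor the state settles just above the setpoint (the small offset against constant drift is classic proportional-control behavior); with a blinded one, the "regulator" keeps pushing and drives the state off the road.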

Search "feedback" or "system thinking" in this weblog for other posts on such matters!
====================================================


One obstacle to a good solution is the incorrect assumption that a process "under control" equates to a small group of people doing "the controlling." Let's keep those separate.
The question of whether we need more "governance" should be distinct from the question of who, or what, should be the active agent. For much of US history, many have favored Adam Smith's "invisible hand" of the marketplace to do this controlling.
The classic debate over more or less "government" desperately needs this distinction.
The question should be whether there is an improvement on the class of "invisible controllers" that would (a) do a better job and (b) be even less corruptible by those who would hijack the process.
There is no question that we have very complex processes running out of control, and that this is not the preferred state. Fine.
The question is how to achieve the "under control" part. The institution called "government" has typically decayed to "a few people" who, regardless of wisdom and intent, have been unable to grasp the complexity of the beast or improve on its operation and results.

The deep cynicism resulting from such failure seems related to the abandonment of the goal of prosperity for all and its replacement with a goal of "prosperity for me and my friends at everyone else's expense" - which turns out to be a short-term illusion, given how interconnected everything is.
These are problems in the area of "control system engineering" and "complex adaptive systems" and the necessary insights are probably in those fields.

Wednesday, September 26, 2007

Role of IT - information technology - in next-gen companies

Judging from Toyota and "lean" processes, what is the appropriate role of information technology ("computers and networks") in the next-generation company?

If we assume that what we're building is, essentially, a massively-parallel connectionist computing engine (a consciousness) out of people and technology, we get the suggestion that the key roles are:
  • transparent communication at successively larger scales,
  • coherence-building at successively larger scales, and
  • transparent interactions ("phase-locked loops") across the components of the system.

Yes, computers will still be required for tracking the trillions of details needed to run a large company today, but that is, in Peter Senge's words, "detail complexity." There's a huge amount of it, but it is relatively simple in nature, apart from its sheer volume. Enterprise computing knows how to do that, at least in theory.

What we are looking for in the next-gen company is the thing that ties it all together, that supports the feedback loops that maintain coherence and build integrity, the same way the circulating thoughts in the brain slowly emerge an "image" out of billions of "nerve impulses" from the retina.

This is "Technology-mediated collaboration" and more, so I'll call it "technology-mediated coherence." It is what allows "aperture synthesis" in large radio telescope arrays to act as if they are a single huge individual and the gaps "don't exist."

This is pretty much what the Institute of Medicine was recommending when it urged a focus on "microsystems" recently (see prior posts on "microsystems"). The point is that a small team (5-25 people) is capable of being "self-managing" if they can simply be given the power to do so by having access to information about what their own outcomes are. This information does not need to be packaged and interpreted at successively higher levels of management and then repackaged and distributed back to them a month later as "feedback." In fact, that doesn't help much. What really helps is speed. What helps is if they can see, today at 2 PM, how they have been doing collectively, up through, say, noon. They can learn to make sense of the details, and don't need "management" to try to do that for them.

In fact, given the fractal density of reality, and the successive over-simplifications required to get data into a "management report", it is a certainty that we have something far worse than the game "telephone". What will come back down the line from upper management will bear little resemblance to what went up, breeding distrust and anger on both sides.

So the role of next-gen IT is to grab hold of "Web 2.0" technology, which allows bidirectional websites to be both read and written by people, and which includes weblogs, wikis, and "social software" that encourages interaction and cooperation, including, gasp, "gossip."

This is the stuff that, in the right climate and context, can be converted into "social capital" and converging understanding by each employee as to what everyone else is doing and why.

Where there can be dashboards, they should best be very close, in both space and time, to the decision-making actors. Lag times are incredibly dangerous, and are the source of instability in feedback systems. (Imagine trying to drive a car with a high-resolution TV screen instead of a windshield, with a fantastically clear picture of what was outside the car 15 minutes ago. )
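A toy illustration of that danger (my sketch, not from the post): apply the very same corrective rule to a fresh reading versus a stale one. Acting on old data, the corrections chase where the system used to be, and the system oscillates with growing swings instead of settling.

```python
def settle(steps, gain, lag, start=10.0):
    """Push a value toward zero, but using a reading that is `lag` steps old."""
    buf = [start] * (lag + 1)    # buf[0] is the oldest reading: the "dashboard" view
    x = start
    for _ in range(steps):
        stale = buf.pop(0)       # the reading is `lag` steps out of date
        x -= gain * stale        # corrective push, based on that reading
        buf.append(x)
    return x

fresh = settle(60, gain=0.6, lag=0)   # acting on current data: settles toward zero
laggy = settle(60, gain=0.6, lag=5)   # acting on 5-step-old data: swings grow instead
```

Same gain, same rule, same starting point; the only difference is the age of the information, and that alone flips the system from stable to unstable. This is exactly the windshield-versus-delayed-TV-screen problem.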

A relevant quote from Liker's "The Toyota Way" (page 94), where he is talking about the problems with large batches and the delays that go with them:
"...there are probably weeks of work in process between operations and it can take weeks or even months from the time a defect is caused until the time it is discovered. By then the trail of cause and effect is cold, making it nearly impossible to track down and identify why the defect occurred."

The hugely complex computation of making sense of such data is what human brains and visual systems are built for, and tuned for, and what machines costing a billion dollars cannot yet replace. Just give people a VIEW into what is happening as a result of what they are doing, and they will, by a miracle of connectionist distributed neural networks, figure out what's affecting what faster than a room full of analysts with supercomputers - in most cases.

That is the role that computation needs to take on: close-to-real-time feedback, in a highly visual form, to the workers on the outcome of the work currently being done. (This is a step up from the Lean-manufacturing visual signal system, which is a signal to management that something is amiss.)

The "swarm" is capable, like any good sports team, of making sense of "the play" long before the pundits have had a chance to replay the video 8 times and "analyze" it. Yes, there is a role for longer-term, more distant view that adds value.

But what there is NOT is a way to replace real-time feedback and visibility with ANY kind of delayed information summary. All the bases must be covered, and long-term impacts and global impacts will not be instantly visible to local workers -- but they have to be able to see what their own hands are doing or they'll be operating blind. "Dashboards" with 1-month delays on them cannot cover that gap. Too much of the information is stale by the time it arrives. Both are needed. Local feedback for local news, and successively more digested, more global feedback for successively larger and more slowly varying views.

Monday, September 17, 2007

Positive Organizational Psychology references

This is a collection of references to positive organizational psychology and high-reliability teams, which have in common the use of social feedback mechanisms to achieve high performance.

[This is taken from a post I made to the System Dynamics listserver.]


Here are some links into the literature on positive organizations.

For how top-down management can suppress dissenting views that challenge the model, see most of the high-reliability organization literature (aircraft carrier flight decks, cockpits, nuclear reactor control rooms, the US Army, etc.).

High-Reliability Organizations and asking for help (my thoughts)
Secrets of High-Reliability Organizations (in depth, academic paper, MIT)
High-Reliability.org web site
Threat and Error Management - aviation and hospital safety - Texas

Institute of Medicine - Crossing the Quality Chasm and microsystems (small group teamwork)

Nineteen case studies of health care organizations that dramatically improved their operations through the use of feedback-regulated small-team ("microsystems") operations are well documented in another post here.


A great deal of accessible literature and some excellent videos are here:
Center for Positive Organizational Scholarship, at the U of Michigan Ross School of Business

http://www.bus.umich.edu/positive/pos-research/pastpositivesessions.htm

http://www.bus.umich.edu/Positive/POS-Research/Readings-to-Get-Started.htm


Positive Deviance - (the new business model)

Consider this excerpt from the US Army Leadership Field Manual (FM 22-100):

1-3: Leadership starts at the top, with the character of the leader, with your character. In order to lead others you have to make sure your own house is in order.
1-7: The example you set is just as important as the words you speak.
1-8: Purpose ... does not mean that as a leader you must explain every decision to the satisfaction of your subordinates. It does mean that you must earn their trust: they must know from experience that you care about them and would not ask them to do something - particularly something dangerous - unless there was a good reason...
1-10: Trust is a basic bond of leadership, and it must be developed over time.
1-15: People who are trained this way will accomplish the mission, even when no one is watching.
1-23: You demonstrate your character through your behavior.
1-56: Effective leaders strive to create an environment of trust and understanding that encourages their subordinates to seize the initiative and act.
1-74: The ultimate end of war, at least as America fights it, is to restore peace.
4-9: Be aware of barriers to listening. Don't form your response while the other person is still talking.
4-20: Critical Reasoning ... means looking at a problem from several points of view instead of just being satisfied with the first answer that comes to mind.
4-24: Ethical leaders do the right things for the right reasons all the time, even when no one is watching.


Failure is perhaps our most taboo subject (link to John Gall Systemantics)
Houston - we have another problem (My thoughts on complexity and limits of one person's mind)


Wade Schuette, MBA, MPH
Ann Arbor, MI

Tuesday, June 26, 2007

Darwin rules but biologists dream of a paradigm shift

"There is nothing scientists enjoy more than the prospect of a good paradigm shift."

Douglas H. Erwin starts with that premise in an essay in the New York Times Science Times section today. Focusing on the hot topic of evolutionary and developmental biology, his title is "Darwin Still Rules, But Some Biologists Dream of a Paradigm Shift."

Of course, I can't help but notice that he uses the word "some" in the title to soften that premise.

And, in reality, paradigm shifts are initially very strongly resisted. Thomas Kuhn documented this so well in his famous Structure of Scientific Revolutions. It is in fact a crisis of a fundamental kind to challenge the prevailing, comfortable, organizing world-view. This is the large scale version of the resistance within an organization to Karl Weick's "mindfulness" and surfacing problems that seem to imply the whole mental model is wrong, instead of suppressing them. In that way, this is the key to "The Toyota Way", as well, which is obsessive about forcing a process that leaves problems no place to hide.

John Gall discusses this delightfully in his half-humorous, half-profound view of how systems fail and his invented field of "systemantics". See Failure is perhaps our most taboo subject.

My readers know this subject is near and dear to me right now, as I'm caught up in the paradigm shift within public health, which is transitioning from a local, biomedically-oriented view of causality to a global, context-oriented, multileveled "distal" or "ecological" view of what determines who we are, how we act, and whether we are healthy or not. The older view was historically very successful and proponents of it are not about to give it up without a fight. Entire careers and departments have sprung up around it, giving it staying power.

My readers also know how I tend to view all this commotion through the lens of what I'm calling "s-loops", and what I see (modestly) as even more basic than DNA as the building block of all life at all scales. This is my invented term for Self-aware, Self-sustaining, Self-repairing, Self-protective regulatory feedback control loops -- which is why a shorter term is helpful.

These loops don't really care what substrate or medium they are based in, and can happily cross from DNA to water levels to photons to whatever and back. Importantly, they don't care what scale of life they operate in, and are as happily at work in a "genetic circuit" as in the Tobacco industry, following exactly the same rules and principles.

Erwin gets so close to this in his essay, talking about how researchers in artificial life labs and the whole Santa Fe Institute crowd have shown that eyespots can evolve into our current eyeball models through evolution. I have to note on the side that "cross-over" is probably the more accurate term for what he's calling "mutation".

His point supports my point, which is that s-loops quickly develop "eyes" of one kind or another. Erwin says:
Natural selection, driven by competition for resources, allows the best-adapted individuals to produce the most surviving offspring... It is the primary agent in shaping new adaptations. Computer simulations have shown how selection can produce a complex eye from a simple eyespot in just a few hundred thousand years.
Well, any adaptive cybernetic thingie, whether made of silicon or carbon or virtual electrons, needs to be able to detect the outside world that it is supposed to be adapting to, duh. Why is this rocket science? And silent detection (eyes) is a lot safer in a predator-rich environment than active detection (touch). I'd rather see the snake than reach in the hole and find it by feeling around. Again, duh.

In my book, the whole evolutionary biology crowd is too close to the beast to be able to see the simple outline, even though they draw "feedback" loops and Krebs Cycles and genetic circuits all day long. Systems Dynamics people draw "causal loops" and that's great as far as it goes, but fails to focus as well on that very special class of regulatory feedback loops that become self-aware and undergo a sort of phase-shift in nature.

Once a goal-seeking control loop has been established, with any learning capacity at all, the goal ends up including self-survival -- at least, of the ones that survive! Those not interested in or good at survival, bless their hearts, are not generally with us any more - but make a great snack.

So, the persisting ones care about survival, and care about internal quality-control. They have to be able to repair damage and overcome noise. Once they get more complex, they need to be able to distinguish "me" from "not me" - ie, develop a rudimentary immune system. They need to learn how to fight back.

Then, the very clever ones, with even more propensity to survive, discover that they have some influence over the world around them. They can move to a new location and get out of the rain, which is one way of controlling the local world. They become terra-formers.

It doesn't take long to run into the fact that part of the world one is terraforming (or about to eat) already "belongs to" another S-loop. Uh oh. The dumb ones take up fighting even more, and the bright ones learn about alliances and stable ecological cross-supportive worlds.

But it still all comes down to an s-loop at the core, despite the fancy clothes. We still have a Self-aware, Self-sustaining, Self-repairing, Self-protective regulatory feedback control loop at work, bound by all the principles that control-system engineering has discovered and made into textbooks for those who have eyes to read.

My prior posts show that such a core loop will have a "blue gozinta", my somewhat tongue in cheek term for a "controller" that must have a few key parts, and always has them:
  • A sensor for the world
  • A sense-maker of the raw sensory input
  • A mental-model (paradigm, world-view) of what's outside.
  • A goal.
  • A way to measure difference between the goal state and the current state.
  • A mental-model of how things work and what parts it has that it can move.
  • A way to take historical data stream of sensory input of what it's done and where it is and what seems to affect what and use it to generate the next second's push, pull, or other way of impacting the world.
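Those parts can be wired together into a single working loop. The sketch below is my own illustration (a thermostat-like s-loop holding a heat-losing "room" near its goal), not code from the post; the class names, numbers, and the smoothing rule are all invented for the example.

```python
class World:
    """The part of the outside world the loop senses and pushes on."""
    def __init__(self):
        self.temperature = 10.0
    def tick(self):
        self.temperature -= 0.5          # the room constantly loses heat

class SLoop:
    def __init__(self, goal):
        self.goal = goal                 # the goal
        self.model = 0.0                 # mental model of what's outside
        self.history = []                # historical stream of sensory input
    def sense(self, world):
        raw = world.temperature          # sensor for the world
        self.model = 0.8 * self.model + 0.2 * raw   # sense-maker: smooth raw input into the model
        self.history.append(raw)
        return self.model
    def act(self, world):
        error = self.goal - self.sense(world)   # measure goal state vs. current state
        world.temperature += 0.3 * error        # the push, pull, or other way of impacting the world

world, loop = World(), SLoop(goal=20.0)
for _ in range(200):
    world.tick()
    loop.act(world)
# without the loop the room would be far below zero by now;
# with it, the temperature holds a little under the goal (the usual proportional offset)
```

Every piece on the list above appears: sensor, sense-maker, model, goal, error measure, and an action derived from them; the `history` stream is what a learning version would mine to improve its next push.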
But, it is not enough that it must locate and defend and repair the parts of itself that rust or break or are edible -- it must locate and defend the conceptual parts that can also break down -- that is, it must also defend its blue gozinta and its mental model or paradigm. To have the paradigm break down is to lose ability to make sense of the world, and therefore to die.

So, any S-loop will have a strong survival pressure to defend its own internal mental models and paradigm, countered by a learning system that has to come to grips with the fact that sometimes, yes, the "cheese moves."

If I call anything with a functioning S-loop "alive", then not only are all "Living things" alive, but so are corporations, nation-states, religions, cultures, social norms, prejudices, stereotypes, and evolutionary biologists' collective paradigm of how things work.

So, yes, by this model, of course they will fight back, and fiercely, if their paradigm is challenged. And, yes it makes sense that all the supportive control structures terraform around themselves locally supportive smaller s-loops, which are built or entrained to be part of the larger empire. In this case, researchers, and collections of researchers, have all organized around this older paradigm as part of their "given" world and shared assumption, and in acting to defend their own s-loop identity and world-view, give life to the defense of the entire field's identity and world-view - that is, the field's core s-loop. It is natural that the field, a meta-living thing, will then support supportive opinions and try to stamp out or squash contrary or challenging opinions and dissent. All s-loops will tend to do that, at all scales: genes, bosses, departments, corporations, religions, nation states -- all will tend to squash and suppress dissent.

But two things can happen. The old guard can die off and give way to the "young Turks" who have a different paradigm, or the old guard can learn and adapt - a traumatic crisis of paradigm shift.

But it can be successful, and go from everyone knowing that the new paradigm is "obviously wrong" to everyone adopting it and effectively changing the past to affirm now that "they've always believed that."

In the short run, failure of news to update the paradigm has been identified as the killer of high-reliability operation of pretty much any complex adaptive system, whether it's a nuclear reactor control room or the US Army or an aircraft cockpit or a hospital's surgery suite. When the old paradigm suppresses too much dissent, it misses the news that the cheese has moved, the old model of the cooling system must be broken, the enemy has moved locations from where headquarters was sure they were, etc. Actions no longer are based on reality, and tend to no longer support survival.

This appears to be the core issue about which we, as a society, are pretty ignorant right now -- what's an efficient way to make a "learning organization" that can collect input from its sensors and figure out when the internal mental model and paradigm need to be updated.

And, in the military, or hospitals, or any high-stakes operation, how do you keep the "control" system functioning, right in the middle of a mission, while ripping out the old paradigm and implementing a new one? For example, how do you transition from McGregor's "Theory X" management to "Theory Y" management without losing the whole ballgame during the transition? The middle state seems ugly and totally out of control, even if the far side "future state" looks way better than where we are now. Is there a way to skip the middle state and just wake up and find ourselves in the new paradigm?

This is effectively a phase-transition -- the same stuff is still in almost the same place, but now the way it is structured has changed, with possibly a lot of stray energy involved going in or coming out.

The benefit of an s-loop model of evolution is that, in addition to our genes and selves and species, it includes all those departments and corporations and cultures and nation-states around us that we can see daily trying to assert control and dominance over the world and paradigms around them.

And, the s-loop model has another really strong benefit over pure Darwin at one level -- namely, there is an alternative to "kill or be killed" known as "cooperate in an ecology" or "acquire and merge." Diverse ecologies are far more stable than homogeneous empires (the Borg) and have so far proven able to survive massive context and climate changes that even huge individual organisms (dinosaurs) couldn't survive.

S-loops are all around us. Two people in a strong relationship or marriage may succeed in forming a bond that is so real it takes on a life of its own - and becomes another s-loop that is self-aware, self-healing, and terraforming the space around it in order to survive better.

My main point is that the behavior of complex regulatory feedback control loops is not something I discovered yesterday -- this field has been studied for over 100 years and has great depth of literature, analysis tools, theory, principles, visualization tools, and ways to simulate situations and do "what if" analyses.

If pretty much everything we care about is in the grips of one or more s-loops, then wouldn't it make sense to get the Santa Fe Institute, or some group like that, to educate us on what kinds of behavior you can get out of a swarm of such things interacting with each other - especially if you allow for consciousness and efforts to terra-form, make alliances, and learn how to overcome the "sticky paradigm" problem with some sort of dynamically stable solution?


Saturday, June 16, 2007

Being a robot - 101: The cybernetic loop

I realized that I was just assuming that everyone knew how robots think.
Or for that matter, how babies think when they have to grab something.

We usually think of actions as big chunks, such as "Catch the ball."

Robots have to operate on a much more detailed, step by step level, with everything spelled out for them. Nothing is certain, so everything is just a process of getting a little closer and seeing if anything broke yet. And repeat.

They do this by following a very simple loop, over and over again. Spot where the ball is. Push your hand towards it a little bit. Remember that your hand doesn't always end up where you were trying to push it. Figure out which way the ball is NOW from your hand. Push your hand that way one notch. Figure out again which way the ball is now. Push your hand. Etc.

In a diagram, it would look something like this: [diagram of the look-plan-act cycle, repeated over and over]
Congratulations! If you understand that diagram, you are much closer to understanding how anything works. Actually, I think you're one huge step closer to understanding how almost everything works.
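Here is that loop as a short program (my sketch; the sloppy arm is simulated with a random push multiplier, and all the numbers are made up). Spot the ball, nudge the hand toward it, notice where the hand actually ended up, and repeat.

```python
import random

random.seed(1)                        # make the sloppy arm reproducible

ball, hand = 100.0, 0.0
steps = 0
while abs(ball - hand) >= 0.5:        # look: are we close enough to grab yet?
    gap = ball - hand                 # look: which way is the ball from the hand NOW?
    nudge = 0.1 * gap                 # plan: a small push in that direction
    hand += nudge * random.uniform(0.5, 1.5)  # act: the arm is sloppy, the push lands imprecisely
    steps += 1

print("reached in", steps, "small steps")
```

Notice there is no grand trajectory computed up front. The hand never lands exactly where it was pushed, the loop never cares, and it still arrives: each cycle only has to shrink the gap a little.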

There is a cycle of action, looking, planning, action, looking, planning, etc. Over and over.

The "planning" tends to be very short-range, uncomplicated planning - but what it lacks in complexity, it makes up for with speed and persistence and never getting bored.

So here's a very powerful fact about life. Not only does "a journey of 1000 leagues start with one step", but sometimes the ONLY way to plan that journey is one step at a time.

In fact, a series of small steps is a thousand times more capable than one big step, regardless how clever you are, and regardless how well "planned" that one step is. It took computer scientists almost 50 years to figure out that many small computers is actually much better than one large computer for getting work done. It took "artificial intelligence" workers about 30 years to figure out that many small, dumb rules added up to a better way to work than one huge, complicated rule - and it was easier to write and easier to fix too.

Why is this? Imagine that you are on one side of a small woods and you want to get to the other side.
It is very likely that there is no direction you can pick to walk in a straight line that won't bump into a tree.
But, if each step can be a slightly different direction, there are thousands of paths you can use to walk through the same forest without running into a tree.

What's the moral? It seems so "obvious" now, but it baffled scientists for 50 years -- a "curved" path is more flexible than a "straight" one. You can get places with a stupid little loop as guidance that no amount of clever planning can get you if you have to move in one step in one straight line.

This kind of cycle with many tiny steps and a very short pause to think between each step is called a "cybernetic loop". It looks deceptively simple, while it is amazingly powerful.

It can keep on working if the wind is blowing, without having to be reprogrammed. It can keep on working if the ball is rolling on a bumpy hillside. It can keep on working if your robot arm is rusty and doesn't always move as far as it used to when you push it, and sometimes it sticks entirely. This deceptive little loop is all the computer programming required, essentially.

Now, it will work a little better if the robot has some learning capacity and has done this kind of reaching thing before. The robot may learn that it should reach for where the ball will be, not where it is now.

You learned this so long ago you have forgotten that you learned it. Imagine a baseball game where the batter hits a high, fast ball and the guy in the field runs towards home base instead of towards where the ball looks like it will come down again, because that's "where the ball is now."
So, yes, taking the speed of the ball into account does help. But that's a minor change to the program. The same loop works, except the "planning" step is a little bit longer.
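The outfielder's trick is a one-line change to the same loop. In this sketch (mine, with invented numbers) the ball moves one unit per step; the naive chaser aims at where the ball is now and settles a constant distance behind, while the chaser that leads the ball by one step of its motion closes the gap completely.

```python
def chase(steps, lead):
    """Chase a ball moving +1 per step; aim `lead` units ahead of its current spot."""
    ball, hand = 0.0, -10.0
    for _ in range(steps):
        ball += 1.0                          # the ball keeps moving
        hand += 0.5 * (ball + lead - hand)   # push toward the (possibly led) target
    return ball - hand                       # how far behind the hand ends up

behind_naive   = chase(50, lead=0.0)   # aims at "where the ball is now": settles a fixed lag behind
behind_leading = chase(50, lead=1.0)   # leads by one step of ball motion: the gap closes to zero
```

The loop, the gain, and the "planning" step are otherwise identical; the only change is what the plan aims at, which is exactly the point: taking speed into account is a minor edit to the program, not a new program.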

So, this is profound wisdom I'm giving you here. It took all of mankind 50 years to figure this out, and some haven't got the news yet. You get it for free, right here, right now.

So, let me run it by you one more time. Here's the same moral, or same story, in slightly different words:

A plan of action that involves a repeated cycle of very small steps, with some looking and thinking between steps, is much more flexible, and much more "powerful", than trying to "solve" any problem in one huge step.

Furthermore, if the world is complicated, and tends to have hills and bumps and wind gusts and rusty arms, you can be guaranteed that no "single-step" plan will ever succeed. In that case, ONLY a multi-step approach will get you where you want to go. If your job involves "going through the woods" and around trees that you don't even know about yet, it is much easier to plan to go around trees than try to "collect data" on the location of every tree, put it into some huge list or database, print out a map, and find "a straight path" through the forest.

This doesn't say "don't bother planning." It does say, "don't waste your time trying to find a linear solution to a curved path." There are millions of curved paths that can work just fine, in cases, like the woods, where there is no straight path possible.

And, one more time through it, from the Institute of Medicine's perspective, as in dealing with small teams (called "microsystems"). If you are dealing with a "complex, adaptive system" (like a hospital), it is way more powerful to just rig up the team with eyes and a feedback loop than it is to try to have hospital management "plan" how to improve things. Ditto for "The Toyota Way", or the power of "continuous improvement", or what Deming taught, or the "Plan-Do-Check-Act" (PDCA) cycle.

Empowering your front-line employees by giving them "eyes" and a little room to maneuver on their own to get around "trees" is a very powerful strategy that works in practice.

It is based on the most powerful "algorithm" we know of today - the "cybernetic loop."

Oh, yes, one more tiny thing. Since this is such a powerful "algorithm" or "paradigm" or way of doing things, much of Nature and your body already knew about it and uses it.

Public Health is sort of vaguely discovering that the "action" step always needs to be followed with a "reflection" or "assessment" step, but hasn't yet woken up to the fact that it is reinventing the wheel, or more precisely, the cybernetic loop, yet one more time. It hasn't figured out that many smaller steps add up to a more powerful path-generator than one large step.

And, sigh, enterprise budget processes don't reflect this wisdom. For years I fought with the fact that Universities tend to have "annual budget cycles", and enterprise computing is seen as coming in only two flavors: "maintenance" and "huge projects". Maintenance money can only be spent keeping things the same. Huge Project money ("capital budgets") can only be used to take, well, huge steps in a big straight line, and the big straight line, or "project plan" has to be computed up front and committed to before starting.

Well, duh, no wonder that doesn't work. That CANNOT BE MADE TO WORK. There are too many unknowns and unknowables, too many rusty arms, too many trees.

But every time it fails, the "solution" is to plan even LARGER steps next time, with a much BIGGER database that lists every single tree and bush and pothole. THEN, oh boy, you betcha we'll succeed.

Nope. That's a bad algorithm, a bad paradigm. The cybernetic loop model tells us the answer is way back at the other end: continuous, incremental, small improvement steps. Steps driven by local "feedback" that doesn't even involve upper management.

You can get to places you need to go with a million simultaneous tiny, sensible steps that people can understand that you cannot get to with one huge project, regardless how many billions you spend on "planning" it. Our whole accounting system, meant to help us spend money wisely, is causing us to spend it foolishly.
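To make the "million tiny steps" idea concrete, here's a minimal sketch (my own toy example, not from the IOM report): each step is tiny, and local feedback alone decides whether to keep it. The metric and all the numbers are invented for illustration.

```python
import random

random.seed(0)  # deterministic for the demo

def improve(state, metric, n_steps=10_000, step=0.01):
    """Many tiny feedback-driven steps: try a small change, keep it
    only if the locally measured outcome improves. No central plan."""
    for _ in range(n_steps):
        candidate = state + random.uniform(-step, step)
        if metric(candidate) > metric(state):  # local "eyes" say it's better
            state = candidate
    return state

# A stand-in quality measure whose best value sits at 3.7 (made up).
def quality(s):
    return -(s - 3.7) ** 2

print(round(improve(0.0, quality), 1))  # climbs to roughly 3.7
```

Ten thousand blind guesses, each kept or discarded by a local sensor, find the optimum without anyone ever computing a "project plan" for the path.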

As the IOM report realizes - "We don't need a billion dollar project -- we need a billion, one-dollar projects." (paraphrased from "Crossing the Quality Chasm"). This isn't "sour grapes" or "some dumb idea" -- this is the most profound wisdom humanity has come up with yet.

It's kind of the Chinese approach. If every person picks up one piece of trash a day, it's way more successful than if every person sends $1000 per year into a central location where we build the Institute of Trash Pickup and study the trash-pickup problem and produce endless reports and finally some huge trash collection system that doesn't really work but is really expensive to maintain when they're not on strike (thank you, John Gall, for that insight.)

Ditto for installation of some kind of automated physician order entry system or other massive cultural change in the way things are done. It may seem "hard" to figure out what huge new system, in one step, will get us from point A to point B. Hmmm. Maybe that's because there aren't any "one-step" solutions to getting through the forest, and we need to reconsider our approach. Maybe a million tiny adjustments will solve two problems at once: the "What do we do?" problem, and the ever-popular "How do we implement it?" problem.

Ten thousand tiny search engines (people) each looking for one tiny step that is possible and totally understood that would help "a little bit" actually constitutes a "massively parallel supercomputer" that can outstrip almost any other way of "solving" BOTH of those problems simultaneously. That's really cool, because it turns out not to matter how great a solution is on paper or at some other site, if there's no way to get it implemented here without spilling the coffee and crashing the bus. That's the lesson Toyota learned. Forget central planning, which the Soviet Union demonstrated doesn't work. Empower the troops to use their eyes and brains and good judgement and make a million adjustments of 0.001 percent size.

It's an incredibly powerful algorithm. It doesn't require brilliant central planning officers. But it does require believing that the ground troops have enough brains to carry their coffee across the office without spilling it, even if they just waxed the floor. Turns out, according to Toyota, that's probably true.

Oh, yes, I almost forgot. It would seem to make sense that, if this cybernetic doodad is so powerful, it is already in operation in billions of places around us in society and biology. That would argue that it might be worthwhile to have cybernetic-doodad detectors, and cybernetic-doodad statistical tools, available to use to spot and describe and tweak such thingies.

Most of the last 6 months postings to this weblog have tried to make that argument, in more complex ways, and maybe that's my problem.

The American Indians knew this - that the Great Spirit worked in circles, not lines. Taoism knows about circles and cycles. "Systems thinking" involves accepting that there are important places where feedback loops just might possibly be involved.

We're so close now. Bring it home, baby!

(Posted in memory of Don Herbert, "Mr. Wizard", who died last week, and taught millions of kids, including me, basic science-made-easy on his TV show.)

Wednesday, June 13, 2007

Another gentle introduction to control loops



So, my favorite reader tells me the diagram of the loop with 432 boxes or whatever "left her cold."

So, here's a little more gentle ramp up to that diagram I did yesterday. The classic model of "causality" is that one thing causes another thing. The "causality" goes only one way. B has no impact on A.

But, sometimes there is "feedback" and B does affect A.
Often, "feedback" is mistakenly described as "positive" or "negative". In control theory, feedback is just information. But since those terms are pretty common, let's review what the users mean by them.
Positive feedback (above). Things reinforce each other and we get an upwards spiral of whatever it is. Both sides chase each other upwards. A "virtuous spiral".


Negative feedback (above). B keeps raining on A's parade, and after a while A gives up and stops trying. But, actually, feedback loops have many more different behaviors than those two. Here's a third possible outcome - an oscillating condition that keeps going in "cycles", like the economy or the number of birds in a given region.

Actually, again, in "control theory" the thing called "feedback" is just information. By itself it has no "positive" or "negative" content. All that meaning is actually supplied by some active entity, maybe a person, who has to interpret what that news means.

Take a new driver trying to keep the car on the road. He has a "goal", to stay in the right hand lane (in the US at least). He sees what's going on and responds by turning the steering wheel one way or another. (This is much more dramatic if he is learning how to drive in reverse and steer going backwards for the first time.)




OK, now to get from THERE to the picture I used, or the format used in "control theory" textbooks, we need to identify a few more familiar parts and rearrange the pieces a little. Here goes. Let me add a "command" box with "turn left!", and a line called 'visual feedback'.

OK, so keep on unfolding, and add an explicit eyeball. (That's what that is supposed to be in the lower right). In general this is not always a visual feedback, and could be sound or tactile or whatever, so we break out a "sensor" of some kind (in this case, an eyeball).

And, finally we make the step to removing all my effort at drawing cars and people and roads, and get a very dry, abstract-looking diagram that looks something like the one below. The "loop" is the part in green. The "controller" is a general term, and in this case, it's the driver of the car. Sticking off the loop, like some kind of side radical group in chemistry, are two boxes - the "goal" of the loop, and some external conditions that make life harder (usually), such as the fact that the road bends sharply to the left ahead.
In real life, there are some additional feedback paths if we look over a longer time window. For example, by picking the right controls of the steering wheel, the driver can pick one road instead of another, which indirectly changes the "road" box.

And, the thing that is following the loop around over and over is a very abstract thing - it's "control". "Control" flows, and like electricity, it can move through solid objects like current flows through solid copper wire. Control can happily leap from one medium to another, now being in a steering wheel position, now in a car position, now in light travelling to the driver's eyeballs, now in neural impulses going to the arm, etc. Control doesn't care who it hitches a ride from.

As I've said in earlier posts on this same subject, this makes it hard to find all the parts of a control loop sometimes. I trace out the parts in a person getting a glass of water out of a faucet in this prior post. (See "controlled by the Blue Gozinta")

OK, but why go to all this effort, you may well ask. The answer is that this can "solve" many problems for us. If we can rotate and stretch the problem around until we can see it as a control loop, then there are software programs that can tell us what could or will happen next, if we push here versus there. We can tweak and tune them. We can design them and redesign them. We can draw on a 100 year deep literature on "what goes wrong and why" so we know a classic problem when we see it. We can realize "Oh this is because we have a lag time between when we turn the boat's helm and when the boat gets around to responding" so we know what needs to be fixed or accounted for.
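That helm-lag problem can be sketched numerically. Here is a toy proportional controller of my own devising (all numbers arbitrary): commands take effect only after a few ticks of delay, and that delay alone turns a smooth approach into overshoot.

```python
def max_overshoot(lag_ticks, gain=0.4, steps=40):
    """Steer toward a goal, but each command sits in a queue for
    `lag_ticks` before the boat actually responds."""
    heading, goal = 0.0, 10.0
    pending = [0.0] * lag_ticks               # commands still "in the pipe"
    worst = 0.0
    for _ in range(steps):
        pending.append(gain * (goal - heading))  # issue a fresh correction...
        heading += pending.pop(0)                # ...but apply a stale one
        worst = max(worst, heading - goal)
    return worst

print(max_overshoot(0))  # no lag: settles smoothly, never overshoots -- 0.0
print(max_overshoot(3))  # 3-tick lag: sails well past the goal
```

Nothing in the "plant" changed between the two runs; the lag in the loop is the entire story, which is exactly the kind of classic problem the textbooks catalog.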

Or, it can help us design feedback-based interventions, as the IOM has suggested in "Crossing the Quality Chasm" for "microsystems" -- small teams of health care providers. I used a feedback model in my "Capstone" to inform the suggestion of what factors the diabetes team should be considering routinely and to be sure it forms some kind of coherent and complete set of topics while being as compact as possible.
Also, from control systems textbooks we can see quickly why "dashboards" have a very dangerous downside if the data used to steer an organization is 2 months old -- steering by stale data like that tends to produce wild oscillations.

And, we can design "regulatory processes" and "regulations" that have a snowball's chance in a hot place of actually working, because we did our homework and computed the necessary numbers in the underlying feedback control process we're trying to tweak. Or we can realize there's no point in pressing there, because that point won't budge, and we need to save our pennies and find a better intervention point. We can model "what if" behavior.

That's what this is all good for. It connects a huge problem domain of public health to a huge solution set already found and in place in "control system engineering" on the other side of campus. It makes the powerful analysis tools Engineers already use daily to design jet planes suddenly useful to redesign health care reimbursement policies. Etc.

As I've said before, to public health and control system engineers - you guys REALLY need to meet each other and form an alliance.

Sunday, June 10, 2007

Bees, infection, lean, and emergent immune systems

"What's good for the hive is good for the bees." That's one of the posters near the cafe at Johns Hopkins Bloomberg School of Public Health, in Baltimore. I recall it's described as an "African saying."

I've gone on at great length looking for the right way to describe and convey the difference between multi-level organization and, well, "heaps."

There seems to be an extremely strong bias in the US against anything that has to do with higher organizational levels of humans - unless it's man-made, centrally-planned, top-down business organizations. Anything "bottom up" has a cultural repellent overtone of collectivism or labor-movements or community-organizers (read "troublemakers") or socialism or communism or Star Trek's ultimate bogeyman - "The Borg."

It's puzzling. It's as if there's a conviction on the one hand that the country has passed through its entire need for "social and economic development" and is trying to forget that awkward, teenager stage when things didn't work out well, now that ... um ... we have everything perfectly under control?

That's pretty much a "theory X" model, where all the expertise is concentrated at the top, and the only thing everyone below that level is good for is blind obedient labor or paying taxes. And maybe that did work in the middle ages or for running plantations or companies where the labor was just an extension of the company's founder.

But, that model also ran out of steam a few decades ago, as more companies started being "knowledge based" with "knowledge workers," all of which meant that the center of mass of the expertise was moving from the executive wing to the shop floor. In hospitals, for example, there was a traumatic transition, that's still happening, where the main administrator of the hospital would now be a professional administrator, who was not even a medical doctor. The expertise in medical matters was shifting out to the floor, and the expertise in central administration was becoming, gasp, "administration" -- which previously had been sort of a dirty "four-letter word", the kind of thing that only worn out doctors would do when they couldn't keep up with "real work."

All this is morphing slowly, and with loud shrieks and moans and strenuous objections, towards "theory Y" where the laborers are assumed to be highly competent experts and in touch with reality on the floor or "ground truth" or "in country" or whatever the context is. Central "management's" role became less to "direct" or "manage" the operation than to "orchestrate" it. There's no way the new "conductors" could even begin to grasp how to operate one of the "instruments" out there in the orchestra, let alone be the fount of all wisdom on every one of the sub-sub-sub-specialties and stay current on every relevant journal and attend every important conference.

So, it's a new "paradigm." The "chain of command" doesn't go away, but the nature of the command is distinguished very carefully from "information flow".

Now, if you look at this through the high-magnification lens, it doesn't look very different from the old model. (see picture below).


To see the difference, you need to rotate the microscope lenses around to a lower-power, broader field-of-view lens, and you can see what's changed, or what has to change, to make this new model work as advertised.

The big changes are that:
  • News about the outside world comes in at the bottom (the front, the ground troops), and loops up to the top, where it has an effect, altering the new, revised orders that come back down the chain. That loop is travelled many times, but is still relatively slow.
  • There is a very fast local loop, where feedback about performance comes right into the low level team, which responds to it on the spot, with no involvement of management. This is akin to your hand retracting from a hot stove without having to check in with the brain first. Or equivalent to the Coast Guard in Katrina, where they were pre-authorized to make decisions on their own without bothering headquarters.
  • In Theory X, the news comes in at the top, which has limited bandwidth (a small, 1-person pipe); then only some of it goes down, and some is lost at each level, depending on upper managers to recognize what lower employees care about. Finally a dribble of news makes it to the front. The troops report what they see, and differences with what the orders seem to imply, but at each level going back up the chain, half of that is deleted by managers who think they know what the boss actually cares about. By the time the internal news gets up to the boss, 3 months later, it's unrecognizable.
  • Theory X is very hard to steer with. The Boss is effectively blind to what's going on inside, the troops are essentially blind to what the boss sees outside, and the whole thing feels like "pushing" on a rope.
  • Theory Y is very easy to steer with. Most of the heavy lifting is done at each level with fast feedback that never has to go up to the brain and back down to the hand. Because the loop upwards is fast and phase-locked, news at the front actually makes it up to the top, which can change the mental model and the marching orders. The troops effectively control the boss, the same way the water-level controls the hand when filling a glass of water.
  • Carrying on the "rope" analogy, it's like PULLING on a rope that goes out to a pulley and comes back to a pulley and goes in a big loop. You can accomplish "pushing" your clothes out to dry by "pulling" on the rope. The LOOP does the magic. You need the loop.

Well, I came in to talk about bees and emergent immune systems, and I've headed off in what seems a different direction, so now let's stop, turn around, and look at the "bee problem" from the top of this mountain we just climbed.

What's the problem? As the Los Angeles Times put it this morning,
Suddenly, the bees are simply vanishing.

by Jia-Rui Chong and Thomas H. Maugh II
June 10, 2007

The puzzling phenomenon, known as Colony Collapse Disorder, or CCD, has been reported in 35 states, five Canadian provinces and several European countries. The die-off has cost U.S. beekeepers about $150 million in losses and an uncertain amount for farmers scrambling to find bees to pollinate their crops.

Scientists have scoured the country, finding eerily abandoned hives in which the bees seem to have simply left their honey and broods of baby bees.

"We've never experienced bees going off and leaving brood behind," said Pennsylvania-based beekeeper Dave Hackenberg. "It was like a mother going off and leaving her kids."

Researchers have picked through the abandoned hives, dissected thousands of bees, and tested for viruses, bacteria, pesticides and mites.

So far, they are stumped.
The problem seems to be both a parasite (one that can be killed by irradiating the hive) and a simultaneous breakdown in the bees' immune systems. The article states:
Several researchers, including entomologist Diana Cox-Foster of Penn State and Dr. W. Ian Lipkin, a virologist at Columbia University, have been sifting through bees that have been ground up, looking for viruses and bacteria.

"We were shocked by the huge number of pathogens present in each adult bee," Cox-Foster said at a recent meeting of bee researchers convened by the U.S. Department of Agriculture.

The large number of pathogens suggested, she said, that the bees' immune systems had been suppressed, allowing the proliferation of infections.
The article goes on looking at parasites, but I want to hit the brakes here, get off the highway, and go up the side road of looking at the question of suppression of immune systems. This is pure speculation, but possibly important speculation.

What catches my attention here is that there is a natural, multi-level beastie here - and that is that honeybees don't exist as individuals, they exist as parts-of-a-hive. Increasingly, research is showing that humans have a lot of the same tendencies, but for bees this is extreme. If you remove a honeybee from its hive, I suspect it will simply die - as will a human cell if you remove it from a human body. (That's why it's so hard to cultivate human "cell-lines".)

The latest literature on humans shows that it's not just that a person's immune system reflects the "health" of their own body - it also reflects whether the person has become isolated and fragmented from society. One of the most painful things for a person, which is sort of surprising under the "rational actor" model, is that imprisonment in "solitary confinement" is extremely draining, even to prisoners. The need for daily interaction with other humans is tangible.

Chimps, if removed from their troop, have been shown to sacrifice a chance for food for a chance to open a window and see what the other chimps are doing. This is a deep, biological need, not confined to one species, nor, as the human cell example shows, confined to a single "level" of organizational hierarchy.

The point is this. If you forget what your eyes see, and look at what the mathematics show, human beings, or bees, or cells, are not the shape your eye sees. They have parts of their physiological control and regulatory systems that extend out into their larger social structure. Those are important parts, and if those parts are not well, or damaged, the damage is quickly manifested in the local physiology of the individual as well.

For tax or legal purposes, or buying a train ticket, we are separate "individuals". For purposes of computing how regulatory processes operate, and how they fail, we are not nearly so "separate". Because our eyes don't show us these invisible (but very real) connections, we tend to discount them, or ignore them. We do so at our peril.

These tendrils of our "meta-bodies" are like having our blood diverted from our bodies in tubes in a dialysis unit, run out to some other place, processed and cleaned up, and returned to our bodies through some other tube. We can say that is not "me", but in the sense that a breakdown in that system can directly cause you to be sick or die, it really is "you".

Apparently, cells, chimps, bees, humans, whatever, develop many such external loops in their interactions with each other. These can be so great that it is common to hear a person say that when a loved one abandons them or dies, "it is as if a part of me died."

Alternatively, it's been shown that even cells with damaged DNA can be supported by a "field effect" from neighboring healthy cells, and not become cancerous. [I'll track down the reference.] Notice that the "life sciences" spend a huge amount of effort on "signal transduction" and the ways signals are communicated between cells, or between genes with "genetic circuits", but there's little use of the model that this low-level communication, if it persists, really has to be part of a high-level closed feedback control loop with a mind of its own, and that the key thing to do is to find that loop. As I showed a few days ago, tracing out the loop is a challenge, because control information leaps happily from medium to medium, now in neurons, now in voice, now in electromagnetic waves, now in liquid flow, etc. The point is, if you know there MUST be a closed loop, so that the cells can PULL on the ROPE (discussed above), then you are encouraged to find the rest of the pieces.
And, then, of course, if you're a drug company, you have a whole new set of intervention points at the meta-loop level.
In extreme cases, when the culture and society collapses, the impact can be dramatic. I suspect that collapse of cultural integrity is part of what is going on in the huge rise in suicide rates among native Americans right now. The history of the Pima Indians, in the USA, shows a dramatic collapse of physical and social health, going from a tribe with almost no diabetes and one with a reputation for being extremely cordial in 1800, to one with something like 80% diabetes rates and a high rate of suicide and interpersonal violence. Many factors are put forward to explain this, but I'm biased to looking at multi-level models for this kind of effect.

So, if something is killing off the honeybees, and the something is enabled by an apparent collapse of the individuals' "immune systems", then other people will start looking at what's wrong with "this bee" (the "clinical medicine" model), and I'd prefer to start the investigation at the other end and ask "Is something wrong with the hive?"

In other words, what's "broken" for each bee may not be "inside the box" of that bee's "body", but may be out in the external part of the control-system-body that is connected into and through the "hive." In the analogy, the "dialysis machine" is broken, or the tubes running to it are clogged or kinked, or something like that.

I think this can be a very powerful model, to think that there are TWO life-forms involved that may need medical attention. One is a lot of individual cells, or bees, or people. The other is a much larger scale emergent thingie, that we'd call "our body", or "the hive" or "society" respectively.

To date, we've considered emergent thingies as if they would evaporate if you took away the tiny things that make up the big thingie.

But I've presented many cases where the emergent thingie suddenly transitions, becomes self aware, and takes on "a life of its own" and even acts as if it has "a mind of its own."

For humans, the emergent thingie is very familiar - it's "us". Cells may have formed the substrate in which our spirit was formed (or placed, if you prefer that model), but now that spirit has definitely taken on a life and identity and mind of its own that is only remotely related to the lives of the cells that once made it up, but now are subordinate to it.

We see the same pattern in many other places. Mental images in human or machine vision start by being made up of many small patches of data or patterns, but once they combine into an overall "vision" or "percept", that thingie takes on a life of its own and even if we remove the source data it persists. In fact, even if the data now refute it, it can continue to persist, and defend itself, and change what we look at in order to sustain itself. Wow.

So, I think it is safe to say that everyone recognizes that bees have a very strong social component to their daily activity and identity. And, like corporations that continue to exist long after the founders have died or left, "hives" tend to persist even if individual bees die off.

But, OK, say the hive is a living thing that has a "meta-body" and has something that is appropriately called "health" that is a mostly-independent factor from the health of the individuals within it. I say "mostly" because it's only in the short term that they may appear to be separate -- in the long term, they are tightly coupled because feedback loops have compounded the "weak interactions" and "loose coupling" into dominant factors.

So, if the bees are dying, it may be because the hive-scale-thingie is dying first. As with any feedback loop, causal "directions" become a meaningless concept. The hive and the individuals rise or fall as one, in an upward or downward spiral feedback loop pattern.

But, it still can make sense for humans to talk about "psychological problems" or "immune system problems" that are defined at the large-scale, meta-body level and may not even make sense at the individual cell level.

The point is, things can "break" or "be wrong" at that large scale.

That's why I keep on flashing that M.C. Escher picture of the waterfall -- everything is healthy locally, but it's broken globally. The two are completely distinct in the short run (but coupled in the long run in any living thing).

Is this what's going on with the bees? I have no idea. But I am pretty certain that very few people who aren't systems analysts would even start with that approach and look there for signs of something wrong at that level. So, it would be "baffling."

This is exactly what many social and corporate organizational problems are. At a local level, we see the equivalent of "bees dying" or "employees burning out" or "employees quitting" and we are baffled as to what's wrong with them. Sometimes, the problem isn't at that level. Sometimes it's a structural problem, a "systems" problem. Those are hard to see to begin with, and impossible to see if you don't look for them on purpose and methodically.

A great deal of management literature these days, including The Toyota Way by Jeffrey Liker, describe problems and solutions at the meta-level, without ever springing, in my mind, to the overall pattern they are pointing to. This is an emergent-organism that has a meta-body. It acts like its alive, and it can have disorders and dysfunctions and "health" and often needs "medical attention" at its own scale. (But save us from most "consultants"!)

If you look at all the emphasis on "vision" or "spirit" or "direction" or "identity" in the management literature, you can simplify it all to an effort to create a self-aware, self-sustaining, emergent beastie at the meta-level -- a beastie that will then turn around and form a nurturing context and reshape and empower the people that just gave it life.

So, it's one thing if you push up emergent life, and when you let go it falls down again. That's one case. In this other case, it's more like a radio antenna or something -- you push up emergent life and push so hard or well that the life breaks loose and is radiated out and takes on an existence of its own outside the antenna. Then, you can shut down the transmitter or dismantle the antenna, and the radiated wave just keeps on propagating outward.

Except in this case, it's more like a ring-vortex wave that just sits in place, like a little donut-shaped "halo" above us. It doesn't shoot off at the speed of light, but instead turns around and comes back and embraces the parts that just created it.

I think this is what we're trying to do with corporate management these days, effectively.
I think that's what "lean" and "six-sigma" and "Toyota Production System" are about. They're about creating a culture that is vital, and self-sustaining and that reaches around people and becomes the sea they swim in and draw life from, while they complete the cycle and return the favor.

That requires a lot of complete loops to work, and they have to be vertically oriented. We need to have the vertical donut model, not the open-ended "tree" model of management to bring all the pieces into "phase-lock" and allow a laser-beam output, not incoherent light.

And, when it breaks, we need "doctors" of the corporate spirit to bring it into alignment with a pattern that works again.

But it's not "the Borg" and it's not scary and it's not homogenization and it's not domination and it's not an abandonment of a social hierarchy -- but it is a different use of those pathways, a transforming use, that uses vertical close-paths to make the top the bottom and bring vertical unity to the compound-level beast. Then, it works. Then, it's great!

Note: All closed paths are "loops", so any causal loop diagram will have lots of "loops".

Most of those loops aren't dominant. What will be dominant will be the FEEDBACK CONTROL LOOPS. These will be self-aware, self-repairing, persistent, goal-seeking loops. THOSE are the key players over any long period of time in living systems. Those are where things break, or never got formed in the first place. And those are the intervention points for a sustainable intervention.

Tuesday, June 05, 2007

Gentle primer on feedback control loops

Here's yet another pass at the basic concepts using mostly pictures. Let me know if this works better for you or your students! I can adjust what I'm putting here to your needs and interests, but only if I get feedback!

The first picture shows rising and falling output. This is often what people mean or think of when they talk about "positive" and "negative" feedback.

Unfortunately, it's also their concept of where the "feedback" concept stops, so they missed all the good stuff.

The next picture shows converging output as a result of a simple control ("goal seeking") feedback loop.

The output rises or falls to some preset value or "goal".

Then, the system can be "tweaked" a little so it converges faster on the goal, but that often will result in overshooting and coming back with a little bit (or a lot) of bouncing.
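That trade-off between speed and bouncing shows up in even the simplest goal-seeking loop. Here's a sketch with my own toy numbers: each tick, the loop corrects some fraction (the "gain") of the remaining error.

```python
def settle(gain, goal=1.0, steps=30):
    """Goal-seeking loop: each tick, close `gain` of the remaining gap."""
    x, trace = 0.0, []
    for _ in range(steps):
        x += gain * (goal - x)
        trace.append(x)
    return trace

gentle = settle(0.3)   # converges smoothly, never passes the goal
twitchy = settle(1.6)  # gets there faster, but overshoots and rings
print(max(gentle) <= 1.0, max(twitchy) > 1.0)  # True True
```

Crank the gain past a certain point and the "tweaked" loop trades its smooth approach for overshoot and bouncing, exactly as described above.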

The next picture, of the car getting to a hill from the flatland below, is supposed to show how a speed control system should do a good job of maintaining the same speed, even when the outside world changes a lot.

Then the picture of the car going up and down the mountain explains more about that. Without speed "control", the car would slow down going up the hill, and speed up a lot going down the hill. Instead, the speed is almost constant.

But, this whole effect of locking down or "latching" or "clamping" a value, such as speed, to some predetermined value is really confusing to statistical analysis. The effect is that a variation that is expected to be there is not there. There's no trace of it. So far as statistical analysis shows, there is absolutely no relationship between the slope of the hill and the speed of the car. Well, that's true and false. The speed may not be changing, but the speed of the engine has changed a lot.
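To see how thoroughly a working loop can hide a relationship from statistics, consider an idealized cruise control that cancels the hill perfectly (all numbers invented; a real controller would leave small residuals):

```python
# Idealized cruise control: the loop cancels the hill exactly.
slopes = [s / 10 for s in range(-50, 51)]      # hill grade, -5% to +5%
speed = [60.0] * len(slopes)                   # clamped: zero visible variation
engine_rpm = [2000 + 150 * s for s in slopes]  # where the variation actually went

def spread(xs):
    return max(xs) - min(xs)

print(spread(speed), spread(engine_rpm))  # 0.0 1500.0
```

A regression of speed against slope finds literally nothing -- zero variance to explain -- while a regression of engine_rpm against slope finds everything. Same car, same hill; the loop just moved the variation somewhere the naive analysis wasn't looking.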

The same kind of effect could be seen in an anti-smoking campaign. The level of smoking in a region is constant, and then you spend $10,000 to try to reduce smoking. The tobacco companies notice a slight drop and counter by spending $200,000 to increase advertising. The net result is zero change in the smoking rate. Did your intervention have no effect? Well, yes and no.

The output (cigarette sales) has been "clamped" to a set value by a feedback control loop, so it varies much less than you'd expect. Again, this is hard to "see" with statistics that assume there is no feedback loop involved in the process.

For that matter, the fact that the "usual" statistical tests should ONLY be used if there is no feedback loop is often either unknown or dismissed casually, when it's the most important fact on the table.

(The "General Linear Model" only gives you reliable results if the world is, well, "linear" -- and feedback loop relationships are NEVER linear, unless they're FLAT, which also confuses the statistical tests, and sometimes the statisticians or policy makers.

The good news is that there is a transformation of the data that makes it go back to "linear" again, which involves "Laplace Transforms", which I'm not going to get into today. But, stay tuned, we can make this circular world "linear" again so it can be analyzed and you guys can compute your "p-values" and statistical tests of significance and hypothesis testing, etc.)






OK, then, I illustrate INSTABILITY caused by a "control loop". In this case, a new driver uses a poor set of rules ("If slow, hit the gas. If fast, hit the brake pedal."). Those rules result in a very jerky ride, alternating between going too fast and too slow.

Note, however, that the CAR is not broken. The pedals are not broken. The only problem is that the mental rules used to transform the news about the speed into pedal action are a poor choice of rules - in this case, they have no "look ahead" built into them.
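Those two rules are easy to put in code. This is a minimal sketch with invented numbers - a "bang-bang" rule with no look-ahead, which never settles:

```python
# The new driver's rules: "If slow, hit the gas. If fast, hit the brake."
# A bang-bang controller with no look-ahead. Constants are illustrative.

def bang_bang_drive(steps=200, target=60.0, dt=0.1):
    v = 40.0               # start below the target speed
    history = []
    for _ in range(steps):
        accel = 8.0 if v < target else -8.0  # the naive rules, nothing else
        v += accel * dt
        history.append(v)
    return history

speeds = bang_bang_drive()
late = speeds[100:]   # long after the car first reached the target
print(round(min(late), 1), round(max(late), 1))  # still bouncing around 60
```

The car reaches 60 quickly and then bounces around it forever; the size of the bounce is set by how hard the pedals are pushed, not by anything wearing out.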


Then I have a rather noisy picture that's really three pictures in one.

The left top side has a red line showing how some variable, say position of a ship in a river, varies over time. The ship stays mostly mid-stream until the boss decides to "help". Say the boss is up in the fog, and needs to get news from the deckhands, who can actually see the river and the river banks.

Unfortunately, the boss gets position reports by a runner, who takes 5 minutes to get up to the cabin.
As a result, using perfectly good RULES, the captain sees that the ship is heading too far to the right. (Well, yes, properly that's PORT or STARBOARD or some nautical term. For now, call it "right.")

So, she uses a good rule - if the ship is heading too far to the right, turn it more to the LEFT, and issues that command.

The problem is that the crew had already adjusted for the too much to the right problem, but too recently for the captain to know about, given the 5 minute delay. So, the captain tells them to turn even MORE to the left, which only makes the problem worse.

The resulting control loop has become unstable, and the ship will crash onto one shore or the other - not because any person is doing the wrong thing, but because the wrongness is extremely subtle. There is a LAG TIME between where the ship WAS and where the captain thinks it is NOW, based on her "dashboard".

That "little" change makes a stable system suddenly become unstable and deadly.

People who are familiar with the ways of control systems will be on the lookout for such effects, and take steps to counteract them. People who skipped this lesson are more likely to drive the ship onto the rocks, while complaining about baffling incompetency, either above or below their own level in the organization.



The last picture shows some of the things that "control system engineers" think about.

These are terms such as "rise time", "overshoot", "settling time", and "stability". And Cost.

These terms deal with how the system will respond to an external change, if one happened.

But a lot of the effort and tools are dedicated to being sure that the system, as built, will be STABLE, and won't cause reasonable components, doing reasonable things, to crash into something.

This kind of stability is a "system variable" in a very real sense that is lost when any heap of interacting parts is called "a system." It is something that has a very real physical meaning. It is something that can be measured, directly or indirectly. It is something that can be managed and controlled, by very small changes such as reducing lag times for data to get from person A to person B.

And my whole point is that this is something people analyzing and designing organizational behavior and public health regulatory interventions should understand and use on a daily basis.

Maybe we need a simulator, or game, that is fun to play and gets people into situations where they have to understand these concepts, on a gut level, in order to "win" the game.

These are not "alien" concepts. Most of our lives we are in one or another kind of feedback control loop, and we have LOTS of experience with what goes right and wrong in them -- we just haven't categorized it into these buckets and recognized what's going on yet.

One thing I will confidently assert, is that once you understand what a feedback control loop looks like, and how to spot them, your eyes will open and the entire world around you will be transformed. Suddenly, you'll be surrounded by feedback loops that weren't there before.

The difficulty in seeing them may be due to the fact that what is flowing around this loop is "control information", and it can ride on any carrier, as I showed yesterday with the person getting a glass of water. The information can travel in liquids, solids, nerve cells, telephone wires, the internet, light rays, etc., and is pretty indifferent as to what it hitches a ride on.

The instruments keep changing, but the song is what matters.
You have to stop focusing on the instruments and listen to the song.
Control System Engineering is about the songs that everything around us is singing. Once we learn to hear them, they're everywhere. Life at every level is dense with them. And they seem to be a little bit aware of each other, because sometimes they get into echoes and harmonies across levels and seem to entrain each other.

It's beautiful to behold. I recommend it!

W.

Monday, June 04, 2007

Controlled by the Blue Gozinta



For those who are following this discussion of feedback loops, we're most of the way through the basic description of the insides of such a loop.

I showed how a microphone and speaker, or getting a glass of water represented kinds of feedback loops, and made a distinction between dumb feedback loops and smart - goal seeking - feedback loops, also known as control loops. And we showed how control loops are everywhere in nature, made up of almost any substance - animal, mineral, vegetable, light, chemicals -- and they don't care because the principles work regardless. Control is to the loop as a song is to the instrument - you can play the "same" song on almost any instrument, or sing it, and the "sameness" is there.

So, I need to give a name to the four parts that I had in the upper left in this picture I drew yesterday:



The basic diagram that Professor Gene Franklin uses in the book "Feedback Control of Dynamic Systems" is similar to that block diagram, except for pulling the "GOAL" out and lumping the three other boxes "comparer", "model", and "decider" into a single blue box that is labelled "?" in his diagram of a car's cruise-control system for maintaining a constant speed.


So, the diagram is from that book, as quoted by me in slide 16 of my Capstone presentation on patient team management of diabetes control. I think you may need to click on the picture to make it zoom up large enough to read the words.



In any case, the only box on that diagram that is blue is the one that the feedback "goes into", so I'm calling it a "blue gozinta" as just a funny name that rhymes and that no one else is using.

Besides, the word "controller" rings all sorts of bells I didn't want to ring, echoing back to parents and school and bosses, etc.

Well, I guess I failed in that already, as I gave the example of "negative feedback" -- a student getting "graded" by a teacher for performance on an "exam", and receiving a failing grade of zero percent, which could be quite discouraging and dampen enthusiasm for the subject.

Franklin's picture has two other minor differences from mine. First, he adds "sensor noise" to the bottom "speedometer" box, to emphasize that this loop is all built around a perception of reality, not reality, and the thing that does the perceiving may not be perfectly accurate. That's a pretty good model of human beings or any other regulatory agent or agency.

As John Gall would say in his book Systemantics -- inside a "system" the perception IS the reality. The medical chart IS the patient.

That effect is so strong that the patient can be dying in the bed while caregivers are so busy looking at the monitors showing something else that they don't see the problem -- which is part of what went on in the tragic Josie King case, where an 18-month-old child slowly died of thirst in the middle of one of the best hospitals in the world. So, yes, we better remember on our diagram that what our senses tell us is going on may be very wrong. We'll come back to that in a big way when discussing how human vision and perception get distorted by all sorts of invisible and insidious pressures - especially in groups with very strong beliefs.

The other difference between Franklin's diagram and mine is on the upper right, where he adds an incoming arrow labelled "road grade". This means the slope of the road, and how hilly it is, not what we think of the road. His point is that the behavior of a car - the speed it ends up going after we have set our goal and put the gas pedal where we think it should be - ALSO depends on factors that are outside the car, such as whether it's going up a steep hill.

That will also be a universal pattern. The results of our actions are mixed into the impact of outside actions, which makes it hard to disentangle the two from just looking at the end result. The good news is that there are software programs that can disentangle those two for us.

Anyway, the whole point of this post is to get the "blue gozinta" identified.

This little blue box is the heart of the problem, because "feedback" is really just information, and is not intrinsically "positive" or "negative". In this diagram, the "feedback" is the speed of the car, as measured by the speedometer. That's just a number.

The number becomes "positive" or "negative", leading to "more gas!" or "more brake!" actions, only because the blue box, the controller, the blue-gozinta, compared that number to the desired speed, and saw that it was less than desired. Then the controller had to check a mental model and use some rule like "if we're going too slow, push on the pedal on the right!"
"If we're going too fast, push on the pedal on the left!'

As anyone who has ever taught someone else to drive knows, that turns out NOT to be the actual rule that drivers use to control the gas pedal. Those rules, and that simplistic model of the world, produce this behavior: holding down the gas until the car shoots past the correct speed, then slamming on the brake until the car drops back below the desired speed, then slamming on the gas again until it overshoots on the way back up, and so on. The car jerks back and forth in an unstable and very unpleasant oscillation forever, if that's the only rule in use.

However, we can probably all think of organizational policies or laws that have exactly that behavior, and are either too harsh or too lenient, or something, and keep on going back and forth and never manage to get the right setting.

It has been hard to recognize those problems and go
  • Hey, I've seen that behavior before!
  • That's a "control loop" behavior.
  • The way to fix it is to change what goes on in the blue gozinta box.
  • What part of the process / law / policy I have corresponds to that box?
  • That's where the problem can be fixed.

It's really important to see that there is nothing wrong with the car. The gas pedal works fine, and does not need to be replaced. The brake pedal works fine. The speedometer (in this case) works fine. What is wrong is inside the blue box, and is subtle - it's the "mental model" or rule that is used to decide what action to take depending on what information is coming into the box from outside.
And, the realization is that a very simple rule, a dumb rule, doesn't accomplish what we want, but a slightly better rule will make the very same parts behave correctly together.
The better rule requires a little more brains inside the box. We have to track more than just how fast we are going and how fast we want to go -- we have to figure out how fast we are converging on the goal, and start letting up on the gas as we get near the target speed, before we even get there.
The controller needs to "plan ahead" or "look ahead" and react to something that hasn't happened yet.
This seems to fly in the face of science and logic. How can a dumb box react to something that hasn't happened yet? We can't afford the "glimpse the future!" add-on module, at $53 trillion.

Ahh, but here's another wonderful property of feedback loops. What goes around comes around. We've been here before. Nothing is new under the sun. The past is a guide to the future.

Either putting out the garbage can causes the garbage trucks to come, or we can learn the routine well enough that we can predict when the trucks will come based on past experience. It turns out, in a loop, the past and future become very blurred together.
Being able to recall the past IS being able to predict the future, in a control loop.
We don't just go around a control loop once or twice -- we go around a control loop thousands or millions of times. So, if we have any rudimentary learning capacity at all, we can start to notice certain patterns keep happening. We can detect what always seems to be happening JUST BEFORE the bad thing happens, and use THAT as the trigger event to react to instead.

So, we have a second rule that gets added by experience -- "When you get near the target goal, start easing up on the pressure to change and start increasing the pressure to stay right there and keep on doing exactly what you're doing."
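In control terms, that second rule is a derivative (rate-of-change) term, and it is enough to tame the oscillation. Here is a sketch with invented constants, comparing the "gap only" rule with one that also watches how fast the gap is closing:

```python
# Toy comparison: reacting only to the gap (kp) versus also reacting to how
# fast the gap is closing (kd, the "look ahead"). Constants are illustrative.

def approach(kp, kd, steps=600, target=60.0, dt=0.05):
    pos, vel = 0.0, 0.0
    prev_err = target          # so the first rate-of-change estimate is zero
    history = []
    for _ in range(steps):
        err = target - pos
        push = kp * err + kd * (err - prev_err) / dt  # kd eases off early
        prev_err = err
        vel += push * dt
        pos += vel * dt
        history.append(pos)
    return history

gap_only = approach(kp=1.0, kd=0.0)        # oscillates around 60 forever
with_lookahead = approach(kp=1.0, kd=1.5)  # settles smoothly near 60
print(round(max(gap_only), 1), round(with_lookahead[-1], 1))
```

Same parts, same gas pedal - the only change is a slightly smarter rule inside the blue box.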

This basic ability to learn from experience is the simplest definition of "intelligence" we can come up with. Do you recall the joke about Sven and Ollie that Garrison Keillor told?

Sven comes by Ollie's house and sees that Ollie has both ears bandaged.
"What happened?" he asks.
"Well", Ollie replies, "I was ironing and the phone rang and I picked up the iron by mistake and held it to my ear!"
"Oh.... So, what happened to your other ear?"
" Ahh.... once I was hurt, I tried to call an ambulance. "
So, the moral of this post is that the key to the behavior of a system being managed by a feedback control loop is the blue box, the "blue gozinta."

Very simple changes to that box can change a horrible experience into a pleasant ride.

The heart of "Control System Engineering" is figuring out what to put in that box.

For human beings, a second major problem is that little tiny addition of "sensor noise", and figuring out how to prevent, reduce, or account for distortions in perception that can cause the system to be responding to a perception, not a reality.

And, for both, there's another very subtle but very well understood problem, and that is "lag time." I didn't draw "lag time" on the picture but I will in the future.

If we're trying to drive based on the speedometer reading from 5 minutes ago, things will not go well for us. In fact, the more we try to "control" things, the worse they can get.

This is a huge problem. A perfectly stable system that is perfectly controllable becomes a nightmare and unstable and can fly out of control just by there being too much of a lag between collecting the sensor data and presenting the picture to the controller.

Or, in hospitals and business, it's popular now to have a "dashboard" that shows indicators for everything, often exactly in "speedometer" type displays.

The problem is, the data shown may be two months old. We are trying to drive the car using yesterday's speedometer reading at this time of day. When I state it that way, the problem is obvious. But, I can't find any references at all in the Hospital Organization and Management literature about the risks caused by lag times in dashboard-based "control".

At this point, even with just this much understanding of control loops, you, dear reader, should be starting to realize how many places around you these loops are being managed incorrectly.

We're spending a huge amount of effort trying to improve the brakes and gas pedals, when the actual problem is a lag time in the messages to upper management, or that sort of problem.

None of these problems need to be in our face. These are all "Sven and Ollie" problems that we can fix with what we know today.

But that will only work if we're really sure about how control loops work, and how they fail, and can make that case to the right people in the right way at the right time.

Take home message -
Even a very basic understanding of control loops can help us ask the right questions, and realize where the problems may be lurking instead of where they appear to be at first glance, so we don't waste our time barking up the wrong tree.

Especially in complex organizations, the generator of failure is usually not that labor failed or management failed, or that any one person did something "wrong." What is killing us now is that we have a huge collection of "system problems" that are due to things like "lag time" and "feedback". Every piece of the system is correct, but the way they behave when connected is broken. There is a "second level" of existence, above the pieces, in the "emergent" world. Things can break THERE. Most of the systems humans built are broken there, or at least seriously in need of an engine mechanic, because we didn't even realize there WAS a THERE.

Worse, "management" still thinks that discussion of "higher level" problems means that someone is pointing the finger at THEM, and that leads to bad responses.

The problems are subtle. We won't see them unless we spend a little time studying how control systems work, and how they fail. Then, the patterns will be much more obvious, and our efforts will be much more likely to be successful. And, then we can stop blaming innocent people for problems that aren't their fault.

It is, however, in my mind, the fault of the whole enterprise of Public Health if this kind of insight is not taken advantage of when designing regulatory interventions or in helping individuals try to "control" behavior. That, in my mind, would be a clear failure of due diligence.

Or - it would be, if these concepts had been published in the peer-reviewed literature that's the only thing they read and pay attention to.

Which says, it's my fault for not publishing this and your fault, dear reader, if you don't get after me to do so.

After all - I depend on feedback from my readers to control my behavior. So, what I do depends on what you do.

Wow, doesn't that sound familiar?