
Monday, October 29, 2007

Central planning in a complex world


If the world is too complex to allow for long range planning, what should central management be spending its time doing?

As all the parts of the world, on many scales, start colliding and interacting, we now find ourselves inside what scientists would call a "complex adaptive system."

In that kind of world, nothing works the way you think it will, and everything has "unintended consequences" or "unforeseen side-effects." So, we might think that long-range central planning is impossible.

As usual, we're both right and wrong, and the situation is, well, "complex" and nuanced, and depends on what you mean by "planning."

Certainly "central planning" as practiced by Stalin in the Soviet Union or Mao in China ran into many unintended side effects, of the kind where millions of people died because the plans didn't seem to relate to reality on the ground.

But, today, with advanced supercomputers and high-speed global communications, now we can do central planning, right? Nope. Before the problem was too little information. We zoomed right past the sweet spot of "just the right amount" of information, and now we're deep into "too much information!" and heading deeper at an ever faster rate.

So, yes, we could deliver the equivalent of a moving van full of 3-inch binders to a small leadership committee every day, and ask them to read that, digest it, and plan based on it -- but I think the problem is obvious. That will simply never work. There is not enough "bandwidth," regardless of how "smart" those people are, even to read that much new information, let alone digest it well enough to grasp the implications in "real-time."

All technology is doing is further swamping the system, and that will never get better.

Actually, it's getting worse, because of the problem I've talked about before: information is "context-sensitive" -- that is, the meaning of some "fact" is really only evident if you understand the context of the observation of that "fact." You can't just snip a fact out of context, slide it over to a central place, and expect it to mean the same thing there that it meant in context.

We all are familiar with this problem, yet, socially, we keep on pretending that it is some sort of local breakdown and that this is not a universal law. The problem is that it is a universal law. Information is not only context dependent -- it gets worse. Information is basically "fractal", like an evergreen where every branch, if looked at by itself, is the same shape as the tree, and each of its branches is the same shape, etc. There is, in other words, an infinite amount of information buried behind every detail, and under every rock, and in every "can of worms."

To try to "consolidate" this information and avoid the "moving van" of binders, each level of management "condenses" the information and "simplifies it." That process, alas, is "lossy", meaning, frankly, it doesn't work most of the time. What gets lost in translation are the key "details" that seem unimportant but that add up to changing the entire conclusion and outcome.
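A concrete, minimal illustration of how "lossy" condensation can flip an entire conclusion is Simpson's paradox. The numbers below are the classic kidney-stone treatment figures commonly used to demonstrate it; the Python sketch itself is mine, just for illustration:

```python
# successes / cases for two treatments, broken out by the "detail" of stone size
a = {"small": (81, 87), "large": (192, 263)}
b = {"small": (234, 270), "large": (55, 80)}

rate = lambda successes, cases: successes / cases

# With the detail intact, treatment A wins in EVERY subgroup...
assert rate(*a["small"]) > rate(*b["small"])   # 93% vs 87%
assert rate(*a["large"]) > rate(*b["large"])   # 73% vs 69%

# ...but "condense" away the stone-size detail, and A appears to lose.
a_total = rate(sum(s for s, n in a.values()), sum(n for s, n in a.values()))
b_total = rate(sum(s for s, n in b.values()), sum(n for s, n in b.values()))
assert a_total < b_total                       # 78% vs 83%
```

The seemingly unimportant detail that got condensed away -- which treatment was given the harder cases -- reverses the conclusion, which is exactly the failure mode of each level "simplifying" the binders on the way up.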

So, this cannot be fixed by having "even smarter" people at the top of this pyramid of information distortion. By the time information gets to the "war room" all the relevant detail has been stripped out by well-meaning intermediaries. And, you can't skip the middle because the volume of detail is too much to handle, again regardless how smart you are.

So, what to do? The only way to deal with this is to realize that the concept of central planning and central "control" is fatally flawed, and to push decision-making outward, delegating it down as close to the point of action as possible, where the local context still makes sense.

So, we find in The Toyota Way, an emphasis on Genchi Genbutsu, or "go down and look for yourself, because whatever they told you is going on left out something important that will change your decision once you see it."

This is not because the people "at the top" are not smart -- it's because "smart" doesn't matter if you were handed the wrong problem to work on, and the wrong facts about it to use.

It is what is known as a "system problem" and it is "structural." It will not go away with better information processing. The details cannot always be ignored. In fact, most of the time the details matter. Information is not "compressible" on the huge scale we're trying to operate on these days.

So, again, what to do? If central planners cannot plan actions, there is still one thing they can do, and that is to plan processes that, when distributed out, will result in coherent and successful action.

(Actually I think it's even one more step removed, and the best they can do is to plan processes that will lead to emergence of local processes that when carried out locally, times a billion, will result in correct and coherent action - even in the total absence of a "central plan." )

This is the problem that Computer Science is dealing with today, under the handle "emergent computing" or "evolutionary computing" or "swarm computing" or some such thing. This is the problem IBM has to solve for the "operating system" for their supercomputer (Blue Gene?) that is really 860,000 computers consulting with each other about what each of them should do next.

So, the literature and research on this topic is buried in Computer Science, where managers and policy makers seldom tread.

The key take-away message, though, is that the problem for today, as viewed by Complex Systems people and Computer Scientists, is how to develop, discover, or evolve processes that lead to processes that lead to coherent adaptive action of the whole swarm.
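As a toy sketch of what "coherent adaptive action of the whole swarm" from purely local processes can look like (the ring network and the averaging rule here are illustrative assumptions of mine, not taken from any particular research program): each node repeatedly nudges itself toward the average of its two neighbors, sees nothing else, and the swarm still converges on the global answer with no central plan.

```python
def local_step(values, neighbors):
    """Each node nudges halfway toward the average of its neighbors -- a purely local rule."""
    return [v + 0.5 * (sum(values[j] for j in neighbors[i]) / len(neighbors[i]) - v)
            for i, v in enumerate(values)]

# a ring of six nodes, each able to see only its two immediate neighbors
neighbors = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
values = [0.0, 10.0, 2.0, 8.0, 4.0, 6.0]      # the global average is 5.0
for _ in range(200):
    values = local_step(values, neighbors)
# every node ends up agreeing on 5.0, though none ever saw the whole picture
```

No node ever holds the "moving van of binders"; the coherent global result emerges from a billion (well, six) local processes, exactly the step-removed kind of planning described above.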

Interestingly, as I understand it, that is largely the central focus of the Baha'i Faith as well: finding what processes lead to the emergence of locally relevant decision-making processes that still combine and work together instead of fragmenting -- so that the whole thing hangs together with central unity, yet has the power of local eyes dealing with local issues, while percolating larger issues upwards and receiving guidance on them downward.

This is the exact same approach the Institute of Medicine has realized is needed to make health care safer, as described in "Crossing the Quality Chasm" -- local teams, which they call "microsystems", have to be recognized and empowered to be self-managing based on real-time local information and feedback -- while, at the same time, still participating in larger-scale coherence that can follow patients and patient care as it crosses from one such team to the next.

And, this is the same focus that Public Health has, as I learned at Johns Hopkins over the last few years. Aid and support for any group, whether teen-smokers in some rich suburb, or indigenous people in some remote country, has to be "culturally relevant" and rooted in local action, or it will suffer "tissue rejection" and be thrown out as soon as the intervention is over.

Central planning can realize there is, say, a problem with malaria that crosses teams, cultures, and nation-state boundaries - but the action has to be locally meaningful and sensible and fit with what else is going on locally, or it cannot work. Solutions cannot be imposed from above, as those that attempt to do so keep on discovering. Too much information is lost at the top.

I think these seemingly disparate groups need to pool their notes and cross-fertilize each other's thinking, because this is all the same problem surfacing in different places, manifesting itself in different worlds.

I guess if no one else is going to do that, or has already, it's time for me to start a "Wiki" so everyone can hang their fragment of knowledge on that framework and we can start to see what it adds up to, and where someone else has already solved that part of the problem.

Wade
(rainbow photo by me, on Flickr)

Saturday, June 16, 2007

Being a robot - 101: The cybernetic loop

I realized that I was just assuming that everyone knew how robots think.
Or for that matter, how babies think when they have to grab something.

We usually think of actions as big chunks, such as "Catch the ball."

Robots have to operate on a much more detailed, step by step level, with everything spelled out for them. Nothing is certain, so everything is just a process of getting a little closer and seeing if anything broke yet. And repeat.

They do this by following a very simple loop, over and over again. Spot where the ball is. Push your hand towards it a little bit. Remember that your hand doesn't always end up where you were trying to push it. Figure out which way the ball is NOW from your hand. Push your hand that way one notch. Figure out again which way the ball is now. Push your hand. Etc.

In a diagram, it would look something like this:

    look --> plan --> act
      ^                |
      +-------<--------+

Congratulations! If you understand that diagram, you are much closer to understanding how anything works. Actually, I think you're one huge step closer to understanding how almost everything works.

There is a cycle of action, looking, planning, action, looking, planning, etc. Over and over.

The "planning" tends to be very short-range, uncomplicated planning - but what it lacks in complexity, it makes up for with speed and persistence and never getting bored.

So here's a very powerful fact about life. Not only does "a journey of a thousand miles begin with a single step," but sometimes the ONLY way to plan that journey is one step at a time.

In fact, a series of small steps is a thousand times more capable than one big step, regardless of how clever you are, and regardless of how well "planned" that one step is. It took computer scientists almost 50 years to figure out that many small computers are actually much better than one large computer for getting work done. It took "artificial intelligence" workers about 30 years to figure out that many small, dumb rules added up to a better way to work than one huge, complicated rule -- and it was easier to write and easier to fix, too.

Why is this? Imagine that you are on one side of a small woods and you want to get to the other side.
It is very likely that there is no direction you can pick to walk in a straight line that won't bump into a tree.
But, if each step can be a slightly different direction, there are thousands of paths you can use to walk through the same forest without running into a tree.

What's the moral? It seems so "obvious" now, but it baffled scientists for 50 years -- a "curved" path is more flexible than a "straight" one. You can get places with a stupid little loop as guidance that no amount of clever planning can get you if you have to move in one step in one straight line.
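The moral can be sketched in a few lines of code (the grid, the tree position, and the sidestep rule are my own toy assumptions, not anyone's published path-planning algorithm): a walker that plans only one step at a time, and sidesteps when the next step would hit a tree, gets past a tree sitting exactly on the straight-line path.

```python
def walk(start, goal, trees, max_steps=100):
    """Greedy one-step-at-a-time walker: plan a tiny step, sidestep trees, repeat."""
    x, y = start
    path = [(x, y)]
    for _ in range(max_steps):
        if (x, y) == goal:
            return path
        # plan: one step straight toward the goal
        dx = (goal[0] > x) - (goal[0] < x)
        dy = (goal[1] > y) - (goal[1] < y)
        nxt = (x + dx, y + dy)
        if nxt in trees:                       # a tree! curve around it
            nxt = (x + dx, y + 1) if (x + dx, y + 1) not in trees else (x, y + 1)
        x, y = nxt
        path.append((x, y))
    return path

# the straight line from (0,5) to (10,5) runs right through the tree at (5,5)
path = walk((0, 5), (10, 5), trees={(5, 5)})
```

The walker never needed a map of the forest; the small curve around the tree emerged from the loop itself.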

This kind of cycle with many tiny steps and a very short pause to think between each step is called a "cybernetic loop". It looks deceptively simple, while it is amazingly powerful.

It can keep on working if the wind is blowing, without having to be reprogrammed. It can keep on working if the ball is rolling on a bumpy hillside. It can keep on working if your robot arm is rusty and doesn't always move as far as it used to when you push it, and sometimes it sticks entirely. This deceptive little loop is all the computer programming required, essentially.
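To make the loop concrete, here is a minimal one-dimensional sketch in Python (the step size, tolerance, and the repeating "rust" pattern are arbitrary assumptions of mine, standing in for an arm that never moves exactly as far as it's told):

```python
from itertools import cycle

def reach(ball, hand, step=1.0, tolerance=0.5, max_cycles=1000):
    """The cybernetic loop: look, plan one notch, push, and repeat."""
    rust = cycle([0.6, 1.0, 1.4])            # stand-in for an arm that never moves as told
    for n in range(max_cycles):
        error = ball - hand                  # look: which way is the ball from the hand?
        if abs(error) < tolerance:           # close enough -- grab it
            return hand, n
        direction = 1 if error > 0 else -1   # plan: just pick a direction, nothing fancier
        hand += direction * step * next(rust)  # act: push one notch (more or less)
    return hand, max_cycles

final_hand, cycles_used = reach(ball=10.0, hand=0.0)  # hand ends within 0.5 of the ball
```

Notice that nothing in the loop computes a trajectory in advance; it just shrinks the remaining error one notch per cycle, which is exactly why rust, wind, and bumps don't break it.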

Now, it will work a little better if the robot has some learning capacity and has done this kind of reaching thing before. The robot may learn that it should reach for where the ball will be, not where it is now.

You learned this so long ago you have forgotten that you learned it. Imagine a baseball game where the batter hits a high, fast ball and the guy in the field runs towards home base instead of towards where the ball looks like it will come down again, because that's "where the ball is now."
So, yes, taking the speed of the ball into account does help. But that's a minor change to the program. The same loop works, except the "planning" step is a little bit longer.

So, this is profound wisdom I'm giving you here. It took all of mankind 50 years to figure this out, and some haven't got the news yet. You get it for free, right here, right now.

So, let me run it by you one more time. Here's the same moral, or same story, in slightly different words:

A plan of action that involves a repeated cycle of very small steps, with some looking and thinking between steps, is much more flexible, and much more "powerful", than trying to "solve" any problem in one huge step.

Furthermore, if the world is complicated, and tends to have hills and bumps and wind gusts and rusty arms, you can be guaranteed that no "single-step" plan will ever succeed. In that case, ONLY a multi-step approach will get you where you want to go. If your job involves "going through the woods" and around trees that you don't even know about yet, it is much easier to plan to go around trees than try to "collect data" on the location of every tree, put it into some huge list or database, print out a map, and find "a straight path" through the forest.

This doesn't say "don't bother planning." It does say, "don't waste your time trying to find a linear solution to a curved path." There are millions of curved paths that can work just fine, in cases, like the woods, where there is no straight path possible.

And, one more time through it, from the Institute of Medicine's perspective, as in dealing with small teams (called "microsystems"). If you are dealing with a "complex, adaptive system" (like a hospital), it is way more powerful to just rig up the team with eyes and a feedback loop than it is to try to have hospital management "plan" how to improve things. Ditto for "The Toyota Way", or the power of "continuous improvement", or what Deming taught, or a "Plan-Do-Check-Act" (PDCA) cycle.

Empowering your front-line employees by giving them "eyes" and a little room to maneuver on their own to get around "trees" is a very powerful strategy that works in practice.

It is based on the most powerful "algorithm" we know of today - the "cybernetic loop."

Oh, yes, one more tiny thing. Since this is such a powerful "algorithm" or "paradigm" or way of doing things, much of Nature and your body already knew about it and uses it.

Public Health is sort of vaguely discovering that the "action" step always needs to be followed with a "reflection" or "assessment" step, but hasn't yet caught on to the fact that it is reinventing the wheel, or more precisely, the cybernetic loop, yet one more time. It hasn't figured out that many smaller steps add up to a more powerful path-generator than one large step.

And, sigh, enterprise budget processes don't reflect this wisdom. For years I fought with the fact that Universities tend to have "annual budget cycles", and enterprise computing is seen as coming in only two flavors: "maintenance" and "huge projects". Maintenance money can only be spent keeping things the same. Huge Project money ("capital budgets") can only be used to take, well, huge steps in a big straight line, and the big straight line, or "project plan" has to be computed up front and committed to before starting.

Well, duh, no wonder that doesn't work. That CANNOT BE MADE TO WORK. There are too many unknowns and unknowables, too many rusty arms, too many trees.

But every time it fails, the "solution" is to plan even LARGER steps next time, with a much BIGGER database that lists every single tree and bush and pothole. THEN, oh boy, you betcha, we'll succeed.

Nope. That's a bad algorithm, a bad paradigm. The cybernetic loop model tells us the answer is way back at the other end: continuous, incremental, small improvement steps. Steps driven by local "feedback" that doesn't even involve upper management.

You can get to places you need to go with a million simultaneous tiny, sensible steps that people can understand that you cannot get to with one huge project, regardless how many billions you spend on "planning" it. Our whole accounting system, meant to help us spend money wisely, is causing us to spend it foolishly.

As the IOM report realizes - "We don't need a billion dollar project -- we need a billion, one-dollar projects." (paraphrased from "Crossing the Quality Chasm"). This isn't "sour grapes" or "some dumb idea" -- this is the most profound wisdom humanity has come up with yet.

It's kind of the Chinese approach. If every person picks up one piece of trash a day, it's way more successful than if every person sends $1000 per year into a central location where we build the Institute of Trash Pickup and study the trash-pickup problem and produce endless reports and finally some huge trash collection system that doesn't really work but is really expensive to maintain when they're not on strike (thank you, John Gall, for that insight.)

Ditto for installation of some kind of automated physician order entry system or other massive cultural change in the way things are done. It may seem "hard" to figure out what huge new system, in one step, will get us from point A to point B. Hmmm. Maybe that's because there aren't any "one-step" solutions to getting through the forest, and we need to reconsider our approach. Maybe a million tiny adjustments will solve two problems at once: the "What do we do?" problem, and the ever-popular "How do we implement it?" problem.

Ten thousand tiny search engines (people) each looking for one tiny step that is possible and totally understood that would help "a little bit" actually constitutes a "massively parallel supercomputer" that can outstrip almost any other way of "solving" BOTH of those problems simultaneously. That's really cool, because it turns out not to matter how great a solution is on paper or at some other site, if there's no way to get it implemented here without spilling the coffee and crashing the bus. That's the lesson Toyota learned. Forget central planning, which the Soviet Union demonstrated doesn't work. Empower the troops to use their eyes and brains and good judgement and make a million adjustments of 0.001 percent size.

It's an incredibly powerful algorithm. It doesn't require brilliant central planning officers. But it does require believing that the ground troops have enough brains to carry their coffee across the office without spilling it, even if they just waxed the floor. Turns out, according to Toyota, that's probably true.

Oh, yes, I almost forgot. It would seem to make sense that, if this cybernetic doodad is so powerful, that it is in operation already in billions of places around us in society and biology. That would argue that it might be worthwhile to have cybernetic doodad detectors, and cybernetic doodad statistical tools available to use to spot and describe and tweak such thingies.

Most of the last 6 months' postings to this weblog have tried to make that argument, in more complex ways, and maybe that's my problem.

The American Indians knew this - that the Great Spirit worked in circles, not lines. Taoism knows about circles and cycles. "Systems thinking" involves accepting that there are important places where feedback loops just might possibly be involved.

We're so close now. Bring it home, baby!

(Posted in memory of Don Herbert, "Mr. Wizard", who died last week, and taught millions of kids, including me, basic science-made-easy on his TV show.)

Tuesday, May 08, 2007

The hierarchy of life and implications for interventions

Apparently, we don't exist.

Every day more studies come out showing something that we'd suspected all along - namely, we actually have very little control over our own lives and even over our own decisions.

The people around us and our neighborhoods, at work and at home, are increasingly seen as the main cause of our beliefs, our decisions, and our actions.

Well, that just messes up everything, thank you. Our whole system of justice, and education, and rewards at work, and "the American way" are all based on the concept of rugged individualism, on one dominant person surrounded by a sea of "environment", making decisions, navigating the shoals of life, and deserving rich rewards for success or punishment for "being bad."

But that concept doesn't seem to survive the light of day, or a careful look at the evidence. And much of the evidence lately is coming from public health, including studies of the "health" of the "healthcare system" itself.

A very "robust" finding of the field of "social epidemiology" is that the physical health of a person seems to be very strongly associated with his or her "connectedness" with the tissue of society around them. The more someone is connected to the social fabric, the healthier they will generally be. The more someone disconnects and drops out of social interactions, the worse off they will tend to be, across the board, in terms of almost every measure of morbidity and mortality. They'll be more depressed, more fatigued, less successful, less wealthy, and more likely to be obese or to have diabetes, heart disease, asthma, the flu, common colds, etc.

But, does disconnection cause disease, or does disease cause disconnection?

The answer is "yes" to both, because this is not a linear chain of causality, but a causal loop. That means it can spiral downwards or upwards.

That's familiar. The more a person becomes depressed, the more likely they are to fail to cope, to get into trouble at work and home, and to worsen their situation at work and home. And, the worse their situation becomes, the more depressed they become. It's a "vicious cycle."
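To see why a causal loop spirals rather than settling, here is a toy simulation (the variable names, the 0.1 gain, and the 0.5 tipping point are invented purely for illustration -- nothing here is fitted to real epidemiological data):

```python
def spiral(connectedness, health, gain=0.1, steps=50):
    """A two-variable causal loop: each quantity feeds back into the other."""
    for _ in range(steps):
        connectedness += gain * (health - 0.5)  # healthier people stay more connected
        health += gain * (connectedness - 0.5)  # more-connected people stay healthier
    return connectedness, health

up = spiral(0.6, 0.6)     # start slightly above the tipping point: compounds upward
down = spiral(0.4, 0.4)   # start slightly below it: compounds downward
```

Start both variables a little above the tipping point and the loop compounds upward; start a little below and it compounds downward. A one-way causal chain can't do that; a loop can.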

The ultimate end of that death spiral is, in fact, death. There is complete disconnection and isolation, total dropping out, followed by catching the next excuse to die, from natural disease or neglect or violence, or violence against others ("suicide by cop"). Just as a human cell, removed from the body, will lose the will to live and commit suicide ("apoptosis"), humans, disconnected from the social body, lose the will to live, and find a way to die.

This is a real bummer in several ways. One unexpected way is that almost all research studies are based on statistics developed by a guy (Sir R. A. Fisher) studying crop yields, where the causality only goes one way. The crops do not realize they aren't growing and make midnight raids on the fertilizer shed. People, however, do. In fact, almost everything people do, individually or in groups, is drenched in and dominated by feedback loops. And feedback loops invalidate classical statistics, which is built on lines, not circles (it's called the "General Linear Model" for a reason). So, it's hard to study. So, people don't study it and go study something else.

Of course, there are tools that can easily handle such loops, including electronic circuit design or "system dynamics" or "feedback control system engineering." But those are almost unknown in public health so don't hold your breath.

Despite that, the evidence just leaps off the page. "Health Program Planning - An Educational and Ecological Approach" (4th ed.) by Green and Kreuter, which describes the most successful interventions in health care, apologizes for abandoning classical models on page 3, with the comment that

"Ecological approaches, however have proven difficult to evaluate because the units of analysis do not lend themselves to the random assignment, experimental control, and manipulation characteristic of preferred scientific approaches to establishing causation."
Which is a long way of saying that the old set of "linear" tools and linear thinking really doesn't work, if you try to apply it to the real world that people, not billiard balls, deal with daily - a world dominated by feedback.

But, all is not lost. Even despite that, the healers of the healers, the designers of the health care system itself, have studied their own problem and concluded that the right unit of intervention is the small team on the front lines, which they call a "microsystem." In between the one doctor who is hard to change, and the hospital, which is hard to change, is the small practice team, which, fascinatingly, the Institute of Medicine has found easy to change.
(See Crossing the Quality Chasm.)

And, ta da!, big surprise, the recommended method of changing that unit of life, the small team, turns out to be "feedback." Well, of course it's feedback - that much becomes obvious once you shift lenses and realize that everything, at every scale, is more defined by what's outside it than what's inside it. (Mach's principle in cosmology.)

So, a single doctor or staff member can't really be changed by an intervention, because their behavior isn't really "theirs" -- it is a feedback property of the small team they work with. So, if a doctor or nurse "makes a mistake", it usually turns out that the place to fix isn't the individual, it's the larger structural team around them that effectively forced them to make the mistake. The system buys the gun, loads the gun, cocks the gun, hands it to the person on the front line who pulls the trigger.

And, on the flip side, there is no such thing as "the patient." Patients are people, and people come with a posse, an aura, their own small team of friends and family that mutually influence each other. So, ta da!, if you want to change how "a patient" behaves, or go a step further upstream and change what they believe, you have to address how the patient's "microsystem" behaves. The IOM didn't make that leap, but the rest of health education has realized that "family-centered" interventions are way more effective than "patient" interventions.

Of course, this really only changes the geographic and time scale, something the IOM hasn't yet realized.
This property of being defined by the outside peers is not restricted to cells or to people - it's a universal property of living things or any regulatory control system.
So, it's "scale invariant". That means if we flip to the next lens on our microscope and stand back another hundred yards, now we see the unit we are messing with is "the microsystem" -- but it is swimming in a sea of other "microsystems", and is ultimately dominated by those other microsystems as a peer group. Now, the time constant is much longer, so it may take months, not days, but simply changing one small team and leaving its environment unchanged will sooner or later result in the change being undone, rejected like foreign tissue, and discarded by the larger living tissue of the body of the health care system. People will revert in hours. Clinical services may take months or years to revert, once the intervention pressure is released.

Man, how far does this thing go? Well, according to many people such as myself or Ken Wilber, it just keeps on going upwards. Wilber refers to each unit in these structural ladders of the hierarchy of life as a "holon." Norm Anderson, when at the NIH, referred to the same hierarchy from cells to tissues to organs to people to groups to neighborhoods to populations -- but nobody really wanted to hear that, so Norm left. The tissue rejected the novel idea.

Well, that math just gets impossible then, doesn't it? Not really, it just rotates. Large, tall, hierarchical structures have their own basic modes, as does anything else. There are almost certainly solutions that can be found, or descriptions, based on combinations of scale-invariant (symmetric) properties as basis vectors. And one such scale-invariant property is the concept of a regulatory feedback loop. At every level of this nested hierarchy, exactly the same problem has to be solved - how to maintain the equivalent of homeostasis in a sea of change. Cells do it. The pancreas does it. The endocrine system does it. The body does it. People do it. Small teams (microsystems) do it. Hospitals do it. Health care chains do it. Whole cultures do it. Nations do it. They're all doing the same abstract dance, of seeking to reestablish their own feedback loop that works for them.

So it's kind of a fractal, a Christmas tree shape, where each branch is the same shape as the tree itself. The question is, what are the fundamental modes of vibration of such thingies? If it were made of steel and you plucked a branch, what would it sound like? (There would surely be harmonics of harmonics of harmonics.)

And, do such things have "resonant frequencies"? Is there some speed of change that will work far better than other speeds, or one that is far easier to "fall into" because it "aligns" with the larger resonance of the larger system around it?

Those are the interesting questions. In the short run, we have some immediate insights that don't need years of theoretical simulation and wisdom, based on this model or framework or lens, whatever you call it.

Here's a few:

1) To change a person, you have to change their peer group. They can move to a different peer group, or the peer group itself can be altered, but it has to happen.

2) etcetera. That is, you can't change that peer group, stably, without clicking up one more rung of the ladder, using a new power lens, and finding the peer-group's peer group.

3) Therefore, either you have a cascading, exponentially growing evangelical type of change, or you have a diminishing, exponentially decreasing, tissue-rejection kind of change. There is no such thing as a stable change of one "unit" at any scale. Life doesn't support constants, only growth or decay.

4) Our whole system of justice, education, rewards, and punishments is based on a flawed model of the world. That's all going to have to be rethought. All this emphasis on individual education has already run into the increasing emphasis on "teamwork" and "groupwork" and a realization that the unit of research, of discovery, of industrial production, of making or preventing errors is not a person, but a "Microsystem", a team, a cockpit crew, an operating room team, etc.

5) We're going to have to "bite the bullet" and start using the right tools to address these problems. They don't fit into the general linear model. All linear statistics break down and all linear thinking leads to erroneous intuition.

6) Collaborative IT systems are feedback loop generators, not huge replications of a single human-machine interaction. The "electronic health record", viewed this way, is part of the feedback loops that a patient uses to control their own life, or a doctor uses to manage care for the patient, each side also calling on their own "microsystem" team to support this activity. Such systems cannot be evaluated or tested as if they were an Excel spreadsheet with a graphical user interface -- the human factors are feedback loops that can't possibly even show up in single-user testing. The system will be made or broken by how the larger social fabric changes feedback loops when the system is put in place. That won't be revealed by the current CCHIT test suite.

7) This model would say that the right thing to be tracking for hospital administrators would be microsystems and teams, more so than individuals. The "dashboards" should reveal whether the microsystems are working, and, moreover, the people who need the dashboard aren't just the management outside the team, which is post-hoc, but the team members themselves, for real-time self-management, steering, and navigation. (That's straight out of the IOM's Crossing the Quality Chasm.)

8) Ditto for patients. This model would say that patient teams need their own Personal Health Record as part of a real-time feedback self-management model, that the doctors or clinical staff are only a very small remote second-order part of, for chronic disease management that involves life-style changes.

9) And, ultimately, this model points ever upwards. It says that people cannot be healthy unless their peer group is healthy, and that cannot be healthy unless its peer group is healthy, and, ultimately, all this depends on the national culture and planetary population being healthy.
So, yes, not only are you your brother's keeper, but your brother is, in many real ways, your keeper.
10) The "public" that "public health" must be concerned with (among others) is actually a fractal, nested, hierarchical part of the hierarchy of life. This cannot be made to "go away."
We need to "go to the mountain." Predictions as to the value of interventions in the behavior of a part of that hierarchy, on some level -- whether cellular drugs, pancreas care, or health-system regulations -- have to take into account that the parts are connected and will determine each other's behavior through feedback responses to interventional pressures.

It doesn't make sense to say "we put in a good system but the culture rejected it." The word "good" needs to be defined with respect to the whole hierarchy of life, including culture. If the system is "good" in that metric, then the culture will, almost by definition, not reject it.

Well, that's pretty pedantic, and maybe you have a different view or some contrary evidence. I'd love to hear it. Let's have a good debate! See that "comment box" down there? Please use it and tell me whether you think I'm right, wrong, or need to increase my meds! Or email me. My email is in my profile.

Wade