Showing posts with label microsystems. Show all posts

Monday, October 29, 2007

Central planning in a complex world


If the world is too complex to allow for long range planning, what should central management be spending its time doing?

As all the parts of the world, on many scales, start colliding and interacting, we now find ourselves inside what scientists would call a "complex adaptive system."

In that kind of world, nothing works the way you think it will, and everything has "unintended consequences" or "unforeseen side-effects." So, we might think that long-range central planning is impossible.

As usual, we're both right and wrong, and the situation is, well, "complex" and nuanced, and depends on what you mean by "planning."

Certainly "central planning" as practiced by Stalin in the Soviet Union or Mao in China ran into many unintended side effects, of the kind where millions of people died because the plans didn't seem to relate to reality on the ground.

But, today, with advanced supercomputers and high-speed global communications, now we can do central planning, right? Nope. Before, the problem was too little information. We zoomed right past the sweet spot of "just the right amount" of information, and now we're deep into "too much information!" and heading deeper at an ever faster rate.

So, yes, we could deliver the equivalent of a moving van full of 3-inch binders to a small leadership committee every day, and ask them to read that, digest it, and plan based on it -- but I think the problem is obvious. That will simply never work. There is not enough "bandwidth," regardless of how "smart" those people are, even to read that much new information, let alone digest it well enough to grasp the implications in "real-time."

All technology is doing is further swamping the system, and that will never get better.

Actually, it's getting worse, because of the problem I've talked about before that information is "context-sensitive" -- that is, the meaning of some "fact" is really only evident if you understand the context of the observation of that "fact." You can't just snip a fact out of context, slide it over to a central place, and expect it to mean the same thing there that it meant in context.

We all are familiar with this problem, yet, socially, we keep on pretending that it is some sort of local breakdown and that this is not a universal law. The problem is that it is a universal law. Information is not only context dependent -- it gets worse. Information is basically "fractal", like an evergreen where every branch, if looked at by itself, is the same shape as the tree, and each of its branches is the same shape, etc. There is, in other words, an infinite amount of information buried behind every detail, and under every rock, and in every "can of worms."

To try to "consolidate" this information and avoid the "moving van" of binders, each level of management "condenses" the information and "simplifies it." That process, alas, is "lossy", meaning, frankly, it doesn't work most of the time. What gets lost in translation are the key "details" that seem unimportant but that add up to changing the entire conclusion and outcome.
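The lossiness of that condensation is easy to demonstrate. Here is a minimal sketch (the numbers and layer names are made up for illustration) in which each level reports only an average, and the one detail that should drive the decision vanishes on the way up:

```python
# Hypothetical sketch: why "condensing" information up a hierarchy is lossy.
# Each management layer reports only the average of what it sees, so the
# critical outlier (one failing unit) is gone by the time it reaches the top.

def summarize(values):
    """One layer's report: the mean of its inputs, details discarded."""
    return sum(values) / len(values)

# Ground truth: nine healthy units and one unit in serious trouble.
frontline = [0.95] * 9 + [0.05]

# Three layers of consolidation, each seeing only the layer below's summary.
level_b = summarize(frontline)   # ~0.86
level_c = summarize([level_b])   # same number; the detail is already gone
level_d = summarize([level_c])

print(min(frontline))      # 0.05 -- the detail that should drive the decision
print(round(level_d, 2))   # 0.86 -- the "war room" sees a healthy average
```

No amount of cleverness at level D can recover the 0.05; the information was destroyed at the first summarization step.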

So, this cannot be fixed by having "even smarter" people at the top of this pyramid of information distortion. By the time information gets to the "war room" all the relevant detail has been stripped out by well-meaning intermediaries. And, you can't skip the middle because the volume of detail is too much to handle, again regardless of how smart you are.

So, what to do? The only way to deal with this is to realize that the concept of central planning and central "control" is fatally flawed, and to push decision-making outward, delegating it down as close to the point of action as possible, where the local context still makes sense.

So, we find in The Toyota Way, an emphasis on Genchi Genbutsu, or "go down and look for yourself, because whatever they told you is going on left out something important that will change your decision once you see it."

This is not because the people "at the top" are not smart -- it's because "smart" doesn't matter if you were handed the wrong problem to work on, and the wrong facts about it to use.

It is what is known as a "system problem" and it is "structural." It will not go away with better information processing. The details cannot always be ignored. In fact, most of the time the details matter. Information is not "compressible" on the huge scale we're trying to operate on these days.

So, again, what to do? If central planners cannot plan actions, there is still one thing they can do, and that is to plan processes that, when distributed out, will result in coherent and successful action.

(Actually I think it's even one more step removed, and the best they can do is to plan processes that will lead to emergence of local processes that when carried out locally, times a billion, will result in correct and coherent action - even in the total absence of a "central plan." )

This is the problem that Computer Science is dealing with today, under the handle "emergent computing" or "evolutionary computing" or "swarm computing" or some such thing. This is the problem IBM has to solve for the "operating system" for their supercomputer (Big Blue?) that is really 860,000 computers consulting with each other about what each of them should do next.
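A toy illustration of the "swarm" idea: in the sketch below (purely illustrative; real emergent computing is far richer), no node ever sees the global picture, each one just repeatedly averages with its immediate neighbors, yet the whole swarm converges on a single coherent value with no central coordinator:

```python
# Toy "swarm computing": nodes on a ring, each consulting only its two
# neighbors. Many tiny local steps, no central plan -- and the whole
# swarm still settles on one coherent shared value.

def consensus_step(values):
    """Each node replaces its value with the average of itself and its neighbors."""
    n = len(values)
    return [
        (values[(i - 1) % n] + values[i] + values[(i + 1) % n]) / 3
        for i in range(n)
    ]

swarm = [0.0, 10.0, 2.0, 8.0, 5.0, 1.0, 9.0, 4.0]
for _ in range(200):           # many tiny local steps
    swarm = consensus_step(swarm)

# Every node ends up at the global average (4.875), found purely locally.
print([round(v, 3) for v in swarm])
```

The design point is that the "answer" (the global average) was never computed anywhere; it emerged from a local rule applied everywhere.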

So, the literature and research on this topic is buried in Computer Science, where managers and policy makers seldom tread.

The key take-away message, though, is that the problem for today, as viewed by Complex Systems people and Computer Scientists, is how to develop, discover, or evolve processes that lead to processes that lead to coherent adaptive action of the whole swarm.

Interestingly, as I understand it, that is also largely the central focus of the Baha'i Faith, which concentrates on finding the processes that lead to the emergence of locally relevant decision-making processes that still combine and work together instead of fragmenting. The whole thing hangs together with central unity, and yet has the power of local eyes dealing with local issues, while percolating larger issues upward and receiving guidance on them downward.

This is the exact same focus that the Institute of Medicine has realized is needed to make health care safer, as described in "Crossing the Quality Chasm" -- local teams, which they call "microsystems," have to be recognized and empowered to be self-managing based on real-time local information and feedback, while still participating in a larger-scale coherence that can follow patients and patient care as they cross from one such team to the next.

And, this is the same focus that Public Health has, as I learned at Johns Hopkins over the last few years. Aid and support for any group, whether teen-smokers in some rich suburb, or indigenous people in some remote country, has to be "culturally relevant" and rooted in local action, or it will suffer "tissue rejection" and be thrown out as soon as the intervention is over.

Central planning can realize there is, say, a problem with malaria that crosses teams, cultures, and nation-state boundaries - but the action has to be locally meaningful and sensible and fit with what else is going on locally, or it cannot work. Solutions cannot be imposed from above, as those that attempt to do so keep on discovering. Too much information is lost at the top.

I think these seemingly disparate groups need to pool their notes and cross-fertilize each other's thinking, because this is all the same problem surfacing in different places, manifesting itself in different worlds.

I guess if no one else is going to do that, or has already, it's time for me to start a "Wiki" so everyone can hang their fragment of knowledge on that framework and we can start to see what it adds up to, and where someone else has already solved that part of the problem.

Wade
(rainbow photo by me, on Flickr)

Monday, September 17, 2007

Small team feedback control in health care

  • (This is a rewrite of a prior post to make it more helpful).

    Thoughts on the IOM and feedback to small teams (“microsystems”)
    General “white paper”
    R. Wade Schuette
    5/4/07 (original post)

    [ some sections of my original document were not relevant and were removed, and I added some updated links.]

    So, where does the IOM refer to this? Searching the full text of the IOM report doesn't even hit that word. We have to start with the main author's after-thought (reformatted for clarity below):

    "A User's Manual for the IOM's 'Quality Chasm' Report"
    by Donald M. Berwick, Health Affairs, Vol. 21, No. 3, May/June 2002, pp. 80-90
    http://content.healthaffairs.org/cgi/reprint/21/3/80.pdf


    ABSTRACT: Fifteen months after releasing its report on patient safety (To Err Is Human), the Institute of Medicine released Crossing the Quality Chasm. Although less sensational than the patient safety report, the Quality Chasm report is more comprehensive and, in the long run, more important. It calls for improvements in six dimensions of health care performance: safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity; and it asserts that those improvements cannot be achieved within the constraints of the existing system of care. It provides a rationale and a framework for the redesign of the U.S. health care system at four levels: patients’ experiences; the “microsystems” that actually give care; the organizations that house and support microsystems; and the environment of laws, rules, payment, accreditation, and professional training that shape organizational action.

    From the "Prologue" to the article:

    One of the architects of the [IOM] report, Donald Berwick, decided that it would be worthwhile to condense the message into a “user’s manual” for interested readers in the United States and abroad. In this paper he synthesizes the report’s structural themes and presents them, executive summary–style, as a framework that did not appear in the final report but was the basis for the months of discussion that led up to the report’s writing and dissemination. This framework comprises four levels of interest:

    the experience of patients (Level A),
    the functioning of small units of care delivery (or “microsystems”) (Level B);
    the functioning of the organizations that house or otherwise support microsystems (Level C);
    and the environment of policy, payment, regulation, accreditation, and other such factors (Level D) that shape the behavior, interests, and opportunities of the organizations at Level C ...

    As the author of more than 100 peer-reviewed papers in numerous journals, Berwick was ideal for the task. A pediatrician by training, Berwick is chief executive officer of the Institute for Healthcare Improvement (IHI).

    So we can see here a four-level model of patient care with a very surprising twist -- namely, it seems to have skipped over the doctor, going from the patient right up to the whole small team that includes the doctor(s), nurses, and other staff who collectively deliver care within that clinic or unit.

    This gap is no oversight. It reflects some very profound hypotheses:

    1) when caught up in an institutional environment, the boundaries of individuals blur, because doctors behave differently than they would in solo practice. Their behavior is as much a function of the team they are in as it is of their own "self"; and

    2) if we want to intervene in this 4-level health care system to improve things, the place we should intervene is at the small team level, not at the level of the individual doctor.
  • The first concept is an inevitable consequence of putting together groups of any kind of actor that is aware of and sensitive to its environment, in a social setting where collective action is the norm. It shows up in primates, where there is a rule that "There is no such thing as one chimpanzee," because the behavior of the "one," when isolated in a room, is so different than when the "one" is in social context.
  • This phenomenon shows up among interacting robots, or interacting electronic components in some device. This is a "systems" concept, and as primal as any physical law, such as conservation of energy or conservation of momentum. The second concept, then -- that this is the place to intervene -- follows from the first. Again, experience robustly supports this in public health, where trying to change the behavior of "an individual" while not changing their peer group or family has proven to be extremely difficult, and the trend is dramatically shifting to "family-centered" interventions.
  • But, this is not just a theoretical model. Experience in the field shows that this does in fact appear to be universally true in institutional health care, and that interventions at the team level are, in fact, dramatically successful. This document discusses 20 different health systems in which this was found to be true:
  • "Microsystems in Health Care: Executive Summary for Health Care Leaders" -- Robert Wood Johnson Foundation / Dartmouth
  • http://www.dartmouth.edu/~cecs/hcild/downloads/RWJ_MS_Exec_Summary.pdf

    OK, so then the question becomes “What sort of "Intervention" is necessary to improve the performance and behavior of this team level entity and produce safer care in a more cost-effective manner?”

    The surprising answer given by the IOM is that very little intervention is needed.

    In fact, the primary intervention required is simply to provide the team sufficient real-time feedback of how they are doing, and trust them to respond to it appropriately, without any further management intervention. This is a mix of "Theory Y" of management, and Deming's models of the behavior of employees, who, he asserted, given the tools to do their jobs, would do them.
    (But note that the team remains within the context of a larger health system, and that is important too.) Here's a detailed but readable discussion of how that feedback can work:
  • Powerpoint: http://www.dhs.ca.gov/pcfh/cms/nqi/ppt/MicrosystemsHlthCare.ppt
    and
    Microsystems in Health Care
    http://www.clinicalmicrosystem.org/publications.htm
    Joint Commission Journal of Quality and Safety
  • So, what does this tell us about the role of Information Technology (IT) within a health system? It seems to me that this clearly indicates a crucial role for the real-time capturing of outcomes and visible feedback to the team, as well as a crucial role for interactive collaboration tools between the team members.

    This is IT at the microsystem level, and is almost entirely absent in many health systems, in which IT is considered the exclusive province of levels C and D - the enterprise and national statistics. This recommendation of the IOM focuses on an area that is referred to as "technology-mediated collaboration” by the University of Michigan School of Information’s program in just such an area.
    (see that program here: http://www.si.umich.edu/research/area.htm?AreaID=3 )

    Note that a fully-integrated national health care system would actually provide the necessary IT support for all four levels -- A, B, C, and D -- in a coherent fashion.

    In conclusion, the national health information infrastructure model, as perceived by the IOM, really includes providing real-time self-management tools as the crucial, key IT support to small teams of caregivers, whether the caregivers are "providers" in a hospital, or patients and their friends and family.

    This needs to be more central to the discussion of IT in a health-care environment, and it is a very different subject than simply automating medical records -- it is empowering small-team collaboration.
  • That, according to the IOM, is where we need to focus our energies.
    The realization behind this is very simply that we have good people who will figure out on their own how to do good things if they simply have the tools to see the impact of what they are doing, in as close to real-time as possible.
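That feedback idea can be sketched as a tiny control loop. In the hypothetical example below (the clinic model, numbers, and target are all invented for illustration), the team watches one live metric and makes its own small staffing adjustments, with no management layer anywhere in the loop:

```python
# Hypothetical sketch of team-level feedback: the team sees one live
# number (say, average wait time) and nudges its own process, one small
# adjustment at a time. All names and numbers here are made up.

def run_clinic(staff_on_duty):
    """Toy model: more staff on duty means shorter average waits."""
    return 120.0 / staff_on_duty   # average wait, in minutes

target_wait = 20.0
staff = 2

for week in range(20):
    wait = run_clinic(staff)              # real-time local feedback
    if wait > target_wait:
        staff += 1                        # team adjusts itself, one notch
    elif wait < target_wait - 5 and staff > 1:
        staff -= 1                        # free up staff when over-provisioned

print(staff, run_clinic(staff))   # settles at 6 staff, 20-minute waits
```

The point of the sketch is where the decision lives: the rule is trivial, but because the team sees its own metric in real time, it converges on the target without anyone above level B intervening.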
      Saturday, June 16, 2007

      Being a robot - 101: The cybernetic loop

      I realized that I was just assuming that everyone knew how robots think.
      Or for that matter, how babies think when they have to grab something.

      We usually think of actions as big chunks, such as "Catch the ball."

      Robots have to operate on a much more detailed, step by step level, with everything spelled out for them. Nothing is certain, so everything is just a process of getting a little closer and seeing if anything broke yet. And repeat.

      They do this by following a very simple loop, over and over again. Spot where the ball is. Push your hand towards it a little bit. Remember that your hand doesn't always end up where you were trying to push it. Figure out which way the ball is NOW from your hand. Push your hand that way one notch. Figure out again which way the ball is now. Push your hand. Etc.
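That loop is short enough to write down directly. Here is a minimal sketch (the step size and "rusty arm" noise range are made-up illustration values) of a hand homing in on a ball by nothing but look-push-look:

```python
# A minimal sketch of the loop described above: look, nudge, look again.
# The "arm" is rusty -- each push lands somewhere a bit off from where it
# was aimed -- but the loop still homes in on the ball with no long-range plan.
import random

random.seed(1)   # fixed seed so the run is repeatable

hand, ball = 0.0, 10.0
for _ in range(100):
    error = ball - hand                      # look: which way is the ball NOW?
    step = 0.3 * error                       # plan: one small push toward it
    hand += step * random.uniform(0.5, 1.5)  # act: rusty arm, imperfect move

print(abs(ball - hand) < 0.01)   # True: close enough to grab
```

Notice that nothing in the loop models the rust: the error measurement each time around absorbs whatever the last imperfect push actually did.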

      In a diagram, it would look something like this: a loop of arrows cycling from "look" to "plan" to "act" and back to "look," around and around.
      Congratulations! If you understand that diagram, you are much closer to understanding how anything works. Actually, I think you're one huge step closer to understanding how almost everything works.

      There is a cycle of action, looking, planning, action, looking, planning, etc. Over and over.

      The "planning" tends to be very short-range, uncomplicated planning - but what it lacks in complexity, it makes up for with speed and persistence and never getting bored.

      So here's a very powerful fact about life. Not only does "a journey of 1000 leagues start with one step", but sometimes the ONLY way to plan that journey is one step at a time.

      In fact, a series of small steps is a thousand times more capable than one big step, regardless of how clever you are, and regardless of how well "planned" that one step is. It took computer scientists almost 50 years to figure out that many small computers are actually much better than one large computer for getting work done. It took "artificial intelligence" workers about 30 years to figure out that many small, dumb rules added up to a better way to work than one huge, complicated rule -- and it was easier to write and easier to fix too.

      Why is this? Imagine that you are on one side of a small woods and you want to get to the other side.
      It is very likely that there is no direction you can pick to walk in a straight line that won't bump into a tree.
      But, if each step can be a slightly different direction, there are thousands of paths you can use to walk through the same forest without running into a tree.

      What's the moral? It seems so "obvious" now, but it baffled scientists for 50 years -- a "curved" path is more flexible than a "straight" one. You can get places with a stupid little loop as guidance that no amount of clever planning can get you if you have to move in one step in one straight line.
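The woods example can be sketched the same way. In the toy grid below (the tree positions are invented for illustration), the walker takes one small step at a time, sidestepping whenever the next step would hit a tree, and never builds a map of the forest:

```python
# A sketch of the "curved path" point: head toward the goal one small step
# at a time, sidestepping locally around any tree in the way. No global map
# of the forest is ever built; the loop just reacts to what's in front of it.

trees = {(3, 0), (5, 0), (7, 0)}   # trees sitting right on the straight line

def walk(start, goal):
    x, y = start
    path = [(x, y)]
    while (x, y) != goal:
        nx = x + 1 if x < goal[0] else x                      # drift toward goal x
        ny = y + (1 if y < goal[1] else -1 if y > goal[1] else 0)  # and goal y
        if (nx, ny) in trees:
            ny += 1                 # would bump a tree: sidestep instead
        x, y = nx, ny
        path.append((x, y))
    return path

path = walk((0, 0), (10, 0))
print(path[-1])                       # (10, 0): goal reached
print(any(p in trees for p in path))  # False: never hit a tree
```

A straight-line plan from (0, 0) to (10, 0) is impossible here; the curved path the loop discovers was never planned anywhere.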

      This kind of cycle with many tiny steps and a very short pause to think between each step is called a "cybernetic loop". It looks deceptively simple, while it is amazingly powerful.

      It can keep on working if the wind is blowing, without having to be reprogrammed. It can keep on working if the ball is rolling on a bumpy hillside. It can keep on working if your robot arm is rusty and doesn't always move as far as it used to when you push it, and sometimes it sticks entirely. This deceptive little loop is all the computer programming required, essentially.

      Now, it will work a little better if the robot has some learning capacity and has done this kind of reaching thing before. The robot may learn that it should reach for where the ball will be, not where it is now.

      You learned this so long ago you have forgotten that you learned it. Imagine a baseball game where the batter hits a high, fast ball and the guy in the field runs towards home plate instead of towards where the ball looks like it will come down again, because that's "where the ball is now."
      So, yes, taking the speed of the ball into account does help. But that's a minor change to the program. The same loop works, except the "planning" step is a little bit longer.

      So, this is profound wisdom I'm giving you here. It took all of mankind 50 years to figure this out, and some haven't got the news yet. You get it for free, right here, right now.

      So, let me run it by you one more time. Here's the same moral, or same story, in slightly different words:

      A plan of action that involves a repeated cycle of very small steps, with some looking and thinking between steps, is much more flexible, and much more "powerful," than trying to "solve" any problem in one huge step.

      Furthermore, if the world is complicated, and tends to have hills and bumps and wind gusts and rusty arms, you can be guaranteed that no "single-step" plan will ever succeed. In that case, ONLY a multi-step approach will get you where you want to go. If your job involves "going through the woods" and around trees that you don't even know about yet, it is much easier to plan to go around trees than try to "collect data" on the location of every tree, put it into some huge list or database, print out a map, and find "a straight path" through the forest.

      This doesn't say "don't bother planning." It does say, "don't waste your time trying to find a linear solution to a curved path." There are millions of curved paths that can work just fine, in cases, like the woods, where there is no straight path possible.

      And, one more time through it, from the Institute of Medicine's perspective, as in dealing with small teams (called "microsystems"). If you are dealing with a "complex, adaptive system" (like a hospital), it is way more powerful to just rig up the team with eyes and a feedback loop than it is to try to have hospital management "plan" how to improve things. Ditto for "The Toyota Way," or the power of "continuous improvement," or what Deming taught, or a "plan-do-check-act (PDCA) cycle."

      Empowering your front-line employees by giving them "eyes" and a little room to maneuver on their own to get around "trees" is a very powerful strategy that works in practice.

      It is based on the most powerful "algorithm" we know of today - the "cybernetic loop."

      Oh, yes, one more tiny thing. Since this is such a powerful "algorithm" or "paradigm" or way of doing things, much of Nature and your body already knew about it and uses it.

      Public Health is sort of vaguely discovering that the "action" step always needs to be followed with a "reflection" or "assessment" step, but hasn't yet caught on to the fact that it is reinventing the wheel, or more precisely, the cybernetic loop, yet one more time. It hasn't figured out that many smaller steps add up to a more powerful path-generator than one large step.

      And, sigh, enterprise budget processes don't reflect this wisdom. For years I fought with the fact that Universities tend to have "annual budget cycles", and enterprise computing is seen as coming in only two flavors: "maintenance" and "huge projects". Maintenance money can only be spent keeping things the same. Huge Project money ("capital budgets") can only be used to take, well, huge steps in a big straight line, and the big straight line, or "project plan" has to be computed up front and committed to before starting.

      Well, duh, no wonder that doesn't work. That CANNOT BE MADE TO WORK. There are too many unknowns and unknowables, too many rusty arms, too many trees.

      But every time it fails, the "solution" is to plan even LARGER steps next time, with a much BIGGER database that lists every single tree and bush and pothole. THEN, oh boy, you betcha we'll succeed.

      Nope. That's a bad algorithm, a bad paradigm. The cybernetic loop model tells us the answer is way back at the other end: continuous, incremental, small improvement steps. Steps driven by local "feedback" that doesn't even involve upper management.

      You can get to places you need to go with a million simultaneous tiny, sensible steps that people can understand that you cannot get to with one huge project, regardless how many billions you spend on "planning" it. Our whole accounting system, meant to help us spend money wisely, is causing us to spend it foolishly.

      As the IOM report realizes - "We don't need a billion dollar project -- we need a billion, one-dollar projects." (paraphrased from "Crossing the Quality Chasm"). This isn't "sour grapes" or "some dumb idea" -- this is the most profound wisdom humanity has come up with yet.

      It's kind of the Chinese approach. If every person picks up one piece of trash a day, it's way more successful than if every person sends $1000 per year into a central location where we build the Institute of Trash Pickup and study the trash-pickup problem and produce endless reports and finally some huge trash collection system that doesn't really work but is really expensive to maintain when they're not on strike (thank you, John Gall, for that insight.)

      Ditto for installation of some kind of automated physician order entry system or other massive cultural change in the way things are done. It may seem "hard" to figure out what huge new system, in one step, will get us from point A to point B. Hmmm. Maybe that's because there aren't any "one-step" solutions to getting through the forest, and we need to reconsider our approach. Maybe a million tiny adjustments will solve two problems at once: the "What do we do?" problem, and the ever-popular "How do we implement it?" problem.

      Ten thousand tiny search engines (people) each looking for one tiny step that is possible and totally understood that would help "a little bit" actually constitutes a "massively parallel supercomputer" that can outstrip almost any other way of "solving" BOTH of those problems simultaneously. That's really cool, because it turns out not to matter how great a solution is on paper or at some other site, if there's no way to get it implemented here without spilling the coffee and crashing the bus. That's the lesson Toyota learned. Forget central planning, which the Soviet Union demonstrated doesn't work. Empower the troops to use their eyes and brains and good judgement and make a million adjustments of 0.001 percent size.
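Those "ten thousand tiny search engines" can be sketched too. In this made-up example, each of many independent climbers is allowed only tiny, locally checked steps, and every one of them still finds the top of the hill with no central planner in sight:

```python
# A toy version of massively parallel tiny-step search: many independent
# climbers, each taking only small steps it can verify locally ("did that
# help?"), all converge on the peak. The landscape and numbers are invented.

def height(x):
    """The 'landscape' being searched: a single hill with its peak at 7."""
    return -(x - 7.0) ** 2

def tiny_step_climber(start, steps=200, step_size=0.1):
    """Try a small step each way; keep only steps that visibly improve things."""
    x = start
    for _ in range(steps):
        for candidate in (x - step_size, x + step_size):
            if height(candidate) > height(x):
                x = candidate
    return x

# Fifteen independent searchers, scattered across the landscape.
swarm = [tiny_step_climber(float(s)) for s in range(15)]
print(all(abs(x - 7.0) < 1e-6 for x in swarm))   # True: all found the peak
```

No climber ever knew where the peak was, and no one told them; each just made a million-adjustments-of-0.001-percent version of "a little better than where I am now."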

      It's an incredibly powerful algorithm. It doesn't require brilliant central planning officers. But it does require believing that the ground troops have enough brains to carry their coffee across the office without spilling it, even if they just waxed the floor. Turns out, according to Toyota, that's probably true.

      Oh, yes, I almost forgot. It would seem to make sense that, if this cybernetic doodad is so powerful, that it is in operation already in billions of places around us in society and biology. That would argue that it might be worthwhile to have cybernetic doodad detectors, and cybernetic doodad statistical tools available to use to spot and describe and tweak such thingies.

      Most of the last 6 months postings to this weblog have tried to make that argument, in more complex ways, and maybe that's my problem.

      The American Indians knew this - that the Great Spirit worked in circles, not lines. Taoism knows about circles and cycles. "Systems thinking" involves accepting that there are important places where feedback loops just might possibly be involved.

      We're so close now. Bring it home, baby!

      (Posted in memory of Don Herbert, "Mr. Wizard", who died last week, and taught millions of kids, including me, basic science-made-easy on his TV show.)