Showing posts with label planning organization theory. Show all posts

Friday, December 07, 2007

Building blocks for real progress

When I was an image-processing guru at Parke-Davis research, I used to tell people not to trust any result that depended on what threshold or cutoff they used to measure their pictures. Real results would be there at any cutoff, and publishing any other "results" would be regretted later. The same lesson applies to planning at all levels and time-frames.
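That robustness test can be sketched in a few lines. Everything here is hypothetical (made-up pixel intensities, made-up cutoffs); the point is only that a real effect survives every reasonable cutoff, while a threshold artifact flips or vanishes as the cutoff moves:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical pixel intensities for two sets of images (invented numbers):
control = rng.normal(100, 15, size=10_000)
treated = rng.normal(115, 15, size=10_000)

# The "result" is the difference in the fraction of bright pixels.
# Sweep the cutoff: a real effect keeps the same sign at every cutoff.
for cutoff in (90, 100, 110, 120):
    diff = (treated > cutoff).mean() - (control > cutoff).mean()
    print(f"cutoff {cutoff}: difference {diff:+.3f}")
```

If the sign of the difference depended on which cutoff you picked, that would be exactly the kind of "result" to distrust.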

So, if you or your company or your country comes up with some "great" plan that looks terrific on a 1-quarter or 1-year horizon, but looks wretched on a ten year horizon, you should "raise your hands slowly and step away from the vehicle."

Complaints that this makes your planning "harder" are both ill-founded and ill-advised. They are ill-advised because, if all anyone needs to do to plan is pick the obvious short-run "solution", then they can hire kids out of high school to do that, fire you, and save a bundle. They are ill-founded because this strategy will actually make your life much easier, once you-all master it. (Unfortunately, the "all" part is an issue, and we'll get to that below.)

There's a trick used in physics to solve very complex problems. It goes like this. First, you find anything, regardless how small, in fact the smaller the better, that works. Then you find a second something, also tiny, that also works. You keep on looking until you have a whole library of tiny things that work. Then, you build the actual solution you are after out of any combination of the tiny things, because, if they work separately, they'll also work together.
(Well, if you included that as a constraint when you were looking for things that "work".)

The tricky part is finding things that don't depend on the "accidental" world you happen to be in at the moment. The reason is simple - if part of the world around you isn't fixed, then it can change. If it changes, and your answer relied on it being true, then your answer is no longer true. This is just common sense.

Well, easier said than done. It's like the old medical school parting words: "Half of what you just learned is wrong -- we just don't know which half."

This, my friends, is what all this "diversity" is for. If the thingie you are considering for a tiny truth is actually true, and actually doesn't depend on context, then it will be as true for THEM, over there, as for you over here. If they are the same as you, you don't learn anything by comparing notes, because you all will share the same false assumptions and blind spots. But if they are different from you, then you can both learn something by comparing notes.

In fact, the more different from you they are, the more you can learn from them, from seeing what is true in their context. And the more different kinds of different you go through, the more wisdom you can get.

But, of course, "Half of what you know is wrong" is true for them too. So the real trick is not to focus on what you disagree on, but on what you agree on. If you pick a wide enough range of diverse contexts, and something seems to be true in all of them, that's a really good candidate for one of these building-block truths.

What does a building block look like? It's usually more like a condition or an adjective than a noun. So, "Half of what you know is wrong" is an example. It doesn't tell you which half, but it does tell you what not to rely on - namely, that your mental model is entirely correct. In other words, any strategy that doesn't include a feedback loop, a learning curve, and permission to challenge the internal mental map we're navigating with will be ill-advised.

Like Mapquest or Google maps -- any map, ANY map, that has been around long enough to exist is already wrong somewhere. Some bridge is out. Some road is under construction. Some road exists but is under water right now due to flooding. Some road never was there in the first place - the guys that look at satellite photos mistook a fence for a road. Etc.

Google Maps, as I recently discussed, now has a way that users who see something wrong can post a comment and even, gasp, a correction - within limits.

Both of those are crucial elements in this sort of dynamic feedback loop, if you want BOTH flexibility and stability. It has to be rigorous, but not rigid. It has to move, but be stable and reliable. It has to change with feedback, but not change too much all at once. For some changes, it has to require multiple people to assert that something is wrong before the change goes through, so no one person can mess up everything by accident or on purpose. It needs the properties that a good wiki has, like Wikipedia, of being "mostly right, almost all the time" - which, for navigation, is superior in most cases to something that was 100% right two years ago, when the last expert had a chance to check it. That last thing is like replacing the dirty windshield on your car with a crystal-clear video of what you would have seen - an hour ago - looking where you're looking.
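That "multiple people must agree before the map changes" rule can be sketched as a toy mechanism. The class name, the reporter names, and the 3-report quorum are all invented for illustration; the point is that no single reporter, honest or malicious, can flip the shared map alone:

```python
from collections import defaultdict

REQUIRED_REPORTS = 3  # invented quorum: no single person can change the map

class LivingMap:
    """Toy shared map that only applies a correction once enough
    independent observers have reported the same problem."""

    def __init__(self, edges):
        self.open_roads = set(edges)
        self.reports = defaultdict(set)   # road -> set of distinct reporters

    def report_closed(self, road, reporter):
        self.reports[road].add(reporter)  # a set, so repeats don't count twice
        if road in self.open_roads and len(self.reports[road]) >= REQUIRED_REPORTS:
            self.open_roads.discard(road)

m = LivingMap({"bridge", "main_st"})
m.report_closed("bridge", "alice")
m.report_closed("bridge", "alice")   # duplicate report from the same person
m.report_closed("bridge", "bob")
print("bridge" in m.open_roads)      # still open: only 2 distinct reports
m.report_closed("bridge", "carol")
print("bridge" in m.open_roads)      # now closed: quorum reached
```

The design choice is the interesting part: the quorum trades responsiveness for stability, which is exactly the "change with feedback, but not too much all at once" property.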

Even a very dirty "now" trumps a crystal-clear "then" for the purpose of not running into something in the road now.

So, moral of the story - treasure diversity because it may not be right but it will help you figure out where you might be wrong and where to look instead for things that are actually "true" in all cases, not just the few cases you and people like you have experienced and mistake for "every possible case."

Anyway, another point here is that most of our lives are spent, like a helicopter's blades, spinning in our own prop-wash. Most of the "world" humans live in today is urban, and in places like the US, most of that is manually constructed. And most of the "problems" we face on a daily basis weren't even here 1000 years ago -- they're problems that we invented by "solving" previous problems the wrong way. That is, what we called "solutions" were only solutions for the short run that make us happy today but that we'll really regret when we get back to them tomorrow.

Unfortunately, now every day is some prior day's "tomorrow", and every day we face the consequences of our own contempt for the future, which goes full cycle and comes back to us as, golly gee, contempt for us. We say "screw tomorrow, solve today" and then, when it's tomorrow and we get "screwed", we somehow fail to grasp that we did this to ourselves.

But now that the problem is worse than it was, we repeat the process. The more extreme the problems become, the shorter our planning horizon gets, and the sooner the bad side of our two-sided decisions comes back to haunt us.

So, almost everything the country seems to be striving for is actually some way, not to solve a problem, but to put off a little longer having to face a problem, which deep down we know is only going to make the problem worse when we finally are cornered and have no choice but to face it.

Example - all the budget nonsense that state governments are doing right now.

But many more things are examples of this. Most of science and technology is focused on a similar misguided mission. So, for example, it was a "great thing" for farmers in India to get electric water pumps that would pull up water from deep in the aquifer and let them grow way more food than before. Well, it was "great" as the population increased and their dependence on this steady stream of water and income grew -- until today, when the water has pretty much been exhausted. Now, they are back to facing poverty, but at a far larger scale than before. More water wasn't a solution, it was an "amplifying delay tactic". Let me call that an ADT.

Worldwide, our approaches to the "big problems" are all like this. They're like trying to deal with congestion on highways by adding lanes to the roads -- which lets more people live farther away, moves the suburbs outward, and then the lanes fill up again, and now, when the highway is closed, twice as much traffic has to snarl its way through back roads. We got a short-term win at the expense of a long-term loss. And the short-term win was only good for a short while, but the long-term loss seems to go on forever. Looking forwards, the short term looks inviting.

Looking backwards, we say "Why did I do that? Why do I keep on doing that? Why can't I learn?"

There's the point. The only way out of this perpetual cycle of digging ourselves deeper in a hole is to have a learning curve that is stronger than our short-term urgent greed.

They say "If you're in a hole, stop digging."

At this point, all our scientific energy seems to go into digging deeper - more water, more energy, more everything except what we actually need - namely, a social learning curve that helps us grow up enough to overcome the temptation, next time, to sacrifice the future for the present.

Once we master self-discipline and self-control, then cheaper energy and water might be nice, but right now they would only double the size of the problem -- like those science fiction monsters that grow with every attack on them.

We need to ask: what has ever in history allowed people to have a long-term perspective? How was it that Joseph could persuade everyone, thousands of years ago, that they should save up seven years of food to tide them over the bad years, and today we can't seem to plan one year ahead?

We have enough facts and enough knowledge. Now what we need is more wisdom.

Where does that come from? Well, that's the right question to be asking. We really need more wisdom, ahead of more health care insurance, lower (or higher) interest rates, different social policies on immigration, etc.

Back up a step everyone, and look at how we process the information we have now. Are we making good use of each other's wisdom and eyes? Nope.

Could we be? Yep.

Should we? Yep.

If we don't, what we will get will be more of the same, digging us deeper and deeper into tomorrow's hole with today's "solutions."

First, sharpen the axe. THEN cut down the tree. Right now, we have more of a club than an axe, and it's pretty bad at chopping down trees.

So, where do we store learning so that it has sufficient power over us "next time"? Well, it will have zero power over us unless we are willing to submit to some outside power.

Storing learning inside ourselves clearly doesn't work. Storing it outside will work only if we are in the habit of paying attention to, in fact, paying heed to some outside agency.

If we insist on being "free" from all outside agencies, we are also "free" from that annoying learning capacity, and "free" to keep on repeating the same mistakes over and over.

Some level of submission, of obedience, is required to make this work. This is a stunning concept to Americans, so I'll stop this post right here, for some reflection time.

In reality, having rigid bones gives a runner speed that a totally "free" jellyfish can't match.

This dynamic motion thing, listening to authority but also being able to update the authority feedback loop, that's what we need to master, based on any examples of it we can find anywhere at any level.

This is another one of those situations where the best situation is somewhere in the middle, not at the end where you'd think it would be. It's like "Little's Law", which I described in an earlier post, where the maximum throughput through a system comes not with things at "maximum capacity", but about 20% back from that. The maximum passenger throughput on airplanes is at about an 80% load factor. The maximum rate of processing patients through a hospital is in the 80-85% full range. Above those levels, due to congestion, everything slows down. The more you jam in, the slower it gets. If every nook and cranny is full, it stops entirely.
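The congestion effect shows up even in the textbook single-server queue (the standard M/M/1 model -- a sketch of mine, not tied to any particular airline or hospital data): the mean time a job spends in the system is W = 1/(mu - lambda), which explodes as utilization approaches 100%, and Little's Law (L = lambda * W) turns that into an exploding backlog:

```python
# Mean time in system for an M/M/1 queue: W = 1 / (mu - lam).
# Little's Law then gives the average number in the system: L = lam * W.
mu = 1.0                      # service rate (jobs per unit time)
for rho in (0.5, 0.8, 0.95, 0.99):
    lam = rho * mu            # arrival rate at this utilization
    W = 1 / (mu - lam)        # mean time a job spends in the system
    L = lam * W               # average number of jobs present
    print(f"utilization {rho:4.0%}: time in system {W:6.1f}, jobs present {L:6.1f}")
```

Going from 80% to 99% utilization multiplies the time in the system twenty-fold, which is the quantitative version of "the more you jam in, the slower it gets."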

Same with "freedom". The maximum value of "freedom" is not at the 100% free end, but more like the 85% free end. Some constraints are valuable. Some yielding and submitting to outside agencies, even if freely selected and elected, will be required, for maximum power.

Runners with stiff bones are more free than jellyfish. Think about it. WHAT you submit to matters, but the fact of submission is built into the system where you can't remove it.

So, one more pass through that concept, which is part of the "vertical loop" of feedback that I mention in prior posts and my Baha'i presentation.

People have no internal absolute reference point. People live in a context that "floats" or, in electrical terms, has no absolute "ground". Since everything you think is a mixture of your logic and your context, and the context floats, you have no absolute control, on a moment-by-moment basis, over whether you're right or wrong -- it will look the same to you from inside your context.

BUT, you have the external world to the rescue. You can build your learning in outside of you, where it is not subject to your own moods and context variations. That solves the "where to put it" problem, but it doesn't help if you (tomorrow, when you need it) refuse to listen to you (today, when you see the downside).

So you (tomorrow) will have to learn how to submit to you (today). That means you have to learn how to submit to an external agency. More than that, it means you have to be able to submit to the agency exactly when it matters, namely, when you disagree with it and are strongly tempted to override it.

So, you need to be in a very deep habit of obedience, and willing to let your external friends and agents tell you when you have gone off the deep end. That has to be practiced a lot during good times to hold up to the pressure when it's a bad time and you (then) really, really want to do the same bad thing that you (now) realize is a bad thing to do.

All of which only works if the thingie you are submitting to has an update mechanism so that as the world changes, it changes with it to keep the wisdom effect constant.

This idea of needing to voluntarily submit to an external authority on purpose is the basis of all government and corporate authority, but it also only works if the authority has an updatable map and an update process that's working. Otherwise, you've gone out the other side, from rigor to rigidity.

It's this dynamic stability in the sweet spot in the middle that we need to learn how to make more vivid, and more obvious, and learn how to think about it and talk about it with each other.
We need to see it working in action, and see when it breaks and how to fix it when it breaks.

If we get this vertical loop working, and the horizontal one I talk about elsewhere, we're most of the way home. These are, I believe, invariants. They are constants. Any culture, any adaptive agency, human, animal, machine, Martian, robot, whatever, has to have these adaptive loops in place to learn and to remember its learned lessons when it matters later.

Like any "tough love" the system has to be both demanding and nurturing and responsive. But, I don't think humanity can get past the current white-water point without solving this problem of self-governance and learning over time.

That's more important than new clever ways to delay the inevitable.

All behavior-governing systems face this dilemma or paradox. They need, like starch-based colloids or silly-putty, to be rigid in the short-run and flexible in the long-run.

The normative system called "science", which denies it's a religion but has the same problem if not the same properties, both denies that it listens to authority and insists on enforcing its authority. In the short run, live within the rules. In the long run, challenge the rules, discover that the earth goes around the sun, that genes "jump", that continents "drift", that inheritance goes around DNA as well as through it, etc. In the short run, live within the model and map; in the long run, update the model and map. Both have to happen at once.

All legal systems have to face this same problem -- relying for learning on precedent, but knowing when to break precedent because the situation is "unprecedented." When do "new" rules have to be put in place to end up with the "same" result that we used to get with the "old" rules but we don't get any more, because something out there has changed?

All religions have to face this problem. When are the "old ways" something we need to enforce, and when should they be overturned and changed to "new ways" to accomplish NOW the exact same EFFECT as what the old ways THEN accomplished - but don't any more.

So, this is a pretty "big" problem. It's universal. It's underneath government, corporate policies, religions, and science. We all have run up against this same problem, this same "elephant" that is not really a tree, and not a rope, and not a leaf.

It's not an infinitely complex thing. It's just an unfamiliar thing. It's about as complex as a Stirling engine, with a few parts and something that shifts "phase" and carries momentum over time. But it does have this structure that cannot be simplified any further without destroying it. We have to rise to it, not drag it down to the simpler level that we would prefer it to be.

We can do this. We have computers and simulation and animation and tools. The basic "governance cycle" is something we can model and learn about through simulation, since it's pretty hard or immoral to experiment on the real world, even though we all do every day.

We should be able to pool our notes from different contexts, looking for what it is that's constant across all of them. What's the same about religion, science, and commerce? That's where to start.

What's the same today as it was 3000 years ago? That's where to start.

Learning. Adaptation. Dynamic stability. Sort of a "bank" or reservoir or flywheel for "normative pressure" that we invest in during good times and can draw from (or have drawn down upon us) in bad times -- meaning, times when, left to our own devices, we'd do the wrong thing, but when, with social pressure, we'd do the right thing. This is a sort of flavor of "social capital." We need a flywheel for "obedience" as well, because if we don't learn that in the good times, we will surely not follow it when we disagree with it, even if everyone around us assures us that it's us that has changed, not the world, and that we're having a really bad day.

If we can build F-22 jets, we can figure out how to build a dynamically stable governing process, troubleshoot the one we have, and see why the results today are, shall I say, disappointing.

Monday, October 29, 2007

Central planning in a complex world


If the world is too complex to allow for long range planning, what should central management be spending its time doing?

As all the parts of the world, on many scales, start colliding and interacting, we now find ourselves inside what scientists would call a "complex adaptive system."

In that kind of world, nothing works the way you think it will, and everything has "unintended consequences" or "unforeseen side-effects." So, we might think that long-range central planning is impossible.

As usual, we're both right and wrong, and the situation is, well, "complex" and nuanced, and depends on what you mean by "planning."

Certainly "central planning" as practiced by Stalin in the Soviet Union or Mao in China ran into many unintended side effects, of the kind where millions of people died because the plans didn't seem to relate to reality on the ground.

But, today, with advanced supercomputers and high-speed global communications, now we can do central planning, right? Nope. Before, the problem was too little information. We zoomed right past the sweet spot of "just the right amount" of information, and now we're deep into "too much information!" and heading deeper at an ever faster rate.

So, yes, we could deliver the equivalent of a moving van full of 3-inch binders to a small leadership committee every day, and ask them to read that, digest it, and plan based on it -- but I think the problem is obvious. That will simply never work. There is not enough "bandwidth," regardless of how "smart" those people are, even to read that much new information, let alone digest it well enough to grasp the implications in "real time."

All technology is doing is further swamping the system, and that will never get better.

Actually, it's getting worse, because of the problem I've talked about before, that information is "context-sensitive" -- that is, the meaning of some "fact" is really only evident if you understand the context of the observation of that "fact." You can't just snip a fact out of context, slide it over to a central place, and expect it to mean the same thing there that it meant in context.

We all are familiar with this problem, yet, socially, we keep on pretending that it is some sort of local breakdown and that this is not a universal law. The problem is that it is a universal law. Information is not only context dependent -- it gets worse. Information is basically "fractal", like an evergreen where every branch, if looked at by itself, is the same shape as the tree, and each of its branches is the same shape, etc. There is, in other words, an infinite amount of information buried behind every detail, and under every rock, and in every "can of worms."

To try to "consolidate" this information and avoid the "moving van" of binders, each level of management "condenses" the information and "simplifies it." That process, alas, is "lossy", meaning, frankly, it doesn't work most of the time. What gets lost in translation are the key "details" that seem unimportant but that add up to changing the entire conclusion and outcome.
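A toy illustration of that lossy condensing (the sites and all the numbers are invented): each management layer reports one summary statistic upward, and the single detail that mattered disappears on the way to the top.

```python
import statistics

# Field data: one site doing fine, one failing badly (made-up numbers).
site_a = [98, 99, 97, 100, 98]
site_b = [40, 42, 38, 41, 39]

# Each management layer "condenses" its inputs to a single number:
layer1 = [statistics.mean(site_a), statistics.mean(site_b)]
layer2 = statistics.mean(layer1)

print(layer1)   # the failing site is still visible at this level
print(layer2)   # HQ sees ~69 -- a number that describes neither site,
                # and the fact that site_b is in crisis is gone
```

No amount of intelligence applied to the 69 can recover the crisis at site_b; the information was destroyed a layer below.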

So, this cannot be fixed by having "even smarter" people at the top of this pyramid of information distortion. By the time information gets to the "war room" all the relevant detail has been stripped out by well-meaning intermediaries. And, you can't skip the middle because the volume of detail is too much to handle, again regardless how smart you are.

So, what to do? The only way to deal with this is to realize that the concept of central planning and central "control" is fatally flawed, and to push decision-making outward and delegate it down as close to the point of action as possible, where the details still make sense.

So, we find in The Toyota Way, an emphasis on Genchi Genbutsu, or "go down and look for yourself, because whatever they told you is going on left out something important that will change your decision once you see it."

This is not because the people "at the top" are not smart -- it's because "smart" doesn't matter if you were handed the wrong problem to work on, and the wrong facts about it to use.

It is what is known as a "system problem" and it is "structural." It will not go away with better information processing. The details cannot always be ignored. In fact, most of the time the details matter. Information is not "compressible" on the huge scale we're trying to operate on these days.

So, again, what to do? If central planners cannot plan actions, there is still one thing they can do, and that is to plan processes that, when distributed out, will result in coherent and successful action.

(Actually I think it's even one more step removed, and the best they can do is to plan processes that will lead to emergence of local processes that when carried out locally, times a billion, will result in correct and coherent action - even in the total absence of a "central plan." )

This is the problem that Computer Science is dealing with today, under the handle "emergent computing" or "evolutionary computing" or "swarm computing" or some such thing. This is the problem IBM has to solve for the "operating system" for their Blue Gene supercomputer, which is really hundreds of thousands of processors consulting with each other about what each of them should do next.
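A minimal flavor of that kind of emergent coordination (my sketch, not IBM's actual scheduler): each node on a ring averages only with its two immediate neighbors, no node ever sees the global picture, and yet the whole swarm converges on the global mean.

```python
# Five nodes on a ring, each holding a local reading (made-up values).
values = [10.0, 2.0, 7.0, 1.0, 5.0]
global_mean = sum(values) / len(values)   # 5.0 -- but no node is told this

# Purely local rule: replace your value with the average of yourself
# and your two ring neighbors. Repeat many rounds.
for _ in range(200):
    n = len(values)
    values = [(values[i - 1] + values[i] + values[(i + 1) % n]) / 3
              for i in range(n)]

print([round(v, 6) for v in values])  # every node has converged to 5.0
```

The local rule preserves the total, so the only fixed point every node can reach is the global mean -- coherent global behavior with no central plan, which is the point of the paragraph above.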

So, the literature and research on this topic is buried in Computer Science, where managers and policy makers seldom tread.

The key take-away message, though, is that the problem for today, as viewed by Complex Systems people and Computer Scientists, is how to develop, discover, or evolve processes that lead to processes that lead to coherent adaptive action of the whole swarm.

Interestingly, as I understand it, that is largely the central focus of the Baha'i Faith as well: finding what processes lead to the emergence of locally relevant decision-making processes that still combine and work together instead of fragmenting, so that the whole thing hangs together with central unity and yet has the power of local eyes dealing with local issues, while percolating larger issues upwards and getting guidance on them downward.

This is the exact same focus that the Institute of Medicine has realized needs to be done to make health care safer, as described in "Crossing the Quality Chasm" -- local teams, which they call "microsystems", have to be realized and empowered to be self-managing based on real-time local information and feedback -- while, at the same time, still participating in larger scale coherence that can follow patients and patient care as it crosses from one such team to the next.

And, this is the same focus that Public Health has, as I learned at Johns Hopkins over the last few years. Aid and support for any group, whether teen-smokers in some rich suburb, or indigenous people in some remote country, has to be "culturally relevant" and rooted in local action, or it will suffer "tissue rejection" and be thrown out as soon as the intervention is over.

Central planning can realize there is, say, a problem with malaria that crosses teams, cultures, and nation-state boundaries - but the action has to be locally meaningful and sensible and fit with what else is going on locally, or it cannot work. Solutions cannot be imposed from above, as those that attempt to do so keep on discovering. Too much information is lost at the top.

I think these seemingly disparate groups need to pool their notes and cross-fertilize each other's thinking, because this is all the same problem surfacing in different places, manifesting itself in different worlds.

I guess if no one else is going to do that, or has already, it's time for me to start a "Wiki" so everyone can hang their fragment of knowledge on that framework and we can start to see what it adds up to, and where someone else has already solved that part of the problem.

Wade
(rainbow photo by me, on Flickr)