Showing posts with label decision-making. Show all posts

Friday, November 02, 2007

Decentralized sense-making in a cluttered world



If central planning isn't helpful for sense-making in a complex and cluttered world, what is?

The world of "image-processing" in computing has come up with some techniques that seem like interesting models for action. I want to describe one that I've used in the past. You don't need any math for this; I've tried to make it easy to follow.

The problem we had involved finding the edges of a brain tumor in a three-dimensional Magnetic Resonance Imaging (MRI) scan. This is actually a set of "slices", stacked like a deck of cards, across a section of the brain.

Each slice looks something like this picture, which is a cross-section image I pulled off the web from the NIH Image database of public sample images. That's a vertical "slice" through someone's head, facing to the left. (Note - the person isn't actually sliced or injured - the computer just makes it look that way!)

Maybe if you think of baking an orange into a loaf of bread, and then running it through a bread slicer -- you get the image of a stack of slices, starting with all bread and no orange, then really small circles of orange, then slices with larger circles, then smaller again, and finally bread slices with no orange at all. Our job is to find the orange in the pictures of the slices of bread and reconstruct what it looks like in 3-D.

If there is or might be a tumor, it's important to find the edges as accurately as possible, based on these kinds of images. That's not as easy as you might think, because when you zoom in to high magnification, the images are actually pretty blurry and "noisy", and it's hard to tell where, exactly, an "edge" is.

Here's some structure in a brain, probably not a tumor, for illustration. If you click on the image, you can zoom it up and see some sort of black dot with a white border fairly easily in the upper right, "Slice #19". But if you look at the previous slice, the next "card in the deck of cards", "Slice #18", the edges of this are less distinct and this slice of the orange is smaller.


Similarly, around slice 20 maybe we can still be fairly sure we "see" the edges of the white structure, but by slice 21 it's not clear what is that structure and what is just normal tissue.

And, we're using the magic of human eyes. We want some way the computer can do a better job than people at finding the edges of a structure, once a trained radiologist points it out. (This was all done over a decade ago and I suspect they have way better tools today, by the way.)

Anyway, let me describe how the edges can be found. Look at it on one slice first. Imagine surrounding the tumor with a ring of people attached to each other by stretchable elastic cords or "slinkies" or springs. In this picture I just drew red dots instead of people, but you get the idea. Pretend that's the view from above of many people with red hats connected to each other with adjustable bungee cords.

Then you tell each person: when you get to a place where you look down and see dark changing to light rapidly, that might be the edge of the tumor, so dig in your heels and try to stay there. But, at the same time, you start making the springs stronger, pulling the people toward each other.

As you do that, initially with the springs fairly stretchy and loose, the circle starts being pulled smaller and smaller, like a drawstring tightening on a purse. When each person gets to what seems like it might be an edge, they try to dig in their heels and stop moving, independently.

After this has gone on a while, you may end up with something like this:


You can see that most of the people have found the edge of the tumor and dug in their heels there. But people #1 and #2 found a bright edge that is probably just "noise". And the people numbered "3" have found something where it's hard to tell whether it's tumor or noise.

How should they decide?

In this technique, if you just start tightening the springs, at some point the collective pulling force of the majority will break #1 and #2 loose from the feature they are snagged on, and they'll snap into place around the tumor.

Based on just this slice, the people labelled 3 may not move, because maybe that's actually an edge. (Tumors don't have to be round - they can be irregular.)

Well, how do the people at #3 decide? Here's the trick. While all this is going on on this slice, the same thing is going on on all the other slices, and springs connect the dots / people across the slices. In other words, we actually have a sphere of dots / people, connected by springs, kind of like an over-inflated balloon, and we let it slowly deflate in three dimensions at once, around the feature in all the slices.

In other words, there is not enough information in the vicinity of any one person to sort out image from noise with certainty. But most of them are nearly right; we just don't know which ones those are. So, within each slice (or region) the people consult with each other, while at the same time consulting across regions as well, with a mix of believing their own eyes and enough humility to know, at some point, to let go and go with the crowd.

This seems like a very simple plan, and no computer is required. It turns out to be a very powerful technique ("algorithm") that does a remarkably good job at sorting out "noise" from "signal" in 3-dimensions, with only trivial programming required.
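In fact, the single-slice version fits in a few lines. Here's a toy sketch in Python (everything here is invented for illustration: the synthetic disk standing in for the tumor, the parameter values, and the "dig in when the edge is strong" rule are my assumptions, not the original code):

```python
import numpy as np

def blur(img, passes=3):
    """Crude smoothing so the 'edge' is a few pixels wide, not razor thin."""
    out = img.astype(float)
    for _ in range(passes):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
                   + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

def edge_strength(img):
    """0 where the image is smooth, 1 where dark changes to light fastest."""
    gy, gx = np.gradient(blur(img))
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-12)

def shrink_contour(img, n_points=40, start_radius=40, steps=300,
                   pull=0.1, drawstring=0.3):
    """A ring of 'people' joined by springs slowly tightens; each person
    slows down and finally digs in where the local edge strength is high."""
    s_map = edge_strength(img)
    h, w = img.shape
    center = np.array([w / 2.0, h / 2.0])
    angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    pts = center + start_radius * np.column_stack([np.cos(angles), np.sin(angles)])
    for _ in range(steps):
        # spring force: each person is pulled toward the midpoint of neighbors
        spring = (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0)) / 2.0 - pts
        # drawstring force: gentle pull toward the ring's own centroid
        shrink = pts.mean(axis=0) - pts
        # look down: how strong is the edge under each person's feet?
        ij = np.clip(pts.astype(int), 0, [w - 1, h - 1])
        s = s_map[ij[:, 1], ij[:, 0]]
        # strong edge -> dig in heels (factor near 0); smooth ground -> keep moving
        heels = np.clip(1.0 - 3.0 * s, 0.0, 1.0)[:, None]
        pts = pts + (spring + drawstring * shrink) * pull * heels
    return pts

# synthetic "slice": a bright disk of radius 15 at the center of a 100x100 image
yy, xx = np.mgrid[0:100, 0:100]
img = ((xx - 50) ** 2 + (yy - 50) ** 2 <= 15 ** 2).astype(float)

pts = shrink_contour(img)
radii = np.linalg.norm(pts - 50.0, axis=1)
print(round(radii.mean(), 1))  # the ring settles near the disk's edge
```

Tightening the springs over time, as described above, is what eventually breaks loose the people snagged on small noise features; the sketch keeps the weights constant for simplicity.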

Each person / dot simply has to pay attention to what it sees ("independent investigation"), but balance that with consulting with neighboring people and, at some point, yielding to peer pressure and moving into line. If the balance of these two competing forces is right, the overall network turns out to be a very powerful analog computer that can solve a problem we have trouble even defining well.

No single person ever needs to "see" everything or see "the big picture" - he just needs to see his neighbors, compare notes, argue for his position, and, if it seems warranted, yield to the majority. If enough different dots do this, coming in from enough different directions at once ("diversity"), and remain independent yet consulting ("unity in diversity"), the algorithm works. The powerful solution "emerges" from each person's behavior.
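That balance of independence and consultation can be sketched as a toy consensus loop (the readings, the weights, and the ring arrangement are all made up for illustration):

```python
import numpy as np

def consult(readings, own_weight=0.4, rounds=60):
    """Each agent repeatedly blends its own estimate with the average of
    its two neighbours on a ring: believe your own eyes, but with enough
    humility to drift toward the crowd when it disagrees."""
    x = np.asarray(readings, dtype=float)
    for _ in range(rounds):
        neighbours = (np.roll(x, 1) + np.roll(x, -1)) / 2.0
        x = own_weight * x + (1.0 - own_weight) * neighbours
    return x

# ten agents measure the same edge position: eight see roughly 5.0,
# while two (our people #1 and #2) are snagged on "noise" at 9.0
readings = [5.1, 4.9, 5.0, 9.0, 9.0, 5.2, 4.8, 5.0, 5.1, 4.9]
settled = consult(readings)
print(settled.round(2))  # every agent settles near the group estimate of 5.8
```

Notice the settled value (about 5.8) isn't quite the honest majority's 5.0: the two snagged agents yielded to the crowd, but their "argument" still nudged the final answer a little, which is exactly the mix of independence and humility described above.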

In image processing this is called an "active contour" (or "snake") technique. It is part of the larger class of techniques called "swarm computing", which is becoming increasingly popular as "the power of crowds" is increasingly appreciated.

One area this could be used is any sort of boundary measurement, or aligning fragments of images to make a coherent overall picture. Examples of these, and my US Patent 5613013 on image alignment using effectively a swarm technique, are described on my web site here.

Wade

Monday, October 29, 2007

Central planning in a complex world


If the world is too complex to allow for long range planning, what should central management be spending its time doing?

As all the parts of the world, on many scales, start colliding and interacting, we now find ourselves inside what scientists would call a "complex adaptive system."

In that kind of world, nothing works the way you think it will, and everything has "unintended consequences" or "unforeseen side-effects." So, we might think that long-range central planning is impossible.

As usual, we're both right and wrong, and the situation is, well, "complex" and nuanced, and depends on what you mean by "planning."

Certainly "central planning" as practiced by Stalin in the Soviet Union or Mao in China ran into many unintended side effects, of the kind where millions of people died because the plans didn't seem to relate to reality on the ground.

But, today, with advanced supercomputers and high-speed global communications, now we can do central planning, right? Nope. Before, the problem was too little information. We zoomed right past the sweet spot of "just the right amount" of information, and now we're deep into "too much information!" and heading deeper at an ever faster rate.

So, yes, we could deliver the equivalent of a moving van full of 3-inch binders to a small leadership committee every day, and ask them to read that, digest it, and plan based on it -- but I think the problem is obvious. That will simply never work. There is not enough "bandwidth," regardless of how "smart" those people are, even to read that much new information, let alone digest it well enough to grasp the implications in "real-time."

All technology is doing is further swamping the system, and that will never get better.

Actually, it's getting worse, because of the problem I've talked about before: information is "context-sensitive" -- that is, the meaning of some "fact" is really only evident if you understand the context of the observation of that "fact." You can't just snip a fact out of context, slide it over to a central place, and expect it to mean the same thing there that it meant in context.

We all are familiar with this problem, yet, socially, we keep on pretending that it is some sort of local breakdown and that this is not a universal law. The problem is that it is a universal law. Information is not only context dependent -- it gets worse. Information is basically "fractal", like an evergreen where every branch, if looked at by itself, is the same shape as the tree, and each of its branches is the same shape, etc. There is, in other words, an infinite amount of information buried behind every detail, and under every rock, and in every "can of worms."

To try to "consolidate" this information and avoid the "moving van" of binders, each level of management "condenses" the information and "simplifies it." That process, alas, is "lossy", meaning, frankly, it doesn't work most of the time. What gets lost in translation are the key "details" that seem unimportant but that add up to changing the entire conclusion and outcome.

So, this cannot be fixed by having "even smarter" people at the top of this pyramid of information distortion. By the time information gets to the "war room" all the relevant detail has been stripped out by well-meaning intermediaries. And, you can't skip the middle because the volume of detail is too much to handle, again regardless of how smart you are.

So, what to do? The only way to deal with this is to realize that the concept of central planning and central "control" is fatally flawed, and to push decision-making outward, delegating it down as close to the point of action as possible, where the context still makes sense.

So, we find in The Toyota Way, an emphasis on Genchi Genbutsu, or "go down and look for yourself, because whatever they told you is going on left out something important that will change your decision once you see it."

This is not because the people "at the top" are not smart -- it's because "smart" doesn't matter if you were handed the wrong problem to work on, and the wrong facts about it to use.

It is what is known as a "system problem" and it is "structural." It will not go away with better information processing. The details cannot always be ignored. In fact, most of the time the details matter. Information is not "compressible" on the huge scale we're trying to operate on these days.

So, again, what to do? If central planners cannot plan actions, there is still one thing they can do, and that is to plan processes that, when distributed out, will result in coherent and successful action.

(Actually I think it's even one more step removed, and the best they can do is to plan processes that will lead to the emergence of local processes that, when carried out locally, times a billion, will result in correct and coherent action - even in the total absence of a "central plan.")

This is the problem that Computer Science is dealing with today, under the handle "emergent computing" or "evolutionary computing" or "swarm computing" or some such thing. This is the problem IBM has to solve for the "operating system" for their Blue Gene supercomputer, which is really hundreds of thousands of processors consulting with each other about what each of them should do next.

So, the literature and research on this topic is buried in Computer Science, where managers and policy makers seldom tread.

The key take-away message, though, is that the problem for today, as viewed by Complex Systems people and Computer Scientists, is how to develop, discover, or evolve processes that lead to processes that lead to coherent adaptive action of the whole swarm.

Interestingly, as I understand it, that is also largely the central focus of the Baha'i Faith, which focuses on finding which processes lead to the emergence of locally relevant decision-making processes that still combine and work together instead of fragmenting -- so that the whole thing hangs together with central unity, and yet has the power of local eyes dealing with local issues, while percolating larger issues upward and getting guidance on those downward.

This is the exact same focus that the Institute of Medicine has realized is needed to make health care safer, as described in "Crossing the Quality Chasm" -- local teams, which they call "microsystems", have to be recognized and empowered to be self-managing based on real-time local information and feedback -- while, at the same time, still participating in larger-scale coherence that can follow patients and patient care as they cross from one such team to the next.

And, this is the same focus that Public Health has, as I learned at Johns Hopkins over the last few years. Aid and support for any group, whether teen-smokers in some rich suburb, or indigenous people in some remote country, has to be "culturally relevant" and rooted in local action, or it will suffer "tissue rejection" and be thrown out as soon as the intervention is over.

Central planning can realize there is, say, a problem with malaria that crosses teams, cultures, and nation-state boundaries - but the action has to be locally meaningful and sensible and fit with what else is going on locally, or it cannot work. Solutions cannot be imposed from above, as those that attempt to do so keep on discovering. Too much information is lost at the top.

I think these seemingly disparate groups need to pool their notes and cross-fertilize each other's thinking, because this is all the same problem surfacing in different places, manifesting itself in different worlds.

I guess if no one else is going to do that, or has already, it's time for me to start a "Wiki" so everyone can hang their fragment of knowledge on that framework and we can start to see what it adds up to, and where someone else has already solved that part of the problem.

Wade
(rainbow photo by me, on Flickr)

Sunday, October 14, 2007

U.S. Army as a learning organization

I've praised the U.S. Army as a model "learning organization" that has evolved a way to ask "hard questions" and internally debate extremely contentious issues, and to learn from its "mistakes" and improve next time.

Please note that I am very carefully trying to avoid stating any position regarding what decisions got made, in the interest of focusing on the underlying process of decision-making and mental-model adaptation itself. How did that work? Did it work well? How could it be tweaked so it would work better, not just for one specific instance, but in the general case, from now on?

In short, what can we learn from this experience that will be a permanent step upwards in how we make important decisions collectively, as a country, with both free speech and a command structure to balance? What can we learn that we can apply to any organization's leadership?

The "unity" above the "diversity" of these two almost-opposing interests is the theme. Where is the sweet spot where we can rise above the conflict and satisfy both interests without compromising either?

That's the serious question all sides should agree is worth asking.

Then, when we're done looking at the smaller problem, we need to switch to a different lens and look at the larger question of how the American people and Congress worked in terms of utilizing information, interests, and politics to make the decisions involved. Did that work? Are people happy, looking back? Can that be improved? Can we learn something?

If so, what? If not, why not?

Is something interfering with our ability to learn from the past and adapt to the future? If so, what is it? What can we do about it? As with "the Toyota Way", we need to do what we were discouraged from doing in grade school, and keep on asking "Why?" at least 5 times trying to dig back to "root-causes" and go far enough to find the upstream things that can, in fact, be changed.


The lesson we should have learned from looking at Toyota's spectacular performance, and the "Making the Impossible Possible" video, is that what is in the way is mostly simple cynicism and the incorrect belief that "nothing can be done" and "We have to live with that." Toyota's lesson in "lean processing" is "No, you don't. In fact, you must not put up with it. Stop and fix it!"

It often turns out that the cynicism is both unjustified and unsupportable. Change can happen, over time, a little bit at a time, with persistent efforts by everyone. Toyota has proved that.

Maybe, there are better ways and better models for us consulting with each other to make hard decisions about emotionally charged issues.

So, today's NY Times has a relevant article that hits many of those points, particularly the dynamic tension between keeping the command and control structure (and the US Constitution) in place, but also keeping the flow of surprising news going upward, so that we're not trying to violate the basic law of cybernetics and operate with the eyes disconnected from the hand.

For those at my talk Friday, here's the relevant image:


I added emphasis to the excerpts below.


At an Army School for Officers, Blunt Talk about Iraq
New York Times
October 14, 2007
by Elisabeth Bumiller

FORT LEAVENWORTH, Kan. — Here at the intellectual center of the United States Army, two elite officers were deep in debate at lunch on a recent day over who bore more responsibility for mistakes in Iraq — the former defense secretary, Donald H. Rumsfeld, or the generals who acquiesced to him.

No, Major Montague shot back, it was more complicated: the Joint Chiefs of Staff and the top commanders were part of the decision to send in a small invasion force and not enough troops for the occupation. Only Gen. Eric K. Shinseki, the Army chief of staff who was sidelined after he told Congress that it would take several hundred thousand troops in Iraq, spoke up in public.

“You didn’t hear any of them at the time, other than General Shinseki, screaming, saying that this was untenable,” Major Montague said.

... Here at the base on the bluffs above the Missouri River,... rising young officers are on a different journey — an outspoken re-examination of their role in Iraq.

Discussions between a New York Times reporter and dozens of young majors in five Leavenworth classrooms over two days — all unusual for their frankness in an Army that has traditionally presented a facade of solidarity to the outside world — showed a divide in opinion. Officers were split over whether Mr. Rumsfeld, the military leaders or both deserved blame for what they said were the major errors in the war: ...

But the consensus was that not even after Vietnam was the Army’s internal criticism as harsh or the second-guessing so painful, and that airing the arguments on the record, as sanctioned by Leavenworth’s senior commanders, was part of a concerted effort to force change.

On one level, second-guessing is institutionalized at Leavenworth, home to the Combined Arms Center, a research center that includes the Command and General Staff College for midcareer officers, the School of Advanced Military Studies for the most elite and the Center for Army Lessons Learned, which collects and disseminates battlefield data.

...The goal at Leavenworth is to adapt the Army to the changing battlefield without repeating the mistakes of the past.

Much of the debate at Leavenworth has centered on a scathing article, “A Failure in Generalship,” written last May for Armed Forces Journal by Lt. Col. Paul Yingling, an Iraq veteran and deputy commander of the Third Armored Cavalry Regiment who holds a master’s degree in political science from the University of Chicago. “If the general remains silent while the statesman commits a nation to war with insufficient means, he shares culpability for the results,” Colonel Yingling wrote.

The article has been required class reading at Leavenworth, where young officers debate whether Colonel Yingling was right to question senior commanders ...

Discussions nonetheless focused on where young officers might draw a “red line,” the point at which they would defy a command from the civilians — the president and the defense secretary — who lead the military.

“We have an obligation that if our civilian leaders give us an order, unless it is illegal, immoral or unethical, then we’re supposed to execute it, and to not do so would be considered insubordinate,” said Major Timothy Jacobsen, another student. “How do you define what is truly illegal, immoral or unethical? At what point do you cross that threshold where this is no longer right, I need to raise my hand or resign or go to the media?”

But Colonel Fontenot, who commanded a battalion in the Persian Gulf war and a brigade in Bosnia and has since retired, said he questioned whether Americans really wanted a four-star general to stand up publicly and say no to the president of a nation where civilians control the armed forces.

For the sake of argument, a question was posed: If enough four-star generals had done that, would it have stopped the war?

“Yeah, we’d call it a coup d’etat,” Colonel Fontenot said. “Do you want to have a coup d’etat? You kind of have to decide what you want. Do you like the Constitution, or are you so upset about the Iraq war that you’re willing to dismiss the Constitution in just this one instance and hopefully things will be O.K.? I don’t think so.”

Some of the young officers were unimpressed by retired officers who spoke up against Mr. Rumsfeld in April 2006. The retired generals had little to lose, they argued, and their words would have mattered more had they been on active duty. “Why didn’t you do that while you were still in uniform?” Maj. James Hardaway, 36, asked.

Yet, Major Hardaway said, General Shinseki had shown there was a great cost, at least under Mr. Rumsfeld. “Evidence shows that when you do do that in uniform, bad things can happen,” he said. “So, it’s sort of a dichotomy of, should I do the right thing, even if I get punished?”

One question that silenced many of the officers was a simple one: Should the war have been fought?

“That’s a big, open question,” General Caldwell said after a long pause.