"The Politics of Counting Dead Iraqis"
Excerpts:
The Politics of Counting Dead Iraqis
On October 10, 2006, at the height of the American midterm campaign season, the distinguished medical journal The Lancet published online an article presenting a statistical estimate of the number of Iraqis who died as a result of the American invasion of their country in 2003. The estimate, 655,000 dead, was stunning because even the lower bound of its confidence interval was an order of magnitude greater than the highest estimates put forward to date. Perhaps not surprisingly, given the prominence of the Iraq war as a campaign issue, the article proved an immediate sensation, maintaining a top spot in the headlines for several news cycles.
All of a sudden, everyone from local newspaper editors to the president was weighing in on the number of Iraqi dead. In a press conference held early on the morning of the 11th, a reporter asked U.S. President George W. Bush whether he felt the study's estimate of 650,000 casualties was credible. Bush's response perfectly encapsulates the major substantive bones of contention that would emerge in subsequent media debates.
First, the estimate of 650,000 was simply too high to be believed, and the president reiterated his support for an estimate of 30,000 civilian deaths that he had been citing in press conferences for over a year. Second, he stated that the study’s purportedly scientific methodology had been “pretty well discredited,” thus making it perfectly reasonable to disbelieve the estimate. And third, he hinted that the exact number of Iraqis killed is not particularly meaningful in evaluating whether the war was, on balance, a good thing.
The article goes on to analyze what is going on in these different accounts. One thing it fails to mention is that many of the reported deaths, I think a majority, were not only civilians but women and children, who died as a result of the collapse of Iraq's health care infrastructure. Prior to the war, Iraq had one of the best health care systems in the Middle East.
I suspect another thing going on here is simply short memory. The US has had such a strong public health infrastructure (clean water, sanitation, refrigeration, food that is generally safe to eat) for so long that we have, as a culture, entirely forgotten what it was like before public health accomplished those things. Adding to the difficulty in remembering is the medical establishment's message, implied or actually stated, that the majority of the improvement in expected years of life in the USA is due to either drugs or medical care. About the closest public health can come in the press is second billing, as in this quote, with emphasis added to the part generally forgotten:
The period between 1930 and 1940 saw a sharply rising curve in longevity rates thanks to the widespread usage of antibiotics and *the much improved standards in cleanliness, hygiene, and sanitation*.
The reality is that the majority of such improvement was due to public health and hygiene, and occurred before widespread antibiotics and before the Hill-Burton Act.
However, since Americans take such infrastructure for granted, they have a blind spot about what the impact would be of destroying that infrastructure, or even of military moves such as blocking the importation or production of chlorine and other water-supply disinfectants.
Even washing hands seems to be a lost art, perhaps under the illusion that we can all simply take a pill in the morning if we "get sick". It seems so ... mundane, so much like "your grandfather's medicine." It doesn't even seem to be taught in school anymore. It's not that unusual to see a patient's infant drop a pacifier on the hospital floor and the parent pick it up and stick it back in the child's mouth without a second thought.
Research on high-reliability organizations (nuclear power plants, the US Army, commercial aircraft cockpits) has shown that once a concept or "mental model" takes root in a person's mind, it can become self-fulfilling and even self-protective, repelling dissent and quenching contrary data.
If a larger group of people is engaged, bad models can be revealed and rooted out, provided the group is fairly diverse and doesn't have a vested interest in the model. If the larger group is homogeneous and does have a vested interest in the model, it becomes difficult or impossible to displace the model with contrary evidence, often even overwhelming contrary evidence. A million times over, our perception says, "No, that fact doesn't agree with my model, so it must be an error; I'll simply discard it before even showing it to the conscious person here."
In extreme cases, the mental model becomes institutionalized. The satellites monitoring the ozone layer failed to detect the hole for years because they had been programmed to treat very low readings as instrument errors that should not even be recorded or reported.
The same error occurred in radio astronomy. Pulsars, the radio-frequency strobe lights of the sky and among the brightest objects at those frequencies after the sun and the galactic center, were missed for years because "everyone knew" nothing was there, so filters were built into the equipment to discard all short-burst signals as "obvious noise."
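The pattern behind both stories can be stated in a few lines of code. Here is a toy sketch in Python (the thresholds and readings are invented for illustration; this is not the actual satellite or observatory software) of a quality-control filter that throws away whatever its built-in model says is impossible, so the real anomaly never reaches an analyst:

```python
# Toy sketch of an institutionalized mental model: a quality-control filter
# that discards "impossible" readings. All numbers are invented for
# illustration; this is not the real satellite or observatory software.

EXPECTED_MIN = 180  # hypothetical lower bound the model considers plausible
EXPECTED_MAX = 650  # hypothetical upper bound

def qc_filter(readings):
    """Keep only readings the model says are possible; flag the rest as noise."""
    kept, discarded = [], []
    for r in readings:
        if EXPECTED_MIN <= r <= EXPECTED_MAX:
            kept.append(r)
        else:
            discarded.append(r)  # "obviously" an instrument error; never analyzed
    return kept, discarded

# A run of measurements in which the real signal IS the anomaly:
readings = [310, 295, 300, 120, 110, 95, 305, 290]  # the low values are genuine

kept, discarded = qc_filter(readings)
print("analyzed:", kept)               # analysts only ever see these
print("silently dropped:", discarded)  # the "ozone hole" lives here
```

The filter does exactly what it was told to do; the blind spot is baked in before any human ever looks at the data.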
According to one of my B-school professors, who was involved decades ago in the creation of the Pentagon's original high-tech "war rooms", the same filtering phenomenon occurs there: the chain of command filters out "discrepant data" on the way up, so that by the time the generals see the situation, all discrepant evidence has been "helpfully" removed.
I think the same thing may occur in many private corporations as well. The ability to "see" what is going on outside from inside the windowless boardroom can be invisibly limited in these subtle ways. On a larger scale, one could imagine an entire country's news organizations deciding incorrectly what constituted "news" and helpfully eliminating "noise", so that the viewing public was making good decisions, given what it had to work with, but what it had to work with had some serious gaps.
No "conspiracy" is required, only the work of "group-think" that hasn't been adequately detected and balanced out.
The banner of the New York Times promises "All the News That's Fit to Print." All too easily, though, the subtle, pre-perceptual "system effects" of our model-driven sensory networks, straining to sustain a concept in a noisy environment, make only a surprisingly small fraction of the world available to us through the filter. What we end up with is "All the news that fits our mental model we print; the rest we discard before you see it."
This seems to be a surprisingly common phenomenon, which tells me that we really underestimate the power of a concept to act like a living thing, defending its own turf and existence by twisting our ability to see what is actually going on.
As in MIT's "Beer Game", discussed in Peter Senge's The Fifth Discipline and sketched in toy form below, it is just hard for us humans to recognize situations where there is no person and no conspiracy to blame for a surprisingly consistent result brought about by "system effects." We keep seeing it, it doesn't fit, so we reject it.
Which, of course, is expected, since the inability to perceive system effects is itself a system effect. We not only have a blind spot; we have a blind spot about what it takes to admit that we may have a blind spot. And that is a recipe for a perfect niche habitat for bad things to inhabit without risk of being disturbed.
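For readers who like to see the mechanism, here is a minimal toy simulation loosely inspired by the Beer Game (all delays, inventory targets, demand numbers, and the ordering rule are my own invented simplifications, not the official game's rules). Every tier follows a locally sensible rule, nobody cheats, and a single modest step in customer demand still produces amplified, oscillating orders upstream:

```python
# Toy "bullwhip" simulation in the spirit of MIT's Beer Game. All parameters
# are invented for illustration. Simplifications: unfilled demand is dropped
# rather than backordered, and each tier's own order always arrives in full
# after DELAY weeks, regardless of upstream stock.

TIERS = ["retailer", "wholesaler", "distributor", "factory"]
DELAY = 2    # assumed shipping delay between tiers, in weeks
TARGET = 12  # assumed inventory level each tier tries to hold
WEEKS = 30

inventory = {t: TARGET for t in TIERS}
pipeline = {t: [4] * DELAY for t in TIERS}  # goods already in transit to each tier
orders_placed = {t: [] for t in TIERS}

def customer_demand(week):
    return 4 if week < 5 else 8  # a single, permanent step up in demand

for week in range(WEEKS):
    demand = customer_demand(week)
    for tier in TIERS:
        # Receive whatever was ordered DELAY weeks ago.
        inventory[tier] += pipeline[tier].pop(0)
        # Ship what the downstream tier asked for, up to what is on hand.
        shipped = min(demand, inventory[tier])
        inventory[tier] -= shipped
        # Naive local rule: replace demand and close the inventory gap.
        order = max(0, demand + (TARGET - inventory[tier]))
        orders_placed[tier].append(order)
        pipeline[tier].append(order)  # arrives here after the delay
        demand = order                # my order is the next tier's demand

for tier in TIERS:
    print(f"{tier:12s} peak order: {max(orders_placed[tier]):3d}")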