Showing posts with label authority mindfulness. Show all posts

Wednesday, October 28, 2009

Mindfulness and fighting wild fires, and the value of simulations for training

Professor Karl Weick at the University of Michigan has written extensively on the need for "mindfulness" in emergency situations, such as, literally, fighting forest fires.

A mindful crew or crew-chief will be aware that they are operating on a mental model, and that model may be incorrect, so they must be alert to even very small signs that they have completely misconstrued the situation.

There are lessons here, on a longer time scale, for every leader, civilian or military.

Here are some public documents on the subject.

http://www.wy.blm.gov/fireuse/2009mtg/presentations/HROs-mindfulness.ppt

Teaching Mindfulness to Wildland Firefighters (Fire Management Today, Spring 2008, Dave Thomas)

For the last 3 years I have taught half-day workshops, conducted 1-hour lectures, and provided general awareness speeches about the Weick/Sutcliffe model of High Reliability Organizing as described in their book Managing the Unexpected: Assuring High Performance in an Age of Complexity.

This article is a series of musings, conjectures, and recommendations pulled from this teaching experience. My intent is to pass on some of the lessons that I have learned teaching High Reliability Organizing, and to pose recommendations for further study...

Today, however, mainly due to the heating of the Earth through global warming and a build-up of fuels, firefighters are working within an environmental framework of weather and fuel never experienced before. Errors that we might have "got away with" in the past could more easily become catastrophic today....

Next, I explain the irrationality (mindlessness) of always learning our primary safety lessons through trial and error. It is our job to be better at anticipating errors before they occur, before a brutal audit forces us to notice the discrepant events in the fire environment. The following quotation, which reinforces this view, is taken from French disaster expert Pat Lagadec:

"The ability to deal with a crisis situation is largely dependent on structures that have been developed before chaos arrives. The event can ... be considered an abrupt brutal audit: at a moment's notice, everything that was left unprepared becomes a complex problem, and every weakness comes rushing to the forefront."...

High Reliability Organizing

NEW! France-USA High Reliability Organizing in Incident Management Teams Project
Just like NYPD detective "Popeye" Doyle, who traveled to Marseilles in the 1970s hit movie "The French Connection," so too did a Forest Service NIMO team this past December. Only it wasn't for crime-busting this time. It was a landmark match-up between French and American Incident Management Teams to capture what makes these teams so successful in complex, rapidly changing, stressful situations. It is hypothesized that they exhibit many behaviors that directly align with high reliability organizing (HRO) concepts and principles.

( More to come)


More information:

The France-USA HRO Project (French Web Site, from Bouches du Rhone with video)
http://hro-fires.com/exercices_live.html

High-Reliability Organizing - Roberts, with Weick and Sutcliffe:
http://www.wildfirelessons.net/HRO.aspx

Center for Catastrophic Risk Management, Berkeley, CA
http://ccrm.berkeley.edu

Communication and Information technologies:
New tools for DISASTER management
Jean-Michel DUMAZ (1)
Bouches-du-Rhône Fire Department – MARSEILLE - FRANCE
2nd International Conference on Urban Disaster Reduction
November 27~29, 2007

The Bouches du Rhône
Fire Department


Wade

Monday, December 17, 2007

New life forms from Synthetic DNA - Washington Post


The Washington Post today deals with "Synthetic DNA on the brink of Creating New Life Forms." Talk about children playing with matches... Rick Weiss begins " It has been 50 years since scientists first created DNA in a test tube..." I'd add - it has also been 50 years since Jay Forrester's classic piece on "unintended consequences."

Here was my reply:

wade2 wrote:
Bio-error indeed. Maybe error-gance is the bigger threat, and very real. Our social approach to low-odds of very-high-risk accidents, as Carl Sagan pointed out re return of samples from Mars, is completely overwhelmed by our normal intuition. At Los Alamos, the first atomic bomb was tested when only a minority of the scientists on the project (something like 6 of 14) thought it would detonate the earth's crust and explode the entire planet. No one was sure, so they tested it. Hmm.

Good books like "Lethal Arrogance" by Dumas and "Normal Accidents" by Perrow detail hundreds of examples of our tendency to run it till it breaks, and then, only then, stop to think.
The tools to even begin to think about the way coupled feedback-loops get their job done, such as System Dynamics, have languished for 50 years. MIT's John Sterman, in "Business Dynamics: Systems Thinking and Modeling for a Complex World", details the lack of correct intuition, even in the MIT community, brighter than most. PhDs don't generally help, and most of us have less to work with.

So, at best we can model and simulate, which has been done at the Santa Fe Institute for the last few decades, with "artificial life" - virtual life and virtual DNA, genetic algorithms breeding and evolving, to see what happens. http://www.santafe.edu/ describes the work of many Nobel Prize winners.
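The "genetic algorithms breeding and evolving" idea is easy to sketch. Here is a minimal toy version; the all-ones fitness target and every parameter are illustrative inventions of mine, not anything from the Santa Fe Institute's actual models:

```python
import random

def evolve(pop_size=50, genome_len=16, generations=100, seed=0):
    """Toy genetic algorithm: evolve random bit-strings toward all ones."""
    rng = random.Random(seed)

    def fitness(genome):
        return sum(genome)  # count of 1-bits

    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)        # pick two parents
            cut = rng.randrange(1, genome_len)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(genome_len)          # single point mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(fitness(g) for g in pop)
```

Even in a toy like this, selection plus mutation reliably discovers structure the programmer never spelled out, which is the point of the artificial-life experiments.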

In short (1) the little buggers are far smarter than we are and (2) parasitism evolves almost instantly in every case. The lesson of the movie Jurassic Park is a mild taste of the tenet "Life will find a way."

If the rest of our human affairs were measured and mature and stable, this would still be a risky business. Having unstable tyrants convinced they must "master" this technology and use it to attack others, or defend from attack (exact same research), leads to the Russian model of stockpiling hundreds of tons of Anthrax or worse, in the delusion that bio-warfare would be controllable or could be "won".

There are good odds the viruses and fungi and insects will win, not so good for humans.

Life is built with interactions with emergent properties on multiple levels, and we tend to think of "machines" at one level with only one function. But genes don't work like machines, they work like cooperative swarms.

Bio-warfare research has a "life of its own" that should already put us on alert that it is way easier to create things that "might as well be alive" than we think. Since we cannot stop it, we are committed to trying to get ahead of it and get the reins back, which means we should pour billions into understanding the world that the Santa Fe Institute has pioneered - massive interactions, how they go good, and how they go bad.

It becomes clear very quickly that, with complex systems, by the time you realize you "shouldn't have done that" it's too late. Experience is something that comes just after we need it.
For very high-stakes mistakes, that's too late. If we keep gambling with the whole planet on the table, sooner or later we'll lose one turn.

One is all it takes.

12/17/2007 6:07:22 AM
=========

Actually, all the research on high-reliability systems like nuclear power plant control rooms shows that the maturity of the social system is what makes or breaks the technology-based system. Psychologically safe environments are needed for people to raise their hand, without fear of reprisal, and question what the heck is going on.

What we have instead is a whole culture used to using fear as a workplace and political context to "get things done", as described by Harvard Professor Amy Edmondson.

The Shuttle Challenger exploded because of an "o-ring" problem that the project engineers knew about; they had in fact gone in that day to tell the boss to tell the White House that it was too cold to launch safely. They all lost their nerve under workplace pressure to "deliver" so the President could talk to an orbiting teacher during the State of the Union address. She did, in fact, leave a message for us about what happens when we don't listen -- but, I guess we're still not learning that lesson.

Further reading

The classic paper in this field is Jay Forrester's congressional testimony:
"The Counterintuitive Behavior of Social Systems",
http://web.mit.edu/sdg/www/D-4468-2.Counterintuitive.pdf

Quoting the abstract:

Society becomes frustrated as repeated attacks on deficiencies in social systems lead only to worse symptoms. Legislation is debated and passed with great hope, but many programs prove to be ineffective. Results are often far short of expectations. Because dynamic behavior of social systems is not understood, government programs often cause exactly the reverse of desired results.
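Forrester's claim that programs "cause exactly the reverse of desired results" can be illustrated with a toy stock-and-flow simulation. The model below is a hypothetical goal-seeking system of my own devising, not one of Forrester's: we correct a stock toward a goal, but each correction is based on a measurement that is several steps stale, which is the signature of real social systems:

```python
def simulate(adjustment_rate, steps=60, delay=3, goal=100.0):
    """Goal-seeking stock corrected toward `goal` each step, but
    acting on a measurement that is several steps stale."""
    stock = 0.0
    history = [stock] * (delay + 1)   # pre-fill so early reads see the start
    peak = stock
    for _ in range(steps):
        perceived = history[-(delay + 1)]          # stale reading
        stock += adjustment_rate * (goal - perceived)
        history.append(stock)
        peak = max(peak, stock)
    return stock, peak
```

With a gentle adjustment rate the stock settles near the goal; with an aggressive one, the same well-intentioned correction overshoots wildly and produces growing oscillation -- the "counterintuitive" outcome Forrester describes, where trying harder makes things worse.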

Another quote from the Washington Post article is this:

"We're heading into an era where people will be writing DNA programs like the early days of computer programming, but who will own these programs?" asked Drew Endy, a scientist at the Massachusetts Institute of Technology.

How true that is. I've been programming computers for over 40 years, and agree that the programs they write will be exactly like the "single-threaded" programs that mess up our airline reservations and everything else. In fact, a look inside someplace like a hospital reveals the workings of multiple legacy computer systems cobbled together in the absence of any fundamental theory at all of how many interacting things should be structured in order to be reliable. Thirty years of computer science research on "distributed operating systems" and how to build in reliability has had close to zero impact on the quick-and-dirty, cut-corners-now-and-we'll-debug-it-later model that vendors find locally profitable, but that always breaks down, producing, ta da!, more profitable rework. As a business model it's very popular; as a way of getting reliability, we have all seen the results. This is the culture we expect to "program" our genes? I'm not rushing to sign up.

The article quotes someone on the "unprecedented degree of control of creation" that the DNA technology gives us. Right. This is about the degree of "control" that a Labrador Retriever on your lap in the car at rush-hour has -- yes, it can turn the steering-wheel, but I wouldn't use the term "control" for what happens next. If you think our economy and business development and health care system are "under control", then maybe you would think genes could be "controlled" the same way - and they can, with about the same results.

Sadly, control requires maturity and depth of understanding, instead of simply strong muscles and a short attention span. I wish it were our strong suit as a nation, but see little evidence that it is, or even that it is valued or desired as a long-term goal.

We have instead young children playing with the cool gun they found in daddy's nightstand.

Oops.

======= Some after-thoughts:

Unlike the video games and computers this generation grew up with, life does not always have an "undo" button.

The core task of a civilization is to capture the wisdom we finally learn too late, and get it into a form that modifies the behavior of the next generation so those same lessons don't have to be learned all over again.

The hardest part of that task is that the next generation typically doesn't want to take advice from old people about situations the village elders seem way too concerned about - like, not going into debt over your head, you know, crazy stuff like that.

George Santayana said "Those who cannot remember the past are condemned to repeat it." I'd modify that slightly and add "Those who cannot learn from near-misses will someday not miss."

Each time we don't learn this, as a society, the costs go up. The biggest unknown in "the Drake Equation" about odds of there being other intelligent life in the galaxy that we could detect with radio is how long a civilization survives after it has gotten to the point where it has that much technology. The complete absence of any detectable signals from 100 trillion worlds "out there" suggests this is a pretty small number of years -- maybe under 200 years.
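For reference, the Drake equation is just a product of factors, which makes its sensitivity to L, the lifetime of a detectable civilization, easy to see. The parameter values below are illustrative guesses of mine, not measurements:

```python
def drake(R_star=1.0, f_p=0.5, n_e=2.0, f_l=1.0, f_i=0.01, f_c=0.1, L=200.0):
    """Drake equation: N = R* x fp x ne x fl x fi x fc x L, the expected
    number of detectable civilizations in the galaxy.
    Every default value here is an illustrative guess, not a measurement."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L
```

Every factor but L is bounded between zero and a few; L can plausibly range over many orders of magnitude, which is why it dominates the answer -- with these guesses, L = 200 years yields a fraction of one civilization, while L = 1,000,000 years yields a thousand.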

At the rate we're going, we're heading towards adding one more point to that data set.
Learning how to learn from our mistakes and our own past seems to be as important a problem as global warming, but actually more urgent, because time is running out a little faster on the 400,000 ways, besides global warming, that we can end human life on the planet.

Humans are remarkably inventive, and if every weapon and sharp object on the planet vanished, they'd find ways to attack each other with stones. Instead of tackling each symptom like global warming or genocide or terrorism, it would seem wiser to track further upstream and find the root-cause problem for why people are driven to fight, and fix that.

======================================

More further reading:

On High Reliability Organizations, which are sobering: they try very, very hard not to have accidents, and still fail from time to time:

http://www.highreliability.org/

I'm sure the US military tries very hard to keep nuclear weapons under control. Even that intense level of attention isn't enough to do the job 100% of the time, illustrating John Gall's law that "complex systems simply find complex ways of failing."

"Honey, I lost the nuclear weapons"

The US Institute of Medicine on how much the social relations of front-line teams matter when your job is to get reliability in hospital care:

Crossing the Quality Chasm and other links

=========================
Photo credits :
Oops (car) by
estherase
US Space Shuttle by
Andrew Coulter Enright

Monday, July 16, 2007

When and how should we question authority?

A New York Times piece today is relevant to thinking about Toyota's Production System, as well as to topics of mindfulness, high-reliability, and how to teach and reach new MPH students that I discussed a few weeks ago in "What I learned at Johns Hopkins last week," where I bemoaned the fact that the students "just sat there, unresponsive."

The Times article, by Norimitsu Onishi
Japan Learns Dreaded Task of Jury Duty
NY Times July 16, 2007

Japan is preparing to adopt a jury-style system in its courts in 2009, the most significant change in its criminal justice system since the postwar American occupation. But for it to work, the Japanese must first overcome some deep-rooted cultural obstacles: a reluctance to express opinions in public, to argue with one another and to question authority.
Well, that certainly sounds like the class I was in. What insights can we gain from this cross-cultural view?
They preferred directing questions to the judges. They never engaged one another in discussion. Their opinions had to be extracted by the judges and were often hedged by the Japanese language’s rich ambiguity. When a silence stretched out and a judge prepared to call upon a juror, the room tensed up as if the jurors were students who had not done the reading.
Well, in my case it's likely that, literally, the students had not, in fact, done the reading, and were in no rush to call attention to themselves and invite a follow-up question that would reveal that fact. And they were reluctant to ask a question that, all by itself, would show they hadn't done the reading.

One more snippet is worth quoting:

Hoping for some response, the judge waited 14 seconds, then said, “What does everybody think?”

Nine seconds passed. “Doesn’t anyone have any opinions?”

After six more seconds, one woman questioned whether repentance should lead to a reduced sentence....

After it was all over, only a single juror said he wanted to serve on a real trial. The others said even the mock trial had left them stressed and overwhelmed.
So, I for one will be watching with great interest to see how this evolves.

One point I have to note regards our country's historical efforts to implement abroad things that work for us at home. Sometimes, with our 230 years of experience, we seem like a teenager confidently giving advice to 5,000-year-old civilizations -- maybe akin to a 2.3-year-old trying to advise his 50-year-old parents on how to run the household.

Maybe things are as they are for a good reason. Maybe messing with a cultural system we don't even pretend to understand, and casually planning to replace one part of their system with one that "works for us," will have, shall we say, "unintended consequences."

As I said there, the classic paper in this field is Jay Forrester's congressional testimony:
"The Counterintuitive Behavior of Social Systems",
http://web.mit.edu/sdg/www/D-4468-2.Counterintuitive.pdf

Quoting the abstract:

Society becomes frustrated as repeated attacks on deficiencies in social systems lead only to worse symptoms. Legislation is debated and passed with great hope, but many programs prove to be ineffective. Results are often far short of expectations. Because dynamic behavior of social systems is not understood, government programs often cause exactly the reverse of desired results.

I am deeply concerned not just about this context-blind approach to trying to walk in and transform existing cultures -- something that, as far as I can tell, we are not very good at -- but about extensions of this mental model to deciding we are going to start tinkering with DNA.

My daughter recalled a famous exchange, usually attributed to China's Premier Zhou Enlai, who, when asked in the early 1970s what he thought of the French Revolution, replied: "It's too soon to tell."

I can't help but note that W. Edwards Deming came up with key quality improvement ideas decades ago in the USA, which was totally uninterested in them, so he went to Japan, where he was welcomed as a hero and passed along ideas adopted by Toyota that have directly led to Toyota's impressive performance. So, maybe Japanese culture had some positive aspect to it that we should be careful not to damage when adding our new "jury duty" feature.

The systems literature shows that it is generally impossible to change just one part of a complex living system without impacting all the other parts. Living things are not machines, with sub-assemblies we can just remove and replace with the latest version. This has the feeling of someone removing the propeller from a small private plane and installing a jet turbine in 98% of the cabin space, since "jets are better than props." Hmm. Not always.

Maybe a better example, one I recall actually happening, is the period in the mid-1970s when the US was complaining about Japanese "barriers to entry" blocking US car sales in Japan.

GM was offering a car that had the steering wheel on the left side (the Japanese, like the British, drive on the left side of the road and have steering wheels on the right side of the vehicle). The cars were also too large to fit down most alleys and many streets in Tokyo, too large to park anywhere, and guzzled gas that was running at ten times the US price. The interiors were scaled for six-foot Texans, not five-foot Japanese. And the car had no place to put a bicycle for the rest of the commute once a parking place was found. The US attributed low sales to wrongful Japanese barriers to free trade. The reality was that most Japanese couldn't use that car if you gave it to them for free.

The basics of marketing once upon a time, when I went to business school, were "know your customer" and "be driven by what the customer values, not what you think they should value." In "lean manufacturing" this would be called "pull" or the "value chain".
But, then, we were too busy assuming things and talking to shut up and listen.
We were violating Japanese traditions from nemawashi (walking around and gaining consensus before taking action) to the "lean" concept of genchi genbutsu (going down to the floor to see for ourselves before making pronouncements from afar about what is wrong).

As even Wikipedia realizes:
Genchi Genbutsu (現地現物) means "go and see for yourself" and it is an integral part of the Toyota Production System. It refers to the fact that any information about a process will be simplified and abstracted from its context when reported. This has often been one of the key reasons why solutions designed away from the process seem inappropriate.
So, I'm not sure this particular government policy has a built-in learning curve, or that it won't blow up in our faces when we turn it on. Maybe this has been deeply considered. Maybe not.

If the objective is to damage Japan's culture and gain a competitive edge, or at least remove their edge over us, then I suppose random tinkering might be a good idea. If the objective is the much harder task of improving the functioning of a 5000-year-old civilization, it might be good to be mindful of any indications that our mental model doesn't match their reality, so that we can stop what we're doing, address the mismatch, and update our model with more current information. That's the key to high-reliability performance, and to avoiding nasty surprises. The article gives no indication that the policy implementation is contingent on its actually working in practice.

In the classic PDCA (Plan, Do, Check, Act), there is that "C" step, "check" that what we did had the desired effect, not an unexpected contrary effect, in case we missed some crucial fact.

That's not a bad model.
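A minimal sketch of that loop, with the "C" step explicit. The function names and the back-out policy here are my own illustration, not a standard PDCA implementation:

```python
def pdca_tune(apply_change, measure, target, step=1.0, tol=0.5, max_cycles=20):
    """Minimal PDCA loop (illustrative sketch):
    Plan a change, Do it, Check the measured effect against the target,
    and Act by keeping the change or backing it out."""
    value = measure()
    for _ in range(max_cycles):
        if abs(value - target) <= tol:
            break                                    # close enough to the goal
        planned = step if value < target else -step  # Plan
        apply_change(planned)                        # Do
        new_value = measure()                        # Check
        if abs(new_value - target) > abs(value - target):
            apply_change(-planned)                   # Act: undo a harmful change
            new_value = measure()
        value = new_value                            # Act: keep a helpful change
    return value
```

The point is structural: the change is not considered done until a measurement confirms it moved things the right way, and a change that makes matters worse gets backed out rather than standardized.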


W.