Saturday, November 06, 2010

Blind plus stubborn is not a winning hand

(A short piece continuing the reflection on some of the reasons why Big IT projects, such as EHR, tend to have such a Big Failure rate.)

It is the nature of large-scale IT projects, such as Electronic Health Records, that social processes have to change, from the micro scale (multiplied by the number of people affected) all the way up to the macro scale.

These processes involve feedback within levels, and between levels, so there is substantial "rise time" and "settling time" under the best of conditions, when a sudden change (a.k.a. "intervention") is applied to the system, even if the structure of the system itself is not changed. It is like a huge mobile, but one with some parts which can change very rapidly, and other, larger, massive parts, which often only move slowly after effects have had a chance to percolate through and, as it were, be digested by the overall system.
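That "rise time" and "settling time" behavior can be sketched with a toy feedback loop. This is a deliberately simplified numerical illustration, not a model of any real organization; the gain value is arbitrary, chosen only to make the lag visible:

```python
# Illustration of "rise time" and "settling time": a simple first-order
# feedback loop, nudged by a step change, takes many cycles to settle,
# even though the structure of the loop itself never changes.
target = 1.0   # the new "ground rules" after the intervention
state = 0.0    # where the system starts
gain = 0.3     # how strongly each cycle corrects toward the target

for cycle in range(1, 11):
    state += gain * (target - state)
    print(f"cycle {cycle:2d}: {state:.3f}")

# Even after 10 full cycles, the state is still a few percent short
# of the target -- settling takes multiple rounds of cause and effect.
```

The point of the sketch is only that adaptation is inherently multi-cycle: no single pass through the loop gets the system to its new equilibrium.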

Very much like the Christchurch earthquake,  you can think things have settled down when suddenly there is a new strong aftershock.  And like Christchurch,  any given aftershock may be smaller in absolute terms than the first earthquake, but it may also be much shallower and closer to home and therefore MORE damaging than a large quake further away.

It always takes, it MUST take, a substantial period of time, at least three full cycles of cause and effect in the feedback loops, before all the parts of the system have managed to "feel out" how the ground rules have changed, and to learn the NEW moves required to accomplish the OLD outcomes, since the old way of doing things no longer works. This kind of learning and re-synchronization (phase lock) truly demands multiple cycles: the new state of affairs must first be recognized as such, rather than dismissed as some accident of circumstances, before it can be appropriately counter-weighted, understood, and ultimately anticipated and reflected in the new moves that, in this new context, gracefully produce the desired result.

It is sort of like learning how to walk after having one leg suddenly shortened by 10 centimeters.  There will continue to be surprises until most uses of that leg have been experienced in the new circumstances at least 3 times apiece.

All of this takes time, and very little of it can be planned for specifically. Like earthquakes, the whole system will NEED to shift, but exactly where it gets hung up, and exactly when the stress gets large enough for it to break apart and slip along a fault line, is really impossible for anyone to predict.

The implication of this which matters a great deal for planning the journey is that the route CANNOT POSSIBLY BE KNOWN in advance. No amount of central planning can cover every possible way the system will shake down as it restabilizes. Even when the system appears to "be over it", it can suddenly slip along some new fault line in some huge way.

So, it is not possible to say in advance that THIS is the exact roadmap that will be followed, and the exact sequence of events that will occur during "implementation". It is not even conceptually possible to say that. This is technically a "complex adaptive system", which means it contains stored energy. Its response to being pushed could be in any direction, not just the direction it was pushed. It could respond with any amount of energy, not just a small amount following a small push. And it could respond in waves: a pulse of apparent response, followed by a false quiet (the eye of the hurricane) as the implications of some change become gradually apparent to people, followed by a resurgent groundswell of resistance, a hornets' nest breaking loose, as people fully realize what has changed in addition to what was expected to change.

Also, given the scale of the system, no one actually knows all the things the system does, or even how it accomplishes them. The cartoon diagrams, with boxes and arrows, of what processes exist are generally after-the-fact guesses, because no human being ever designed those details; they just sort of ended up that way as clever and innovative humans sorted out what was necessary and effective to get done the visible things they needed to get done.

The system, any system, will be full of "shortcuts" -- for example, Mary makes a daily trip from clinic A to B, and agreed, some time ago, that as long as she was going that way, she could take along the mail. The fact that Mary is part of the mail system is not on any diagram or chart, since it's a convenient shortcut. However, since it is not shown, it is likely that, in a rearranged system, Mary will no longer make such a daily trip. The impact is that the mail is suddenly no longer delivered by 1 PM, but now goes the long way and arrives at 4 PM, and things that used to take one day to complete now take two days. To central planners, this change comes as a surprise, and you can see why. No social or biological system does "just one thing" -- they are filled with hitch-hikers who quickly figure out shortcuts and go along for the ride. The depth of reliance of visible system outcomes on such invisible dependencies cannot be known in advance, but the longer the system has been in existence, the more such shortcuts you can expect to have moved into place, all not shown on any organizational chart.

What this means in practice is that some central planning agent or group may say "We will do things in this order: A, B, C."  but then reality reveals that step B cannot be done before step C, due to something no one had previously realized or known.

All of this is no big deal,  provided everyone sees what is wrong and agrees to change the route of the journey to order A, C, B.

It BECOMES a big deal when there is stickiness or stubbornness in the central project management office,   which may refuse to accept the news that the completion of B has been "delayed" from what they had politically promised to some group of stakeholders.

The central command office therefore may issue a decree, a demand, that no one is allowed to work on step C until step B has been completed.     This is done in the name of efficiency,  or administrative orderliness,  or to deliver what was promised by someone to someone else.     The reality on the ground may be, however, that it is not physically POSSIBLE to do step B until step C has been done because of some previously unrealized constraint.

So an impasse occurs. This is the exact type of impasse that is the underlying shoal upon which so many ships have been shattered, battered, and reduced to wreckage.

Some central group, far from the realities or "ground truth", decrees, based on its incomplete knowledge of the situation from afar, that B must precede C. It is "obvious", based on their mistaken knowledge of the world, that this order makes sense and is "good". At the same time, the workers at the front lines directly perceive that C must precede B, because of some undocumented fact on the ground.

 If the situation is perceived as a frank and honest search for truth, done in an eyes-open way,  the new situation can be easily resolved, getting all of A, B, and C completed in a timely fashion.  But if central authorities perceive this as a situation where their wisdom and authority have been called into question and challenged by upstarts or trouble-makers or those who "resist progress", and dig in their heels and refuse to budge,  a very different outcome will result.

In reality, the latter case (perception of unwarranted resistance) appears, in my reading and experience, to be far more common than the former (acceptance of wisdom from the front lines and adjustment of the master plan to account for the new information). In fact, it's quite common, in the USA at least, for front-line workers who persist in raising such "issues" to be demoted, transferred, removed from "the table", forbidden to speak out, or actually have their employment terminated. That resolves the problem of visible conflict, and forces the problem to become the "unacknowledged elephant in the living room."

In that sort of psychologically unsafe climate, this situation repeats itself multiple times: reality becomes filled with unacknowledged elephants, the lead authority trumpets how well things are going, and things in reality get further and further from the situation being reported to the central authority.

Finally, at some point, the world collapses catastrophically -- the worst becomes unavoidably visible, there are massive calls for investigations and finger pointing and blame,  new people are selected for leadership positions, and the whole cycle begins again.

This is, by far, the most frequent way such projects fail, from what I have seen.

There's an article in the paper this morning that illustrates what this tends to look like in practice.  It may be a different situation, but then again it may in fact be a case of what was just described.

This is from today's New Zealand Herald:


http://www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=10685673

Clients feel pain of ACC cuts
By Martin Johnston and Simon Collins
5:30 AM Saturday Nov 6, 2010
New Zealand Herald Online


Physiotherapy claims paid for by ACC have slumped by nearly a quarter, reflecting the social insurer's sharp cut backs in health treatment - at a time when New Zealand's rate of injuries is continuing to climb. ..."It's certainly having a huge impact psychologically on quite a few of our members. We've had several people who have had suicide attempts relating to the pressure they have been put under."

In healthcare, the most noticeable reductions have been in GP care and physiotherapy - because these are numerically and financially the biggest sectors funded by ACC (although these patients are generally levied a co-payment)….

But some of the greatest misery for rejected patients is likely to be in the elective surgery category and especially in shoulder surgery, where a new, hard-nosed policy has forced many on to public hospital waiting lists….

Advocates and lawyers who represent aggrieved ACC claimants say they noticed a big increase in rejections of surgery applications last year and a consequent increase in the number of cases being taken through the review process….

The proportion of surgery claims that were rejected rose to 20 per cent last year, from 12 per cent the year before.

Nelson-based advocate David Wadsworth, of Access Support Services, said, "It tells us their decision-making is flawed. Once they have made a decision, no matter what evidence we provide, they won't change their decision."... Previously ACC had often backed down when given strong evidence its decision was wrong.


Again, I have no first-hand knowledge of what is going on in the sector that the piece describes, and no idea what factors are true or false or left out of the description. What does raise red flags on the play, as it were, is the language of the last two sentences quoted above.

My point is that we are surrounded in life, in all countries,  with situations that are parallel,  where facts that are "obvious" to one group are either unknown, or disputed, or irrelevant to another group, resulting in the type of perception and language (right or wrong) that is illustrated above.

If there is a mechanism to resolve these conflicts at the same speed as they arise, or faster, then things remain relatively stable. But if these types of conflicts arise FASTER THAN THEY CAN BE RESOLVED politely, the situation can "snowball" and grow rapidly out of control, with a massive increase in the conviction of "each side" that "the other side" is filled with morons or idiots or enemies or obstructionists or whatever pejorative is popular for the day.

This is the situation that has been observed in many, if not most, if not ALL "Big IT" projects.

My assessment of the situation is that the extreme difference in perceived reality in such dynamic, unpredictable, and hard-to-see fields as the social implications of software "installation" lends itself to such difficulties.  At least with a brick-and-mortar building,  everyone can go look and at least agree whether something has been built or not.  With software, people cannot even agree whether something has been built, let alone whether it is built well, functions properly, has few deep hidden flaws,  is fit for particular purpose, etc.

Conclusion -- for the SOFTWARE to "work", it may be necessary to have social and political processes in place that head off this kind of escalating battle over disputed facts, particularly disputes over whether something "works" or "doesn't work." This situation is full of tempting opportunities to create the illusion of progress by silencing "opposition" forces with authoritative cudgels, versus investigating to determine what the true facts on the ground are.

ADVICE -- if this type of political dispute is common,    this is not the right landscape in which to attempt to "install" sociotechnical software, such as an Electronic Health Record.  There are far too many surprises lurking and triggers for such breakdowns in changing social processes and all the invisible "epigenetic" processes that are not on anyone's diagram, or radar, or list of what exists today.

So, the question I'd suggest is this:   DOES a local culture intended for a BIG IT project (such as an electronic health record) HAVE a well-functioning process that detects such conflicts between "facts" and "authority" and that is capable of resolving them gracefully at least as fast as they spring up?

IF it doesn't,  then fixing THAT problem in governance and the "cybernetic loop" has to be done BEFORE proceeding with the "software installation."  Except, of course, there will be those who say, they agree it's an issue, and they'll get to it AFTER the software is installed.  ("it must go A, B, C, and occur within 5 years." )  And so it begins.....

It's important, for the sake of progress, for both sides in such disputes to realize that they are operating with totally different sets of information about what is "TRUE". Generally, and sadly, BOTH sides of such disputes can marshal arguments and data to "prove" their own point, and BOTH sides feel they have the "high ground" of moral correctness. Sadly, then, mistaking differences in facts for differences in INTENT, both sides end up making unwarranted generalizations about the INTENT, COMPETENCE, FITNESS, and possibly genetic makeup of the other, polarizing a resolvable situation into one dug into battle lines of pride and integrity and identity.

This changes the "search for what is right" into a search for "WHO is right", which almost guarantees failure, because it now means that at least one party must be "WRONG", instead of at least one key FACT being wrong.

Big IT projects provide hundreds of triggers for this type of conflict,   a mode of failure that has nothing whatsoever to do with what kind of program is inside the shrinkwrapped box.

The extent to which this problem is widespread is hidden by a flawed process of attributing the cause of failure to specific cases or people or events, instead of looking more broadly at processes that are guaranteed to fail SOMEWHERE.

So, if Group A crashes their bus into a mountain due to a failure of cybernetic feedback, Group B may now say "well, that doesn't apply to us, we don't have mountains!" and proceed to crash their bus into large trees. Group C comes along and says "Well, that doesn't apply to us, we don't have trees or mountains!" and crashes their bus, due to the same failure mode, over a cliff into the ocean.

Etc.
My point is that EVERYONE has the capacity for this failure mode, and there ARE lessons that can be learned from prior failed projects, if you look beyond the exact details of WHAT they crashed into and focus instead on how those in the driver's seat could have been so blind that they failed to hear the loud warnings that a problem was just up ahead.

As a rule of thumb, consider this. It is known in project management that it is far more likely that something will go wrong with any given step than that everything will go right. In other words, the odds are much higher that something will prove HARDER THAN EXPECTED than that, when you try it, it proves EASIER THAN EXPECTED.
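The arithmetic behind this rule of thumb is worth seeing once. A back-of-the-envelope sketch, using made-up numbers purely for illustration: even when each individual step is very likely to go right, the odds that EVERY step goes right shrink fast as the plan grows.

```python
# If each of n independent steps succeeds with probability p_step,
# the chance that nothing at all goes wrong is p_step ** n.
def odds_all_go_right(p_step: float, n_steps: int) -> float:
    """Probability that all n independent steps go as planned."""
    return p_step ** n_steps

# Hypothetical numbers: 95% per-step success, a 50-step master plan.
p = odds_all_go_right(0.95, 50)
print(f"Chance nothing goes wrong: {p:.1%}")  # roughly 7.7%
```

With those (invented) figures, a plan that assumes everything lands on schedule is betting on roughly a 1-in-13 outcome, which is why "harder than expected" should be treated as the normal report, not the exceptional one.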

My simple question is: in your own environment, is there RESISTANCE to a message from the troops to upper management that a step is proving HARDER THAN EXPECTED? Is this message treated as normal, run-of-the-mill news about life, or is it loaded with implications about the competence and genetic heritage of the person doing the reporting?

There is a classic Dilbert cartoon in which the "Pointy haired boss" is facing an employee. The employee reports that a project is running behind schedule, and the boss screams "WHAT?", grabs the employee, and hurls him through the picture window of the 4th floor office, presumably to his death.
In the last panel the boss sits and ponders in true bafflement "Why don't they come to me sooner?"

The reality of large IT sociotechnical projects, such as EHRs, is that you really have no idea whether the project is even feasible, let alone how long it will take to accomplish. I'd suggest it is far better to have a process that everyone buys into, that at each stage gets everyone up to speed on what the latest news is and keeps everyone on speaking terms, than to have a process that attempts to deliver a known state of software on a predicted timetable and budget.

Unless you have a long successful history of predicting exactly what the timetable and budget will be for such social transformations,   don't even try.  Say -- we will spend this time, we have this budget, we will use this process, we will keep everyone informed, and we'll get done whatever we can get done -- does everyone understand that?   At each step we'll learn new things about each other and about our processes and as we discover them, we'll adjust our course to take that new information into account, so even the course cannot be stated in advance, only the process by which that course is adjusted on a day to day basis.

That approach focuses our attention on a different type of question, where there is a great emphasis on picking the ROUTE from zero "completion" to 100% "completion".

Question - Is it possible (even conceptually) to define this pathway, from HERE to THERE,   with a series of intermediate points or states that are each, in their own way,  substantial progress? Or is this a type of animal that has no intermediate states of value, in which it's ALL or NOTHING?

Of particular note and value would be a 1/100th-size version of the end product. Can we come up with a fully functioning micro-EHR that actually works, that does ONE THING of value for some people, far, far less than the total intended picture?
Here I'm picking up on John Gall's observation that "large systems that work are invariably shown to have evolved from small systems that work, instead of leaping full-blown from Zeus's head" (my paraphrase of Systemantics: How Systems Work and Especially How They Fail).

So the key question is: can we conceive of an intermediate SMALL SYSTEM, of a type that could, over time, evolve into a large system, that could WORK in its own right, and that we could put in place, then take a break for a while and watch to see what it breaks loose in the social structure that we hadn't expected? That would give us our first taste of how deeply embedded in invisible fractal space are the secret processes that "make" the "current system" capable of functioning.

Curiously, to me anyway, this conceptual class of problem (find a small version of A that works that we can build on) is not a common question in practicing IT organizations. Yet it's well known that a small victory can provide confidence, build trust, and shake out a number of small coordination problems in a low-stress world without massive meet-the-deadline time pressure, so they can find a comfortable accommodation at their own, organic level.


It IS always true that a large-scale IT system DOES require an "implementation plan." Sadly, this is often something of an afterthought. Also sadly, it usually turns out that there is NO GOOD ROUTE by which to convert people from the old system to the new one. We can't do it by building because of X. We can't do it by departmental organization, due to Y. We can't do it by service, due to Z. We can't do it by floor because of FF. Etc. The working assumption and belief is that "HALF of the EHR" would have zero value.

THAT to me is the true research question.  We need to find a very small kernel of electronic improvement that is far LESS threatening,  far less traumatic socially,  that CAN be installed to start the DIALOG with users about "does this actually work for you? What have we missed?".

  We know it's not an EHR.  That's not the point. The point is it's an approachable subset that lets us get a far better feel and growing common understanding of even the nature of the beast that is coming down the road later.

I haven't pondered in detail, or started the discussion of, what such a subset might be. It's almost certain, however, that there are coordination needs between departments that require far less security overhead and that could meet unmet needs. For example, can the different department secretaries figure out who is scheduling large events when, so that two of them don't book visiting speakers on exactly the same day? Very simple coordination systems are typically missing in organizations because they are too SMALL to capture the imagination of the IT budget process. Yet these are non-terrifying applications that won't result in patient deaths if they have glitches, that don't need massive security, and that CAN start to feel out who can use a computer and who is computer-phobic or actually unable to type, etc. Small steps. Very small steps. Unrushed. Capable of growing by contagion, not administrative decree. Web 2.0-type applications that are free, off-the-shelf, but actually helpful IN THE SMALL.
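To make the scale of what I mean concrete, here is a sketch of just how tiny such a coordination tool could be. Everything here (the department names, dates, and function name) is invented for illustration; the point is that a useful first system can be a few dozen lines, not a multi-year program:

```python
from collections import defaultdict
from datetime import date

# A deliberately tiny "coordination system": departments record planned
# events, and we flag any date that ends up double-booked.
def find_conflicts(events):
    """events: list of (date, department, description) tuples.
    Returns {date: [(dept, desc), ...]} for dates with 2+ events."""
    by_date = defaultdict(list)
    for day, dept, desc in events:
        by_date[day].append((dept, desc))
    return {day: entries for day, entries in by_date.items() if len(entries) > 1}

# Hypothetical shared calendar entries:
planned = [
    (date(2010, 11, 20), "Cardiology", "Visiting speaker"),
    (date(2010, 11, 20), "Oncology", "Visiting speaker"),
    (date(2010, 11, 27), "Radiology", "Equipment demo"),
]

for day, clashes in find_conflicts(planned).items():
    print(day, "is double-booked:", clashes)
```

Nothing about this needs patient data, heavy security, or an IT budget line item, which is exactly what makes it a safe place to discover who can and can't use a computer.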

These can form the basis for an EMERGENT solution of successively greater buy-in and willingness to use the technology by the "LITTLE PEOPLE" who are typically simply not on the planning radar. Is there a tiny island of application that can be delivered to the nurses? To the nurses' aides?

Small things,  which over time can slowly shift the climate and culture from computer phobic to computer-loving,  can turn out to be HUGE things.  If they are put in early, they have 3-5 years to work their pre-soak magic,  daily altering work patterns a little bit more every day.  These can pave the way to the much larger EHR delivery later. People can learn the names and phone numbers of the IT support people they will be working with. Etc.

Then, at every stage, people can say "Well, the overall project delivery has been delayed, but at least we got X, which is actually turning out to be quite helpful -- we use it every day now, so overall, we're not unhappy."

So, for example, any decent Electronic Health Record system that includes patient care plans will have a built-in "tickler" or "reminder" system. When a patient does something, staff can also set an alarm: this patient will need to come back 3, 6, 9, and 12 months from now, so can the system automatically remind them (AND US!) that this should happen, verify that it DOES happen, and let Helen in Scheduling know if it does NOT happen so we can follow up?


This kind of reminder system is pretty generic, and I can't see a reason why it is even heavily dependent on decisions made about the REST of the EHR. Architecture-wise, the tickler and reminder system should be designed, developed, and tested as a separate module, capable of standing on its own. So, then, why not implement IT FIRST, long before the EHR, applied to other tasks (outside the EHR) that every person and department needs to keep track of and be reminded of now? That gets THAT step out of the way, out where it can be "burned in" by use and stop being "green wood", and under the learning belt of everyone, so that they all already know how to use THAT portion of the EHR and it is no stranger to them.
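To show how little such a standalone module actually needs to assume about the rest of an EHR, here is a minimal sketch. The class and method names (Tickler, schedule_followups, due_on) are invented for illustration, and "N months" is approximated as N x 30 days to keep the sketch short:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Reminder:
    subject: str
    due: date
    done: bool = False

class Tickler:
    """A standalone reminder module: register follow-ups once,
    then ask each morning which reminders have come due."""

    def __init__(self):
        self.reminders: list[Reminder] = []

    def schedule_followups(self, subject: str, start: date, months: list[int]):
        # Approximate "N months later" as N * 30 days for this sketch.
        for m in months:
            self.reminders.append(Reminder(subject, start + timedelta(days=30 * m)))

    def due_on(self, today: date) -> list[Reminder]:
        return [r for r in self.reminders if not r.done and r.due <= today]

t = Tickler()
t.schedule_followups("Patient X checkup", date(2010, 11, 6), [3, 6, 9, 12])
overdue = t.due_on(date(2011, 3, 1))   # only the 3-month reminder is due
print([r.subject for r in overdue])
```

Note that nothing in it touches patient records, security layers, or any other EHR decision, which is the whole architectural argument for building and burning it in first.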


Similarly, there are other pieces of functionality of an EHR system that can be teased out, and delivered far earlier than the full EHR rollout of patient data.    Every one of these can build confidence, trust, and deepen the use of the help desk, support staff, hardware support teams, etc. Every success here on small things builds confidence and momentum and lowers resistance to the larger project goals.

BUT, at the same time, every tiny system is guaranteed to reveal show-stopping problems that can get fixed now. For example, you may discover that the network lines to Clinic X always drop to totally useless performance every Tuesday afternoon from 3 to 5 PM. You have time to investigate why and fix it long before time-critical patient data runs into EXACTLY THE SAME PROBLEM. These are all problems you will need to fix anyway as part of the EHR project, but this reveals them much earlier in the day, when there is far more lead time to fix them on a convenient schedule. It load-balances the delivery instead of putting it all in a massive peak near the end of "the project", so you can ramp up staffing gradually and burn in the staff working relationships instead of burning them OUT in crisis mode during the last three months of delivery.
