Friday, November 05, 2010

Social Thermodynamics

My undergraduate work was as a physics major, which has colored my approach to problem solving. Physicists prefer not to waste their own energy on useless computation, so when facing a question of "How would X happen?" they usually fall back and first ask "Is it even possible for X to happen?" If it's not possible, then there's no point in straining your brain to figure out HOW it will unfold.

So, for example, suppose a set of balls with given locations and speeds is at the bottom of a valley, the balls frequently colliding with one another, and you're asked to solve for which ball will be the first to get knocked out of the valley. The mathematics of many balls bouncing off each other is quite complex. However, it may be very simple to add up the total energy kicking around down in the valley, which is just the sum of each ball's energy, and see whether that total is larger than the energy needed to lift a ball out of the valley. If it's not, you can stop -- no combination of bounces will ever get any ball out of the valley.
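That feasibility test can be sketched in a few lines of code. This is a toy illustration of the energy-accounting argument, not anything from the original post; the masses, speeds, and rim height are all made up.

```python
# Toy feasibility check: can ANY ball ever be knocked out of the valley?
# We never simulate the collisions; we only compare the total mechanical
# energy against the energy needed to lift the lightest ball to the rim.

G = 9.81  # gravitational acceleration, m/s^2

def total_energy(balls):
    """Sum of kinetic + potential energy over (mass_kg, speed_m_s, height_m)."""
    return sum(0.5 * m * v**2 + m * G * h for m, v, h in balls)

def escape_possible(balls, rim_height, lightest_mass):
    """True only if the pooled energy could lift the LIGHTEST ball to the rim."""
    return total_energy(balls) >= lightest_mass * G * rim_height

balls = [(1.0, 2.0, 0.0), (1.0, 1.0, 0.5), (2.0, 0.5, 0.0)]
print(escape_possible(balls, rim_height=10.0, lightest_mass=1.0))  # prints False
```

If the check returns False, there is no point computing any trajectory: no sequence of bounces can create the missing energy.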

Physics has these laws of "thermodynamics" which state, in layman's terms, that

  • (1) you can't win, 
  • (2) you can't even break even, and 
  • (3) you can't get out of the game.
Paraphrased: things in a closed system (with no energy coming in from outside) always go, net, downhill. If you want a particular thing, say a plant or an animal or a person, to grow MORE complex and LARGER, you MUST bring in energy from outside (food, sunlight, etc.). There is no point in looking for a free-lunch strategy where the system will improve "on its own."

Systems, like sailboats or sailplanes,  always go and only go, DOWNHILL.

Now, of course, it turns out that sailboats can sail UPwind, and sailplanes (gliders) can often "ride a thermal" and climb up into the sky -- but they do these things by tapping into the external energy supply (the wind). A sailboat on a calm day will go nowhere. A sailplane / glider sitting on the runway will not suddenly take off on its own and start climbing into the sky.

Now, holding that thought in mind,   consider the question of a system of people and machines which is a "health care system" that patients come to and are treated within.    Suppose we would like this system to become "more organized" or "more harmonious" or have "less duplication" or operate "more efficiently".    We are thereby asking a system, controlled by natural laws,  to move from a state of low energy and low organization to high energy and high organization.    In other words, we want the system,  now sitting in an energy and organizational valley,  to move, after some intervention, to a higher state up on some peak of energy and organization.    We are asking it to move uphill.

So, before we strain our brain and budget trying to figure out HOW this can occur,  we first need to save ourselves a great deal of time and money and ask WHETHER this COULD conceivably occur.

I'll affirm that, to a physicist's eyes, the answer is obvious and not in question. The system will go "uphill" if, and only if, energy is supplied to it from outside. There is nothing the system can do, within its own borders, that will move it uphill. We don't need to know who will push on whom to do what. We don't need to know what shape or color or flavor or vendor or kind of "computer system" might appear in the middle of the activity. What we need to know is: where is the social energy coming from that could EVER lift the system to a higher state of being?

If someone says "We will give you this free software, and then things will be better" you can be sure the answer is,  "No, that won't change anything.  In fact, by itself, that will only make things somewhat worse."   ("you can't even break even.")  

For the system to "get better", for life to "get better",  LIFE ENERGY has to be poured into the system FROM OUTSIDE it.

One way to accomplish that is for human beings to come into the system, pour in time and effort and caring, soak up into their own bodies the costs of disorder and pain, and then "go home" and pour that pain out on their families and neighbors and other activities. The people go home, "recharge," gather will-power and energy to "face a new day," then "go to work" and pour in their love and attention and caring energy, which leaves them empty of it -- their life energy sucked out of them into the maw of "the system" -- and the system could grow and sustain itself that way.

Usually, this amount of additional pain, suffering, patience, and energy-sucking behavior is not well advertised, since it is not a great way to SELL people on the "new computer system" which, it is said, will "make things better" -- and which, if things improve due to all the efforts of the staff, will "claim credit" for the improvement.

It's critical to realize that the computer system by itself is inert, lifeless, and cannot possibly be a source of either energy or order. By itself, the computer system is only a drain on energy and effort. BUT, it may provide an occasion for the hard, caring, diligent work of someone to be "captured" and "preserved" and "transmitted" to another person elsewhere in space and time, who is then "saved" the effort of digging out all that information themselves. For example, an army of clerical staff and nurses could put great effort into getting accurate and timely data INTO the system, which could then result in a small (net) savings of energy and time on the part of, say, a doctor, who used that information to treat a patient. The "computer system" per se does not save the organization (net) work -- it only captures the work of many people at one point in space and time, often hidden out of sight in back offices, and conveys the resulting value to be utilized at a later point in space and time, whereupon the "computer system" typically CLAIMS CREDIT for saving the time of the doctor using the system.

In reality, however, just as word processors removed most "secretaries" from the workforce on the theory that it would "save money" to have executives do their own typing, so electronic health records typically end up asking the DOCTORS to do their own typing and data encoding, instead of leaving these tasks to staff specialized in, and trained for, doing them more efficiently.

In other words, this is exactly the opposite of the whole concept of saving money and time by having specialists, with special equipment and training, do steps (e.g., typing) very cost-effectively; instead it CLAIMS to "save time and effort" by asking doctors, instead of caring for patients, to do the data entry, typing, and data-coding. It is argued that this can be done at the same time as the doctor is "seeing the patient," but, barring transparent screens, it turns out to be rather difficult to SEE a patient and fill in little fields on the screen at the exact same second. Since the total time to "SEE" the patient is fixed at, say, 8 or 10 or 12 minutes, the NET time the doctor has left to actually take their eyes off the screen, refocus, regain local context, LOOK at, and actually SEE the patient is diminished by the length of time they spend attending to data entry -- under the banner of "saving time and effort" for .. um... er... so that doctors have more time to see patients?

Well, perhaps it is the hospital which is "saving money", because now they no longer need the army of transcriptionists and data-coders because the nurses and doctors are doing this added work, at no additional pay,  stealing the time, focus, and energy away from actual patient care.   Actually, again, there is no free lunch,  and doctors and nurses are already fully tapped out and accounted for,  so for them to put effort and energy into data-entry means it is coming OUT of and being removed from "patient care".

Only by the clever trick of broadly including data-entry into the category "health care" can it be said that health care has not been diminished by this additional data-entry burden.

For doctors or nurses to benefit from what is "in the record" (thanks to prior efforts by other people, not the computer)  they have to be able to acquire, understand, and believe the record in a short enough period of time to be usable in such a short-burst visit setting.

It is not at all clear that this benefit is produced by Electronic Health Records. I've personally been to multiple "encounters" where I first met with a nurse or an intern (or several interns) who asked me a set of questions which I dutifully answered, followed shortly thereafter by a meeting with the doctor, who asked me essentially the same questions as if I had never talked to the prior people.

I don't think my experience is unique. In fact, I have trouble finding anyone who has had a different experience. OK, so hit the pause button and let's stop and look at this.

The NURSE asks all these questions because they have been assigned to enter this data "into the system." The DOCTOR asks these SAME questions because of at least one, and often all, of the following:

1)  The nurse hasn't had time to enter it into the computer yet.

2)  It takes the doctor LONGER to read the nurse's typed notes than it does to simply ask the questions over again.

3)  The answers that the doctor gets are DIFFERENT from the answers that the nurse gets to the very same questions.

4)  The information the doctor gets is not from the CONTENT (data) of the answer, but from the meta-contextual, epigenetic components of the patient's reply. The doctor can distinguish an enthusiastic "yes" from a reluctant "yes" from an instantaneous "YES!" from a delayed and unsure "... um... yes, I guess so... whatever that means... I guess."

5)  The doctor is not engaged in gaining INFORMATION or DATA from the patient, but in establishing a conversational context and relationship with the patient, which nobody else can do for them.

Now, someone might ask, as I am now: if the doctor is going to ask these questions anyway, why isn't the nurse there at the same time, writing down the replies, so that the total length of time the patient is seen is decreased? Or, if the nurse's being there would somehow disrupt the intimacy of the doctor-patient relationship (and the computer doesn't?!), why can't the nurse be behind one-way glass, or at the other end of a TV monitor, eavesdropping, capturing, and recording the encounter for instant replay if some item is unclear? Why must the patient always have to put up with answering the same questions twice? (I await enlightenment in the comment section.)

Or,  wouldn't it be even more powerful emotionally for the patient, in perceiving a CARING environment (recall the advertising phrase "health CARE") if BOTH parent surrogates, a male and a female in white coats,  were there TOGETHER equally concerned with the patient, equally asking questions,  learning from each other's questions as well as from their own?

At the same time, a third party, a coding specialist, could be on the other end of the TV link, eavesdropping and prompting for more information as they attempt to determine the correct unique primary and secondary billing codes for this patient's symptoms, condition(s), diagnoses, and treatment. For billing purposes, the patient cannot be "maybe X, maybe Y." A false sense of certainty must be affirmed -- "the patient is / has X" -- which psychology teaches us is fraught with the downsides of labeling, categorizing, and stereotyping the patient, so that further data flow is FILTERED largely toward that which CONFIRMS this initial diagnosis. A toss-up call between equally likely diagnoses becomes increasingly supported by data which support it, as data which don't support it are discarded out of hand, increasing certainty in the "correctness" of the initial diagnosis. In psychological terms, there is "cognitive dissonance" occurring, which protects the physician from the stress of fretting about possibly being mistaken on something so crucial.

Sadly, this does NOT protect the patient against the downside of being mis-diagnosed in such a way that the incorrect diagnosis is now STICKY and almost impossible to shake, regardless of how much contrary or dissonant data now comes into the picture. MDs are far less likely to override another MD once "a diagnosis" has been made.

In other words, here are some downsides of electronic health records. The patient has to be categorized into pre-defined slots, which officially the MD can ignore, but which there is strong administrative pressure to adhere to. The slots, by their nature, constrain diagnoses to a certain mental model of medicine. Some slots are far more attractive than others, in terms of how much money the hospital or clinic or doctor will make by picking that choice, and this information is often known to the doctor, resulting in a strong bias pressure on their answers.

Some studies have shown that physicians tend to pick the FIRST item in a list of alternatives they are presented. This can easily be measured by presenting possible medications for a given diagnosis in different orders to different people, or on different days. It is somewhat scary that the drugs MDs select are determined, even in part, by the order in which the possible drugs are listed.

UNCERTAINTY is not captured gracefully, and "boolean" (yes/no) fields also tend to force an uncertain reply into a falsely recorded certain answer. The uncertainty is not removed by differential diagnosis, but by time pressure and social pressure, perhaps invisible, to stick with what's already there as the lowest-risk, lowest-conflict, fastest choice for an MD. After, say, 4 other physicians have similarly "concurred" with the first physician, he or she is now under strong pressure to stick with the initial diagnosis and not consider bringing it back into question. After all, others have agreed with it.
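One way to see the problem is to imagine what a field that did capture uncertainty might look like. This is a hypothetical sketch, not any real EHR schema; every name in it is invented for illustration.

```python
# Hypothetical sketch: an answer field that preserves uncertainty and the
# patient's actual words, instead of flattening both into a bare True/False.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClinicalAnswer:
    value: Optional[bool]   # True, False, or None (genuinely undetermined)
    confidence: float       # 0.0 to 1.0, as judged by whoever recorded it
    verbatim: str           # the patient's actual reply, tone and all
    recorded_by: str        # who heard it

firm = ClinicalAnswer(True, 0.95, "YES!", "doctor")
shaky = ClinicalAnswer(True, 0.40, "... um... yes, I guess so", "nurse")

# A plain boolean field would store both answers identically; this one does not.
print(firm.value == shaky.value, firm.confidence == shaky.confidence)
```

The point is not this particular structure, but that a yes/no column has nowhere to put the difference between the two replies.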

Summarizing -- the data in the EHR is forced to appear certain where there is actually doubt; forced to shed all of the "epigenetic" body language that the patients (or consulting physicians) used to soften or strengthen their words; forced to fit into a predefined set of categories determined more by billing codes than by medical categories; pressured (perhaps invisibly) toward the HIGHER-VALUE diagnoses that bring more revenue or less chance of a lawsuit later on; and entered sufficiently late in the process that questions have to be repeated -- with no guarantee that a DIFFERENT answer from the patient, changed from what they told the nurse, makes it into the EHR in place of the first answer. The doctor and patient may walk away confident that X has been cleared up to be TRUE, while the nurse has already entered into the EHR that X is FALSE and may already be taking action based on X being FALSE.

( More on patient opportunities to REVIEW and CLARIFY or QUESTION the chart in a later post.)

None of this even gets into the problems when the patient, after getting back home and talking to, say, his wife, reviews what he said, and his wife says, "No, you keep getting that wrong, X is FALSE, it's EX that's TRUE," and the patient calls back and tells some clerk who answers the phone that the answer to X should be changed to FALSE, please, thank you, hang up. The whole question of how new information, coming in at the SIDE of the EHR like that, is handled is a story in itself. Who needs to be told that X has changed? Who has already taken action on it and left the building? Has this incorrect information already gone out on the wire to 2000 other sites? Can each of those be informed that the prior value is wrong, please fix it? HOW fast does that occur? What can happen during the period in which the WRONG data is thought to be true? Etc.

This gets to another terribly inconvenient fact of life. In the USA, when paper records are used and a value is changed, there are VERY strict rules about the change. It must be indicated by striking through the wrong data, leaving the old data legible. The new data must be entered, with the initials or name of the person who made the change, and the date and time. If computer systems are used instead of paper, they must do the equivalent.


No, the law SAYS that computer systems must do the equivalent, but none that I've ever seen HAS done the equivalent, because it's HARD TO IMPLEMENT. The regulatory bodies all conveniently ignore this blatant disregard of the regulations. It may be POSSIBLE to figure out which field was changed when, but if someone knows of a vendor system that makes it OBVIOUS that a field's value used to be something else ("click here to see what"), please let me know.
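For what it's worth, the paper-chart rule translates into software quite directly: never overwrite, always append. The sketch below is my own illustration of that equivalent, not a description of any vendor's system, and real audit requirements are far more detailed.

```python
# Sketch of the paper-chart rule in code: corrections never overwrite; they
# append, preserving the old value, who changed it, and when.
import datetime

class AuditedField:
    def __init__(self, name, value, author):
        self.name = name
        self.history = []          # struck-through prior values stay "legible"
        self._set(value, author)

    def _set(self, value, author):
        self.history.append((value, author, datetime.datetime.now()))

    def correct(self, new_value, author):
        self._set(new_value, author)   # old entries are never erased

    @property
    def current(self):
        return self.history[-1][0]

    def show_strikethrough(self):
        """Render the field the way a paper chart would: old values crossed out."""
        *old, (cur, who, when) = self.history
        crossed = ", ".join(f"~~{v}~~ ({a})" for v, a, _ in old)
        return f"{self.name}: {crossed} -> {cur} ({who}, {when:%Y-%m-%d %H:%M})"
```

The key property is that `correct()` only appends, so "what this field used to say, and who changed it" is always one method call away -- exactly what the paper rule demands.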

The downstream result of this flaw or shortcoming in the EHR (versus paper) is that there is some pressure on people not only to GET the data right the first time, but, well, frankly, NOT to spend a lot of time looking for errors to fix. Errors are expensive to fix. Old data is stale; focusing on new data may be more important clinically. Etc., etc. Regardless, seldom is there a cry of joy and happiness when an error is detected in some field of data. There is some social pressure and legal pressure NOT to tell everyone in the world that this field was wrong, but we caught it (now) and fixed it. It's kind of, you know, wink, nod, OK with the legal department and "risk management" (risk to the hospital of being sued) if it's not so obvious that mistakes were made.

Another implication of this climate of "confidentiality" also acting as a surrogate for "cover-up" is that it is POSSIBLE, theoretically, to rate the reliability and accuracy of each source of data and each type of field,  based on the total number of corrections that have ever been made to it,  but that sort of meta-information about datasets isn't generally funded to be collected or made public.

Among the people who have to pull data OUT of 20 legacy databases in order to complete new mandatory government reports, however, there will always be some knowledge -- rules of thumb -- about which data source is more reliable and should be used when data FOR THE SAME FIELD (such as, oh, "date of visit") has different values in different legacy sub-systems.

Few hospitals have "data architects" who design overall data architectures that work to prevent such errors occurring, and fewer have quality-control processes in place that routinely and deeply compare data daily or more frequently, on a 100% basis, between various subsystems to detect discrepancies and take appropriate action to fix them.   To a large extent, historically,  if no one complained, "it ain't broke and don't need to be fixed."

The result, whether intended or accidental, is that stewards of legacy systems often seriously over-estimate how correct their own data might be. Often they have vendor systems with no built-in way to validate data, and WISH they could, but don't have time to do it manually and don't have access to the data, or to convenient programming tools, to write their own validation and correction scripts.

I've seen systems that recorded actions taken on patients which are shown as occurring weeks or years AFTER or BEFORE the patient was actually in the hospital / clinic. The values cannot POSSIBLY be true, and the simplest Quality Control program one can imagine could pick up and flag these errors, but I've been assured that they don't have the staff to write such programs, or don't have the authority to write such programs, or don't have the time to figure out what the right values should be -- especially for data fields they themselves don't use for any clinical purpose, but which are collected for someone ELSE to use LATER in the process of care or billing.
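That "simplest Quality Control program one can imagine" really is only a few lines. A hedged sketch, with invented field names and dates, flagging any recorded action dated outside the patient's actual stay:

```python
# Minimal sanity check: flag any action dated before admission or after
# discharge. Field names and dates are made up for illustration.
from datetime import date

def impossible_actions(actions, admitted, discharged):
    """Return the actions whose dates fall outside the patient's stay."""
    return [a for a in actions if not (admitted <= a["date"] <= discharged)]

actions = [
    {"id": 1, "date": date(2010, 3, 2)},   # during the stay: fine
    {"id": 2, "date": date(2012, 3, 2)},   # two YEARS after discharge
]
flags = impossible_actions(actions, date(2010, 3, 1), date(2010, 3, 5))
print([a["id"] for a in flags])   # prints [2]
```

Nothing about this requires deep resources; the obstacle the post describes is staffing, authority, and incentives, not technology.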

Again, systems, like many managers or bosses, are often GREAT at exporting data, or telling the world what is true, but are reluctant, or totally refuse, or have no mechanism to LISTEN or to CORRECT mistaken impressions they hold. As a result, I've gone to people saying "here's a list of what's wrong" and been told "we don't have the staff to fix it." I've gone to people saying "I can write HL7 transactions to fix all these errors" and been told "Our system has HL7 INputs disabled" or "we never paid for it," etc.

The result is that bad data are collected at the data-entry screens in one place and time by one person, who doesn't really care whether the values are correct or not, and the costs don't show up until, perhaps, billing time, or until the system has to try to MERGE data between this system and other systems, and the large number of conflicting values suddenly comes to light.

It is a general truth that quality control does not "just happen."   Data quality degenerates as attention is pulled elsewhere until and unless someone takes ACTION to detect and fix it.   This is a sensible method of operation in the real world - don't spend time and energy on things no one considers important.  The problem is that some of these things ARE important -- but important to OTHER people far away in space and TIME from the data-source process.

As "times get tight" and "budgets get tight," people and departments that used to take extra time to get data correct and "clean" can consciously decide this is "not their job," that it's "not a priority anymore," and abandon such quality-control loops. There may be no indication downstream that the upstream quality-control process just terminated, because the fields are still populated with numbers -- it's just that now the numbers are garbage and totally unreliable.

It can take some time for larger-scale, system-wide processes to detect that this data source just changed from "highly reliable" to "unreliable," so the larger system may elect, based on data coming from a "trusted source," to discard conflicting (but correct) data from other sources for some period of time, perhaps a year or so, before it is figured out that the previously reliable source is no longer reliable.

Again, the central EHR design is seldom built with a "meta-layer" to the data, indicating the source of the value in each field, let alone flagging -- by color or typeface or font or something -- the fact that multiple sources of data DISAGREED as to the value of this field and were over-ridden in arriving at the displayed value. National intelligence agencies probably keep such meta-data, so they can update databases when a previously-thought-reliable source turns out to be a traitor, but private hospitals seldom track such things.
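A minimal version of such a meta-layer is not hard to sketch. The function below is purely illustrative (no real EHR schema is implied): it picks the value from the most reliable source but, crucially, keeps a record of which sources disagreed and what they said.

```python
# Sketch of the missing "meta-layer": merge a field from several sources,
# keeping provenance and a disagreement flag instead of silently discarding
# the losers. All names and reliability scores here are invented.

def merge_field(candidates, reliability):
    """candidates: {source_name: value}; reliability: {source_name: score}.
    Returns (chosen_value, winning_source, disagreement_flag, overridden)."""
    best = max(candidates, key=lambda s: reliability.get(s, 0.0))
    overridden = {s: v for s, v in candidates.items() if v != candidates[best]}
    return candidates[best], best, bool(overridden), overridden

value, source, disputed, losers = merge_field(
    {"lab_system": "2010-03-02", "billing_system": "2010-03-04"},
    {"lab_system": 0.9, "billing_system": 0.6},
)
print(value, source, disputed)   # prints 2010-03-02 lab_system True
```

A display layer could then color or flag any field whose disagreement flag is set, instead of presenting the merged value as unquestioned truth -- and the `reliability` scores could be revised when a source turns out to be untrustworthy.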

Human beings are well known for their need for "UNDO" -- often with the realization that something is wrong coming barely a second after the "ENTER" key has been hit.    It is worth looking at an EHR system and screen to determine if such expectable realizations on the part of humans are dealt with gracefully, or awkwardly, or not provided for at all.

So, paper systems allow yellow sticky notes to be affixed (or lost) carrying ancillary meta-information that real human systems use FREQUENTLY to manage data and processes. ("Sally -- when you go upstairs to relieve Nancy, tell Mary her records are available.") Like the human genome, a great deal of functional operating data "doesn't fit" into the coding scheme and has to be encoded "epigenetically" or, in the case of EHRs, by some sort of workaround process. ("DON'T PRINT THIS FORM! The printer in 37G is out of order. Phone this in instead this week!")

One thing paper charts could handle gracefully was multimedia, such as a photograph, say, for dermatology, or a sketch, by a surgeon, of work done or a work plan. Some EHRs have no provision for graphics ("in version 2.0, we don't support that yet"). This is a major decrease in the value of an EHR compared to paper for surgeons, or for anyone who is used to putting photographs in paper charts. Also, a photo is a photo is a possibly faded photo, but a computer JPEG may vary depending on what screen is used to view it. A whole new set of processes is required to deal with images.

========== back to thermodynamics ==========

I've listed a whole series of ways that an Electronic Health Record may constitute steps BACKWARDS,   away from the quality of care that existed prior to their introduction.

All of these "flaws" may be dealt with, but as I said yesterday, the path from one mountain of optimized care (the old system) to a higher mountaintop of optimized better care (the new system, once burned in)  is the valley of despair  (the new system partially instantiated but not yet fleshed out with work-arounds,  needed changes,  modified expectations about who now has to do what, etc.)

And I've asserted that only the DOWNHILL part of that journey will occur on its own, due to laws of thermodynamics that apply to all systems, including human care-giving systems.

The UPHILL part, the transition of care and transformation of disorganized fragmented action into coherent, organized action,  requires SOCIAL INPUTS of a large amount of time and energy.  The "computer system" is not going to provide those inputs.   Only PEOPLE can provide those inputs.

So, for the "computer system" to produce the benefits that the VENDOR will almost certainly claim credit for (and desire to be paid for), HUMAN BEINGS have to do the actual work -- which is different work from the work they have been doing up to this point, and may in fact be work ON TOP OF, IN ADDITION TO, their prior work. In fact, it may be on top of not only their prior work, but NEW work correcting the mistakes that partial reliance on a partially installed new system has generated.

This new work will almost certainly be somewhat uncoordinated at first, and often not even the correct thing to do, because of confusion in the ranks about who is supposed to be covering what base how.

If there is no provision for sufficient extra new space, time, hands, staff, and managers for this NEW ADDITIONAL WORK during the crossing of the valley, there will be rumbling, then anger, then outrage, then work actions, then total tissue-rejection of the impossible task of doing 3 people's work on 1 person's energy. Things will only go from bad to worse, as "the computer" cannot make the system go "uphill." The project will crash and burn at this point if this isn't prepared for and funded by NEW EXTERNAL resources. SOME kind of EXTERNAL energy source will always be required to "cross the valley" and even get to the point of breaking even -- doing as well with the new system (and the HUMAN epigenetic additions to it) as had been done with the old system (and the human epigenetic additions to it).

Then, to do BETTER with the new system than with the old one will require yet MORE human energy and caring and effort and pain to be put into the system.

The Electronic Health Record does not DO the work involved in getting socially reorganized on a higher plane of existence. The PROCESS of IMPLEMENTING the EHR is a structured OPPORTUNITY and OCCASION for people to do this additional effort, but it doesn't actually DO the work for them. People need to do far more than "buy into" the idea of a new system. They need to change all their work habits and patterns to accommodate the new tissue, the new kid on the block. They need to actually DO the work that the computer geeks and vendor will be claiming credit for and getting paid to do.

The only sustainable way for humans to do more work is for them to receive more social support than they were receiving before for doing that work. They need to be appreciated and respected more, to know they are appreciated at last, and to feel it to be sincere. This is possible. This, however, cannot be purchased and "installed" by the vendor. It is a "side effect" of the pathway used by the IMPLEMENTATION TEAM in selecting, installing, implementing, and cutting over to the new system.

If people are going to put out more, and sustain that,  they need to drink in more social approval and appreciation than they did with the old system.  They need to see that their concerns were actually heard, understood, attended to, and addressed by management and the implementation team.

There is nothing the vendor or the computer can do to get around this fact.  The success or failure of the implementation of a new EHR is in the hands of the users.  For the new system to die and fail a terrible death, the users simply have to just continue working at the level they were working at before the implementation began.

Users can be EXHORTED and URGED and PERSUADED to "put out an extra effort" for only a relatively short period of time; unless they get additional social energy in return, they will burn out.

The pathways by which people feel heard, respected, valued, and appreciated are not in the skill-set of the geek squad or the IT department. Generally, they are not in the skill-set of the vendor of the Electronic Health Record either. These human factors cannot be purchased at the store, or bought off for cash. Often on huge EHR projects the problem is NOT a shortage of cash -- it can be raining cash. The problem is that cash doesn't purchase love. Cash won't purchase honest respect.

Maybe this post has added some insight into T.S. Eliot's observations:

They constantly try to escape
From the darkness outside and within
By dreaming of systems so perfect that no one will need to be good.
But the man that is shall shadow
The man that pretends to be.

Electronic Health Record systems will not cover up all prior errors and make up for the failures of human systems to surface and cope with problems in an honest fashion.

The social transformation and changes during implementation of an EHR are not "side effects" -- they are, and should be, direct, explicit, intended, monitored, managed, and desired EFFECTS of the project, put in place by the steering committee.

These social factors cannot be managed by IT management or even by clinicians alone. Clinicians are not generally social psychologists or anthropologists. Culture needs to be changed, and that is the type of work professional anthropologists and behavior-modification specialists understand. Those people need to be at the table, and not as an afterthought.

You need to plan "what else needs to go right," not be shaking your head asking "where did we go wrong?" If human beings are involved all along the way, and if they have honest input and are heard and their social needs met, then the project will be "a success" whether the computer system itself "works" or turns out not to work.

The success factors are not "inside the box" or "inside the shrinkwrap."
