Tuesday, November 30, 2010

Two minutes - the myth of EHR's

We are assured by government and IT vendors that Electronic Health Records (EHR's) will improve patient care, if only we could overcome those who resist progress.   Balderdash.

Let's examine this myth.

First, there is an assumption that, prior to seeing you, a doctor will read the EHR to get the "big picture", making the visit more efficient.  My experience, and that of everyone else I've talked to, differs.  Another person, such as a nurse, often conducts a sort of mini-interview, capturing data to put INTO the EHR -- presumably so that the very busy doctor can be spared the effort of asking those questions, as he simply needs to read the EHR to see the answers you just gave.

What actually happens is that the doctor starts with "So why are you here today?" or some such thing,  indistinguishable from what they would ask if you hadn't just talked to the nurse and answered all those questions.

To put it very succinctly, the (EHR + doctor) hybrid unit fails the "OMG" or "Oh, my God!" test.  In situations where any good friend who hadn't seen you for a while would take one look at you and go "Oh my God, what's wrong?!!", the doctor asks "What brings you here today?"

Worse, this does not seem to change with time.  Familiarity with your "normal" state is neither captured by the EHR, even over time, nor somehow communicated from the EHR to the physician at the point of care, where you'd expect it to be.  Reliance on the EHR has replaced memory.

Hmm.  Well, how about the value of all those prior visits and what the EHR has captured about those, now that it's all electronic and legible and stuff?    (a) are those read? and (b) if read, what do they change?

In answer to (a), yes, probably: if you have one prior visit, the information from it might be read.  Far more likely, if available, a very short one-page summary of it might be read, listing allergies and previous diagnoses and major events.

Let's suppose that on a prior visit to a prostate surgery specialist, the specialist wrote that your prostate was "precancerous" and that "immediate surgery was recommended."  Further, let's say you are familiar with this surgeon, who has a reputation that "He never saw a prostate that didn't need to be removed."

So, are you as a doctor going to follow that advice and get the patient admitted and off to surgery?  More likely, you will raise an eyebrow and discreetly suggest the patient "get a second opinion."

In fact, for pretty much anything that is asserted as a "fact" in the EHR,  your opinion may be that the source of that information is biased,  inexperienced,  working off an entirely different model of health,  out of date, or otherwise not to be believed and acted upon.

So, exactly why then was it worth $100,000 to get this information in front of you?

Or, let's take a case where an older patient, seen over 100 times by a health system,  has multiple problems, sees multiple specialists, and has over 1000 medical documents in her file from these visits.

Again,  of the 12 minutes the health system allows the doctor to deal with you,   and the two minutes of that the doctor might choose to spend reading the prior EHR,    what fraction of these 1000 documents do you suppose he'll read?    The most likely answer is:   zero.   In fact, as in the rest of life, the MORE extensive the EHR is, in terms of number of documents and complexity of issues described per document, the LESS likely it is that your current physician at the current visit will elect to READ any of it.

A graph of blood pressure historical data might be glanced at.   Details about blood type might be looked at, with a note that this should be redone before giving blood, "just in case the prior value is wrong."

So let's back up a step and think about what we think should be going on.   There is a lot of information, encoded into text or structured text or forced-choice fields in the EHR.   For the most part, this information is effectively divorced from meta-information, such as the name and qualifications of the source of that data, or any qualifications they might have put on it or caveats regarding it.
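To make that complaint concrete, here is a rough sketch -- written by me, not taken from any vendor's schema -- of the minimum meta-information a single assertion would need to carry so that a reader could judge its source and its caveats.  Every name in it is invented for illustration.

```python
# A rough sketch (mine, not any vendor's schema) of what it would take to keep a
# clinical assertion glued to its meta-information.  All names are invented.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ClinicalAssertion:
    statement: str               # the "fact" itself, e.g. "prostate is precancerous"
    source_name: str             # who asserted it
    source_role: str             # credentials / specialty of that source
    asserted_on: date            # when it was asserted
    confidence: str              # "definite", "probable", or "maybe" -- not forced to Yes/No
    caveats: List[str] = field(default_factory=list)   # qualifications the source attached

# A hypothetical example, echoing the prostate-surgeon story above:
assertion = ClinicalAssertion(
    statement="prostate precancerous; immediate surgery recommended",
    source_name="Dr. X (hypothetical)",
    source_role="prostate surgery specialist",
    asserted_on=date(2009, 6, 1),
    confidence="probable",
    caveats=["surgeon known to recommend surgery aggressively"],
)
print(assertion.statement, "|", assertion.source_role, "|", assertion.confidence)
```

Nothing here is hard to build; the point is simply that most structured fields store only the first line and throw the rest away.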

Maybe the supposition is that a doctor, in the two minutes allocated, is going to "process" all that information and produce a "big picture", a mental concept that includes all of the relevant parts of what has been done before, so that, today, he can look at you and advance "the big picture" even further in understanding what might be wrong with you and the plan for addressing it.  I don't think so.  No one can read 100 documents and process them well in 2 minutes.

So,  let's say there is some process that summarizes the prior documents and sort of encapsulates a distilled "big picture" of the patient.   By definition, unless this is a wiki or otherwise heavily linked electronic document that allows "drill-down" into what lies behind assertions,     the summary is going to leave out most of the details.

However, it is precisely the small details that don't quite fit, the nuances that aren't quite right, that a doctor's thousands of hours of training can spot and use to realize that prior diagnoses are incorrect.  These details are suppressed in the summary, because it is a summary.

So, whoever writes the summary and elects which details are "relevant" and matter enough to be put into the summary, in effect, determines the conclusions of anyone reading it.  (The same problem is true of War Rooms in the Pentagon, by the way, and has been studied there.)  By the time some low-level person, who has the hours, has used THEIR judgment to filter out all the "irrelevant details" in the report to his superior officer, there isn't really any room left for the superior to question.

We have, in effect, by using the hierarchical summary feature of the electronic record, removed the doctor from the loop.  An army of low-level staff members have, in summarizing, removed the need for a highly-trained doctor at the top to read the summary, because there are no details left for the highly-trained person to respond to differently than a low-level person would.

So, let's summarize our own thinking so far.  The EHR is alternately too thin to be of value, because it is missing too much, or too thick to be of value, because no one can possibly read it in the 2 minutes allowed.  The solution to this problem with text-based concepts is to have low-level people (the only ones with time to do so) write a summary of the case, which could now be read in the two minutes allowed.  In almost no EHR system are the details of the summary cross-linked with hot web links back to the source of the data, in case the source is changed, deleted, addended, or questioned.  Doctors may be given an opportunity to challenge the summary, but to do so would require going back and doing the summary themselves, which, by assumption, they don't have time to do.

So, the clinical picture that can EVER be embraced by an EHR is actually quite simplistic, and has to fit in a short series of listed bullet-point items.  The nuanced, net, effective clinical picture of your complex medical condition is limited to what fits on a PowerPoint slide, effectively a cartoon that discards all traces of uncertainty or ambiguity or conflicting readings that might open the door to a realization that your primary diagnosis is incorrect.  Alternative framings of your condition, alternative diagnoses that might be relevant, are forcibly discarded since there is "no field for them on the form."  Clinical impressions of "maybe" are forced into one of "Yes" or "No" to simplify billing or to satisfy the mental model of some low-level non-clinical programmer somewhere who was trying his best to "validate data."

Furthermore, the clinical picture stored in the EHR does NOT have the property that it improves with time, or with use.  No facility is included to allow a doctor to highlight relevant sections of a document to save themselves time the next time they come back to this patient.  No facility is provided to let them select a section of a document and "drag and drop" it into a summary document, pulling along with it all the cross-references to the work cited.  No facility is included for them to attach a yellow sticky note to self challenging some fact in the existing record.
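None of those facilities would be exotic to build, either.  Here is a minimal sketch -- again mine, purely hypothetical, not a feature of any shipping EHR -- of the kind of annotation record that would support a highlight, a note to self, or a challenge, while keeping the link back to the source document it refers to.

```python
# A minimal, hypothetical annotation record -- not a real EHR feature.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Annotation:
    document_id: str             # which source document this refers to (made-up id)
    span: tuple                  # (start, end) character offsets being marked
    author: str                  # who made the annotation
    kind: str                    # "highlight", "note-to-self", or "challenge"
    note: Optional[str] = None   # free-text content of the note, if any

sticky = Annotation(
    document_id="visit-2010-03-14-cardiology",   # hypothetical identifier
    span=(412, 470),
    author="Dr. Jones (hypothetical)",
    kind="challenge",
    note="BP here conflicts with the home log; re-check before adjusting meds",
)
print(sticky.kind, "on", sticky.document_id, "->", sticky.note)
```

Because such a note points back at the source span, it would accumulate over time exactly the kind of familiarity with the patient that the record currently throws away.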

In fact,  there is nothing in EHR systems that would look at text descriptions and recognize and flag that completely inconsistent conclusions are drawn in different places in one document or across documents.

I would be most astounded if any hospital had a section of the summary which revealed, let alone highlighted for attention, such conflicts.  Picture reading "Well, Doctor Smith thinks X is true, but Doctor Y thinks Smith is an idiot and X is clearly false.  A third doctor ignores them both and assumes Z is true."  It may be a true summary of reality, but it is very, very unlikely ever to be clearly articulated in an EHR for lawyers to find.  So, instead, it will be covered up and buried.  ALL such conflicts will be suppressed, and even their existence whited out of the summary report, as if everyone happily and confidently agreed with each other.  The EHR facilitates this, because it has no room for "conflicting opinions" in the structured field, which has to be either "YES" or "NO".

This is the conflict within a hospital.  Imagine what will occur when different physicians at different practices or hospitals have to contemplate and respond to conflicting opinions from the competing practice or hospital, in order to come up with the "master, nation-wide health summary for this patient."  Imagine the heyday attorneys will have if the differences and discrepancies are revealed and highlighted.  Imagine the fraud and damage to clinical truth that will occur if the differences and discrepancies are shoved under the rug and made to "go away".

So, I challenge the designers of these regional EHR summary databases.  What IS your plan when you run into conflicting and incompatible diagnoses by different doctors for the same patient?    As you surely will, and very quickly indeed.

Are you going to highlight them, so it's clear that none of them can be considered definitive?  Are you going to code them "under dispute"?  Do you even have the capacity to store such a code?  Are you going to use your own judgment or your own people to override one, or the other, or both doctors?  Are you going to refuse to show anything until the two doctors reach a consensus opinion?  Who is going to pay for the costs of resolving such discrepancies in "the master patient chart"?  Who is even capable of resolving such disputes?
Are you going to pretend that such situations don't exist, or only exist "very rarely," in the hope that funding will not be held up on such a little thing?  Nail it down, people.  Do you admit that these problems will occur (and therefore open yourself up to questions about how you intend to deal with them)?  Or do you deny that these problems will occur (and therefore open yourself up to a delay in funding until you say how you will deal with them)?
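For what it's worth, here is one possible answer, sketched by me rather than drawn from any regional database's actual design: when two sources disagree, keep both attributed assertions and mark the condition "disputed" instead of silently picking a winner.  The function and field names are hypothetical.

```python
# A hypothetical merge step for a regional summary database: preserve every
# attributed opinion and flag disagreements explicitly.
def merge_diagnoses(records):
    """records: list of dicts like {"source": ..., "condition": ..., "diagnosis": ...}"""
    by_condition = {}
    for r in records:
        by_condition.setdefault(r["condition"], []).append(r)
    summary = []
    for condition, claims in by_condition.items():
        opinions = {c["diagnosis"] for c in claims}
        summary.append({
            "condition": condition,
            "status": "consensus" if len(opinions) == 1 else "disputed",
            "claims": claims,        # every opinion kept, with its source attached
        })
    return summary

# Hypothetical example: two doctors, same patient, incompatible conclusions.
merged = merge_diagnoses([
    {"source": "Dr. Smith", "condition": "prostate", "diagnosis": "precancerous"},
    {"source": "Dr. Lee",   "condition": "prostate", "diagnosis": "benign"},
])
print(merged[0]["status"])   # -> "disputed", visible to whoever reads the chart next
```

The code is trivial; the hard part is that someone has to decide, in advance, that "disputed" is an allowed value and that it will be shown rather than hidden.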

In point of fact, this "unclean data" problem will present a problem not just to regional health data warehouses.  It will document, clearly, for all to see, just how BAD clinical records actually are.  It will document, for all attorneys to discover, just how much disagreement there is among professionals.  It will document, for patients, that their unqualified trust in any given doctor should be tempered with the evidence.


And it will document for all that there has been, to this date, a conspiracy of silence about this problem. Did no one know about this?   When exactly were you planning on mentioning it? Only AFTER we'd spent $100,000,000 getting to that point?

Each time a doctor opens up even his own records about his own patients, he is faced with documents he's not allowed to mark on, cross-link, color-code, put post-it notes on, etc.  If he attempts to go into length about complex conditions, he is punished by failing to meet his scheduled case load, as well as by calls from the transcription department about having documents that are "too long" and cost way more than other doctors' documents to transcribe and summarize.

The text stream called an EHR, therefore, may be good at persisting pixels, or facts such as a blood pressure reading, but as the complexity of the concept or medical condition gets higher, the EHR is unable to follow along and store, in any fashion retrievable under the 2-minute rule, the "big picture" and all the nuances that picture should convey.

What will get passed on to the next shift, or the next doctor, or the next visit, is at best a cartoon summary of things to date, prepared by a non-physician with all trace of nuance and uncertainty removed.  One hospital I visited told surgeons they couldn't store the normal pictures with circles and arrows they used to plan a surgery or summarize what happened, as the computer system wasn't sophisticated enough to do what the paper chart system did, i.e., allow pictures to be attached to the patient chart.  Again, what is stored is getting dumbed down and reduced to what is easy to fit in a computer form.

Which perhaps explains why the doctor doesn't bother to read it, or, often, even to read the notes his nurse made at the start of the visit.  He may ask the same questions again, not because he is interested in the "answer" (as seen by the EHR) but because he is interested in the nuances, the body language, the uncertainty or certainty that surrounds those answers.  He cares about the meta-data, because a large part of clinical judgment is based on intuition and reading the meta-data.  Sadly, the EHR has no room for such metadata.  A transcribed document codes an emphatic "YES!!!" the same as a neutral "yes" the same as a hesitant "um... yes, I suppose, sort of..."  To humans, these are very different answers.  If my girl asks me if I love her and I say "YES!" versus (pause) (ponder) (delay) (fidget) "... yes?" I am conveying very different (and actionable) information.

The EHR throws out all this meta-data.  If you're going to do that, you might as well just have clerks sit and follow a flowchart, or let the computer apply a set of rules that guides the "next question" for each issue down some tree until it comes up with "the diagnosis" or "the proper recommendation" -- at the expense of throwing out every point at which a trained doctor would say "wait, that's the wrong question. That's not quite right."

What it doesn't explain is why the country is so gung-ho on spending billions of dollars to install Electronic Health Record systems in every nook and cranny of the so-called "health care system", especially for Medicare.  Children may have relatively simple things wrong with them that "fit" in the EHR.  A broken arm.  65-year-olds probably have at least 3 chronic conditions and are taking over ten different prescription medicines for a variety of interlocked and inter-related system problems.

Apparently some programmers, managers, and insurance companies think that this can all be neatly and correctly summarized in a few Diagnosis Related Group codes (DRG's) and everything is fine.

In the real world, it's hard to even imagine how such a system could possibly deal with the complexity of even one older patient over multiple visits.

In reality, of course, every actor in health care is multiplexing and distracted.  Doctors, nurses, labs are acting like short-order cooks, starting on one patient, taking one step, leaving them to go deal with some other patient or crisis, trying to remember where they were, reprioritizing, re-triaging, going back to the first patient for a minute, etc.  None of that interrupted action-coordination is contemplated by the programmers, who designed systems as if the doctor or nurse, with all the time in the world, sat down and did everything for one patient before even beginning to think about the next one.

A short order cook who did orders one at a time in serial order would be fired by the end of the first day. You just can't operate that way, you have to overlap, predict what's coming, allow for lag times, etc.

On this account EHR's are equally out of touch with reality.   The EHR expects you to sit and do everything for one patient at one time, so it can do "validation" and help "support your decision."

There is no way that serial text capture and summarization can possibly do that job, in a real environment, with real medical conditions.

The IT people don't need to force clinicians to "get with the program" and "stop resisting computerization." They need to go back to the drawing board with a better sense of how badly they have conceptualized and modeled what goes on in a hospital, and design a system that supports real people doing real work with patients with truly complex clinical conditions, in the fragmented, interrupted, and multiplexing distracted mode that clinicians are forced to accept as terms of employment.

There are other issues as well, that I won't go into in this post.  One of the biggest is the inappropriate persistence or stickiness of a diagnosis.  Once one doctor, whoever goes first, states an opinion and a diagnosis, regardless of how tentative, there is legal, professional-courtesy, and psychological pressure on the next doctor to agree with it, or by silence not challenge it, even if they believe in their hearts that the diagnosis is pretty suspect.  The third doctor to see the record will have an even harder time going against the flow and disagreeing with the first two doctors.  From then on, very few doctors would challenge the "consensus opinion" about the first diagnosis.  The diagnosis has been electronically locked in stone by the EHR process.  If the second doctor had gone first, a different diagnosis would have been locked in stone.

You have to worry about any process where the order in which people see data changes the outcome.  The order drugs are listed in a pull-down menu, for example, has a strong impact on which drug a doctor using an EHR will select.  By itself, that should be a big "WHOA." until THAT gets sorted out.

Monday, November 29, 2010

Is text better than video for communicating messages?

There was a question on the NMC forum about video versus text. Here are my thoughts. I reduced them to a few lines of text (!) as my response there.

======
 (The image is from Cognitive Behavior Therapy Self Help Resources at http://www.getselfhelp.co.uk/interpersonal1.htm )


Summary of points :
1)   many people graduating from college today have never read an entire book. They simply don't know how to process large blocks of complex-logic well-written  structured running-text, either to read it or to write it.    Beyond a threshold length, which is surprisingly short,  they won't even try.  They were educated with the largest concept restricted to the size of a Powerpoint slide.

2)  People do not have the leisure of time,  let alone non-multitasking time, let alone an uninterrupted stretch of time.

3)  With semi-literate international workforces,  you may be limited to a vocabulary of 2,500 words or less, and some of those may be misunderstood.    People tend not to raise their hand to point out that they have no idea what a given word means.

4)  With interrupted attention, it might be safe to figure that at least 10% of what you say is missed entirely, and the gap filled in with what the listener thinks probably should have gone there.  These gaps and fill-ins are silent on both sides, but can surface later when they try to reconstruct what it is you said, in context.  Text is far more vulnerable than images to damage when a portion is missing or wrong.  If you miss one step in a set of directions to my house, the directions are useless.  It's hard to put an X on a map, on the other hand, in a location that doesn't even exist.  It's easy to make that mistake in words.

5)  The retention rate for text, alone, after 48 hours is probably close to zero.   If the words tell a human story they can relate to, retention may be much higher.      If you make a video that tells a human story,  retention is higher still, even years later.

6)   Video is far more likely to convey tacit knowledge than text.   Watching a group of people contemplating a new idea, and then accepting it, with all the associated body language and non-verbal signals, is a much more powerful experience than reading about a new idea.


=====


detailed discussion:


Interesting discussion topic.  (taking another munch on my cookie and  a sip of coffee...)

I'd like to push the envelope back further, or perhaps fall entirely "out of the box", and share some thoughts I've had over the past 15 years since my master's degree program in computer science, specifically distributed artificial intelligence and a focus on problems with collaborating "agents" attempting to make sense of a scene.  For the record, I have a US Patent as well in the area of image processing and data-fusion, so I've thought a great deal about this sort of thing.  These are not new thoughts, nor "off the cuff" ideas, but substantial core issues.

These thoughts are quite relevant to the question of text-based communication in business, if you can bear with me.  Of course, the length of this post is part and parcel of the whole problem.  Nobody can stick around long enough for anyone to actually get into deep thinking about an issue.  We aren't all down to a maximum attention span of the 140 characters of a Twitter message, but we are definitely heading in that direction.

One on-going battle in computer science is the question that might be phrased in English as: "What fraction of important information can be expressed in words?"  Alan Turing did some key work on the irreducible core nature of computation, in the abstract, in general, back in the 1930s, before going on to crack the Germans' ciphers for the British during World War II.  He dealt with questions of what kind of thing was "computable" at all, given infinite time, and what kind of thing was simply not computable.

This is beyond Whorf's question about whether what we think, or CAN think, is limited and shaped by the language we are using.  Can you think things in a largely parallel language, like Chinese, say, that you cannot think, let alone articulate, in a serial language like English?  Another interesting question.  Turing's question was, heck with WHICH language, are there things that are important that cannot be expressed in ANY language?

Again, this goes beyond, but is related to, the faddish myth that crippled Western Science for the last 100 years or so -- the worship of mathematics that said if you couldn't put it in an equation, it wasn't real.  I think, finally, we are getting more and more cases of important things that "count, but cannot be counted."  That, in my mind, is good.

Anyway, cutting through all that, Turing focused on what sort of problems can be expressed as strings of symbols, and then solved by manipulating those symbols by ANY "computer" of any kind, shape, color, architecture, man-machine hybrids included.  I suppose that is a super-set of what kinds of problems can be articulated in words and then solved by "thinking about them" in a logical fashion.

Turing's work, however, and his model of a completely general computer, involved the concept of an infinite "tape" of ones and zeros, moving past a read-and-write head, for unlimited time, as the machine "worked on" the problem.

My objection to that work,  and the whole school of Computer Science which evolved from Simon and Newell's ground-breaking work at Carnegie Mellon in, oh, the 1960's I guess,   was that, at the time, the idea of "image processing" had not yet taken shape.    I was part of a new wave of "Young Turks" who argued that,  had Simon and Newell owned an image processing engine, they would have based all their work on it instead, and discarded the linear-symbol-string model of "everything important".

The crux of the matter is this:  in the real world,  the one human beings and societies operate within,  there is not an unlimited amount of time to work on a problem.  We do not have unlimited budgets in either space or time.    In fact, the total time the average serious researcher has to work on "a problem" is probably under a decade or two in total,  which has to be interrupted by activities of daily living,   so maybe, say,   5 years of actual thinking time is a rough upper limit.  That's the upper limit of the upper limit.

For normal mortals, dealing with social issues around us, maybe 200 hours is closer to an upper limit, and 40 hours (a work week) is more than most real-world problems get allocated.  The sad truth is that most people graduating from college today have never actually read an entire book.  Ever.  I kid you not.  They are not given Mortimer Adler's "How to Read a Book."  They are not trained in how to tease apart a complex argument with detailed sub-branches of logic from its expression in linear speech in "a book."  They are not trained in how to take a complex thought and articulate it in said format.

Some might question (and do) whether they are even capable of entertaining a complex thought, regardless of input and output considerations.

So, a more relevant question than Turing's grand question about what is ultimately expressible and computable in symbols (or words) is this:  What kind of stuff can human beings process in 4 hours, clock time, start to finish?

Sadly, the higher in the "chain-of-command" a human being is,  the less time they have to address any specific problem.  This, again, is a crucial piece of information.      While it is conceivable  that you could get a freshman to spend 40 hours on a particular problem,    it is not conceivable that you could get the University President to spend 40 hours on a particular problem, let alone the President of the USA, or the CEO of any large corporation, or the head of any military war effort.

I'll assert that without proof, but I think your experience probably supports it too, as a good first approximation to life in 2010.

Try, for example, to imagine an MD spending 40 hours on "your case".   The idea is absurd.  Nobody gets 40 hours.   Maybe, at the outside,  for a really complex, challenging, and compelling case,   if you had a really good health care system,  you could get 4 hours.    One hour is more likely.  Let's say the doctor really cares about you, the health system will accept the time spent this way, and you are able to get one hour of a doctor's thinking attentional time,  to the extent that every other important case in their mind is not "taking up RAM" or "background cycles."

Even that hour is unlikely to be "an hour".  It is more likely that you will get 12 minutes here, 5 minutes there, 8 minutes somewhere else, etc., that could "add up" mathematically to "60 minutes".  Whether it adds up in terms of "effective equivalent of undivided attention time" is a different (but important) question.

Still,  the reality today is that "attentional time" from pretty much anyone is fragmented,  and, for the most part, plagued by hundreds of other competing problems that lurk just below consciousness and suck up energy keeping them at bay.  Surgeons may learn how to totally "be where they are" (thank you Buddha),  but most of us,  given the slightest excuse or pause during "a meeting",   find ourselves immediately pulled away to some other problem in the hopper.

I used to do stage magic.   It's a fact known to magicians that adults in an audience spend, maybe, at most,  1 second out of every 20 actually present and looking at what you are doing.    They sample and "fill in the gaps", while their head is actually busy working on something else.    They see you lift the scissors towards a rope,  go off somewhere in thought,  then see two ends of rope drop as the scissors move away, and they will swear afterwards that they "saw" you "cut the rope".     Our heads are great at "filling in the gaps" so even we are unaware of the fact that we do this.  Kids, by the way, are terrible audiences to work with, because they tend to actually be present and watching,  not zombies  like their parents.

Back again to our question.  How can you communicate with people who have at most one hour of divided attention time to give you, and who do not have a large number of complex mental structures you can simply tap to resonate with?  There are no shared classics, only shared TV shows and songs and movies, none of which have great depth or complexity.

Those are the strings of the instrument of their mind that you must work with in order to play your song in their head.

So, there are some choices here.    You can go with blocks of text.   You can try equations.   You can try graphics like charts and diagrams.  You can go with "PowerPoint" slides.  You can go with short YouTube videos.  You can drag them into virtual reality and give them an immersive experience in a different and possibly far more colorful, interactive, and exciting world.

For explicit knowledge,  words might work,  but again,  back to Turing's work and reality,  it is not enough to play your song in their head, hard as that is.   You must play it in such a way that,  48 hours later,  there is a residual change in their head from the way they would have been had you never played your song.

Otherwise, the whole enterprise is pointless, at least in terms of education or of "getting somewhere" in an extended social discussion about, say, social policy issues or anything more complex than "what channel should we watch?"

Summarizing so far: Let's face reality here:
    *  You have very finite time -- maybe an hour of clock-time.  Less if the person matters, in the hierarchy of power.
    *  You have an audience or conversation partner or business associate who has many OTHER pressing problems,
         and whose attention you only partly have.
    *  You have problems that do not easily lend themselves to being described in words.   If you attempt to be accurate,  the number of words grows explosively, because you lack a common shared shorthand with the audience, and must try to define all terms and their nuances.    If you abandon accuracy, at the risk of being called on this "error" later,  you may be able to "boil down" a complex problem into a sufficiently over-simplified cartoon that fits on slides in a one-hour presentation.
    *  You have to assume that the audience is distracted, and is going to miss some of your key points but fill them in with their own thinking about what you must have meant or said in there, thereby distorting your message silently. 
    *   If you have an international audience or workforce, you may need to limit yourself to a vocabulary of the 2,500 most common words in English. 
    *   You STILL have the problem that there are things we have no words for (but might someday), as well as things that there simply cannot possibly ever be words for, that won't fit through this keyhole you're trying to talk through.

A brief plug for image processing.  Images have two dimensions, and text typically has only one dimension to it.  The implications are profound in terms of noise-correction and robustness.  Images are infinitely better.

If I give you "directions to the party",   and it turns out one of the roads is blocked, or I get one of the instructions wrong or ambiguous,   the whole set of directions becomes useless.   If I give you a map of the area and mark on it the location of the party, and if you know how to read maps (possibly a big if),  then you are in a mujch stronger position.  From that image you can derive linear word instructions ("turn left here"),  but you can ALSO derive other instructions if the first set fails. It is also very hard to put an "X" on a map in a location that is not on the map,  but it is trivial to write instructions that direct you to an impossible location.

This is a profound issue.     Serial strings are inherently vulnerable to "point errors".  Attempting to correct such errors in advance results in a word-explosion so that now you have a string of words which is too long to be processed in the finite time available.

I can take a photograph of  George Washington and randomly change 30% of the bits (pixels) from whatever they are to pure black, or pure white, or a mix of the two,  and you can still recognize that it is a picture of George Washington. The "image" is robust against that kind of point noise.      If I take a set of equations and randomly change 30% of the symbols,  all I have left is garbage.   In fact, if I get ONE symbol wrong, it may be garbage, or, worse,  silently wrong in such a way that it still looks correct. If I take a book or text and randomly change 30% of the words to something else,  it is very unlikely that the intended meaning will shine through on the far end.
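If you want to see this for yourself, here is a toy demonstration -- my own, using a crude synthetic block letter rather than a real photograph of George Washington: flip 30% of the pixels in a two-dimensional picture and your eye still sees the shape; garble 30% of the characters in a one-dimensional set of directions and nothing can put them back.

```python
# A toy demonstration (mine, not from the post) of why 2-D images tolerate random
# point noise better than 1-D symbol strings.  A 25x25 block letter "T" stands in
# for the photograph.
import numpy as np

rng = np.random.default_rng(42)

img = np.zeros((25, 25), dtype=int)
img[2:6, 2:23] = 1        # top bar of the T
img[6:23, 10:15] = 1      # vertical stem of the T

# Flip 30% of the pixels at random.
noisy = img.copy()
flip = rng.random(img.shape) < 0.30
noisy[flip] = 1 - noisy[flip]

def show(a, title):
    print(title)
    for row in a:
        print("".join("#" if v else "." for v in row))
    print()

show(img, "original:")
show(noisy, "30% of pixels flipped (the T is still recognizable):")

# Now corrupt 30% of the characters in a one-dimensional set of directions.
text = "turn left at the second light then right onto mill road number fourteen"
garbled = "".join(c if rng.random() >= 0.30 else chr(rng.integers(97, 123))
                  for c in text)
print("30% of characters garbled:", garbled)
# A human can still "see" the T through the noise, because every pixel has
# redundant 2-D neighbors.  The garbled directions have no such redundancy;
# the reader has to guess, which is exactly the silent fill-in described above.
```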

FACT:  The multiplexed, distracted audience that your communication is intended for WILL miss or mis-interpret a significant fraction of what you "sent".

So there are four questions. 
(1)  How much can you send, in terms of volume and complex structure in a very short window?
(2)  How much of that will be received correctly through a noisy channel?
(3)  What will the residual of that message look like after 48 hours?
(4)  Can you convey, consciously and intentionally, tacit knowledge through this medium?

If you can use video that tells a human story,  you will be far better off on all counts than if you try this using text only.

=====



Wade

Sunday, November 28, 2010

Or, then again, maybe NOT nurses in the lead...

(continuing my last post)  The first problem in any problem  should be the "problem problem" -- or, are we asking the right question?    While nurses may be better positioned than doctors to reduce costs or increase safety of health care starting right here,  that still doesn't mean they know anything about leadership.

And, given the AMA's response and leadership within their own ranks,   it is likely that physicians will in fact stubbornly and strongly resist nursing leadership on what doctors think is their own turf.

So let's fall back and ask a different question.   It seems to require both physicians and nurses to create institutional health care.  Nurses have broader but shallower knowledge.  Doctors have narrower but much deeper knowledge.  Both groups have a strong stake and therefore a strong bias.  So, which one should lead?

How about "neither one!"  Both groups currently have substantial issues in trusting the other side's relevant competency and intentions.    And,  "leadership" should be about facilitating a solution, not dictating a solution.     So,  what we have here is a need for impartial third-party mediation and facilitation.

Even if doctors are experts at doctoring, and nurses are experts at nursing,  neither group is expert at group process facilitation,   mediation,   reconciliation,  etc.    There are people, however, who ARE in fact expert in those areas,  who know little about health care.

This seems eerily reminiscent of the late 1960's and 1970's when "only doctors could be hospital administrators".     After much emotional debate,  doctors finally let go and admitted that running a hospital where medicine was practiced was not,  in fact,  medicine, but was administration.

I'll put on the table the assertion that running a good meeting and achieving some kind of sufficient consensus to move forward is not "medicine" or "surgery" either.  Nor, these days, does it appear to be an activity that professional politicians engage in.  The art of mediation, facilitation, and leading a group of warring factions into a successful mutually-advantageous solution takes, I would suggest, as much skill as nursing or medicine or surgery.  It certainly takes a great deal of specialized knowledge of a type not currently taught to doctors, or nurses, or MBA's, or hospital administrators.  It also takes diplomacy and the ability to be taken seriously by all sides as being an honest, impartial broker.

Even if nurses or doctors have a lot of experience with such roles, there is little chance they will be taken as "impartial" by the other stakeholders.  And, for that matter, viewed from outside the system, who is to say doctors and nurses won't work out a sweetheart deal where they all get richer but patient care doesn't improve in quality, improve in access, or decrease in cost?

There are other major stakeholders here with "skin in the game."  The Leapfrog Group and business interests have a strong need to reduce net costs to business.  I'm not suggesting it, but a single goal like that could be achieved by letting all sick or injured employees die as rapidly as possible before they consume expensive services.  One cigarette company actually presented the case to one country that, with cigarettes widely used, total costs would go down because everyone would die before they exhausted the retirement system funds.  In point of fact, business has a legitimate need, but no one would trust THEM to lead the discussion either, given a number of high-visibility cases where business seemed to be out only for business, at the clear expense of human beings.

Information Technology companies and departments are scrambling to volunteer to help reduce costs, although initially it appears the effort will substantially increase costs and payrolls.  Few people want to trust them to be impartial, unbiased leaders who focus primarily on keeping people healthy.

The group from public health has the longest standing concern on record for large-scale population health,   but some consider public health to be too academic and out of touch with reality,   some think public health is biased in favor of the poor,  and some think public health considers all corporations and business (and jobs) to be the enemy,  and wouldn't trust THEM to lead the effort.

So it looks like we may need to "grow our own" cadre of meeting facilitators and reconcilers and diplomats for this need.

I'd assert one thing that's just crucial.   There will NOT be a stable solution with high flexibility,  low costs, and high quality care until and unless doctors and nurses and IT and business and public health people all get on the same page and actually reconcile their differences and establish earned and justified trust in each other's intentions and expertise within their own areas of practice.

Any group  that could get a local concentration of power and jam "their solution" down the throat of the other parties might win in the very short term, but would suffer in the long term.

So, above all, the purpose of interactions between these groups has to focus heavily on healing the rift between them.    Solutions to procedures and practices will FOLLOW healing and become feasible only in a climate of earned trust.

So, yes, we need healers at the helm.  Not healers of individual humans in the role of patients, but healers on a larger scale between organizations and very different and hostile cultures.

I'll take that as a working hypothesis, that what we need is organizational-scale healers,  and use subsequent posts to examine whether the idea of "raising our own" is feasible,  or whether we can find such people somewhere on the planet,  or whether we have exemplars we could follow but simply no cash behind the effort, say in programs in "mediation" and "conflict resolution",  or what.

It is certainly clear that if we had some extra expertise in that area, the various local, regional, state, and Federal governments could sure use some of it as well.   The skills and processes and frameworks required to "bring people together in hard times" seem to be totally lacking in Washington and Sacramento and Albany these days,  let alone in the mid-East or India-vs-Pakistan, or India-vs-China, or the Koreas,  or in religious battles between Jews, Christians, Atheists, Moslems, Hindus, etc.

Clearly this would take a lot of education and cost a lot of money.  So, frankly, would computerizing the entire Electronic Health Record of the country, and it's not clear that EHR's by themselves would actually fix much, net, all things considered, including disruptions due to implementation and due to getting doctors and nurses and IT and insurance companies and hospital administrators back into power struggles over who does the work, who takes the pain, and who gets the benefits.

So far, the track record of installing EHR's in a landscape with low trust and poor working relationships between groups has not been promising at all.   Most don't succeed, and of those that are deployed, it's not at all clear that, if given the choice,  the people there would elect to do it all over again,  looking back on the true costs to get there and the true benefits delivered.

Whereas it is very clear that improving the working relationships between these groups would tremendously lower the burdens in considering rearranging processes and adjusting work flow, etc., in a way that no imposed solution could ever possibly accomplish.

There IS one way to lower the high costs of hospital-based services right now, and it is becoming increasingly popular.  It's called "medical tourism" and it means that people, and now insurance companies, have run the numbers, tried it, and found out that it's possible to fly to some foreign place like Delhi, in India, get extremely good care for some operation, get resort-quality accommodations with servants, and fly back home, all for a tenth of what it costs to get the exact same operation done in the US system.  Example: knee replacement surgery in Detroit: $44,000.  Exact same operation in a hospital accredited by the same Joint Commission in India?  $4,000.  That's in a hospital where everyone speaks English and the nurse-to-patient ratios are much higher than they are in the USA.

So, here's the deal.  If the US does not get its act together soon,   the trickle abroad will change to a flood, and that means that hospital after hospital in the USA will go out of business, and EVERYONE will be out of jobs -- nurses, doctors, and administrators.     But,  just as we seem to love foreign cars these days, we may all love foreign health care for the same reasons -- low cost and high quality.

Sadly, about then, if the dollar keeps falling,  no one will be able to go abroad except the very wealthy.    And the final state of man will be worse than the first.

As Microsoft Chairman Bill Gates said in the Wall Street Journal yesterday,  the things we really need to worry about are "pandemics and bioterrorism".    So far we're not addressing either one.

I'd submit that leaving the "health care system" unrepaired in the US will destroy what's left of the system here, then price us out of care abroad, and in the resultant third-world disease-ridden consequent society,  what's left of business will go abroad as well.

On the other hand, solving the problem of generating a Service Corps of qualified facilitators, mediators, and conflict-resolvers could fix far more than just health care issues -- they might turn to fixing our businesses and economy and social divisions and hostilities with other countries and cultures as well.  It may be time to start a new branch of our UNARMED Services.

The future of Nursing - and the AMA's response

Doctors, individually, are nice people; doctors, collectively,  are hostile, stubborn, and dense. I'm disparaging the American Medical Association here.

There doesn't seem to be much love lost between hospital-based doctors, collectively, and the doctors and others in public health.  "Public Health" by the way is concerned with the health of the public, not with insurance for poor people.  The increase in life-expectancy in the US over the last 100 years was primarily due to public health measures, such as sanitation, regulation of the food supply,  and provision of clean water.   It had little to do with the vast hospital system (formed after World War II) and less to do with the use of antibiotics (which also occurred after 1945).

Despite those facts, if you read publicity blurbs from hospitals or the AMA, you'd think that hospital-based high-tech devices, surgery, and drugs were responsible -- they certainly take credit for the nation's health, where it has any.

 The fact that "prevention" of illness is far more cost-effective than "treatment",   heroic "life-saving" measures in hospitals gets all the great press, and efforts to keep the water supply germ-free are seldom reported at all.    A regular oil change is far cheaper than a new engine for your car, but not nearly so sexy.  This is the massive paradox of public health, and for that matter, all preventive maintenance services -- if you do your job perfectly,   nothing ever breaks, no one ever gets sick,  and the policy wonks decide they don't need your department any more and you get fired.

Still,   as the costs of "health care" skyrocket and bankrupt not just individuals, but, increasingly, small and large business and the economy as a whole,   these basic facts become more important and are grudgingly recognized.  

The recognition doesn't mean, of course, that anyone would ever say that those pushing public health for the last 50 years were "right" --- it's as if these concepts were just discovered last week, by hospital-based doctors and the AMA.  It's like the way Apple popularized the use of visual interfaces, dumped on repeatedly by the rest of the industry as stupid and childish and irrelevant, until suddenly Microsoft came out with something called "Windows" and ... gosh ... invented the visual interface, I guess.  I didn't hear a lot of "I guess Apple was right all along after all!"

So,   we find the new President of the American Medical Association is a doctor with a long history of interest in Public Health.   So far so good.   But when the Institute of Medicine came out a month ago with a report on the Future of Nursing,  with nurses having a much more visible and prominent role in caring for Americans,   suddenly the AMA went knee-jerk blind again and reverted to attacks.

Now, to paraphrase, the IOM report said explicitly that the future of health care was going to require much more focus on prevention and the life-long care of people with chronic diseases outside of the "clinical" setting -- that is, at home, at work,  in the lunch line as a diabetic selects what to eat,  at the gym as a teenager decides to exercise,  etc.    This care also should, according to the report, deal with palliative care when "curing" or "healing" was not possible.  It should deal with all of the other facets of human life that affect health,   such as nutrition,  social work,  family interactions,  child abuse, workplace abuse,  etc.  It should deal with everything that happens "BETWEEN office visits".

With this comprehensive view of patient health inside and outside "the health care system", and far more focus on prevention and home visits and community health than on hospital-based acute care by doctors, the IOM recommended that nurses, with suitable additional training, partner with other clinicians and lead the teams utilizing experts from all areas to design improvements to the structure and process of American health care.

Furthermore, the report noted that nurses [note: not MD's] are the professionals who spend by far the most minutes actually talking to the patients, working in the areas that have been targeted so far for process improvement by the federal government, specifically delivering medications, avoiding infections due to all the inserts and tubing around patients, etc.

The AMA response was rapid, knee-jerk, and totally missed the point.  It's hard to tell that they even READ the executive summary of what the IOM had to say.

Here's what the AMA website put as a response (highlighting added by me)

===========

AMA Responds to IOM Report on Future of Nursing

Physician-led team approach to care helps ensure high quality patient care and value for health care spending
For immediate release:
Oct. 5, 2010

Statement attributable to:
Rebecca J. Patchin, MD
Board Member, American Medical Association

“With a shortage of both physicians and nurses and millions more insured Americans, health care professionals will need to continue working together to meet the surge in demand for health care. A physician-led team approach to care—with each member of the team playing the role they are educated and trained to play—helps ensure patients get high quality care and value for their health care spending.
“Nurses are critical to the health care team, but there is no substitute for education and training. Physicians have seven or more years of postgraduate education and more than 10,000 hours of clinical experience, most nurse practitioners have just two-to-three years of postgraduate education and less clinical experience than is obtained in the first year of a three year medical residency. These additional years of physician education and training are vital to optimal patient care, especially in the event of a complication or medical emergency, and patients agree. A new study shows that 80 percent of patients expect to see a physician when they come to the emergency department, with more than half of those surveyed willing to wait two more hours to be cared for by a physician.
“The AMA is committed to expanding the health care workforce so patients have access to the care they need when they need it. With a shortage of both nurses and physicians, increasing the responsibility of nurses is not the answer to the physician shortage.
Research shows that in states where nurses can practice independently, physicians and nurses continue to work in the same urban areas, so increasing the independent practice of nurses has not helped solve shortage issues in rural areas. Efforts to get health care professionals in areas where shortages loom must continue in order to increase access to care for all patients.”
# # #
OK,  here's my read on this.

1)   The AMA totally missed the point that health of Americans depends on many factors other than billable "care" time in hospitals.   The AMA is very concerned that doctors, not nurses, should direct the in-hospital clinical medical and surgical "care" part of this picture.     The IOM never said otherwise.

2)  The AMA did NOT suggest that doctors, instead of nurses, should be the primary professionals who make house-visits, and who see how patients live and what they are up against as human beings besides their specific disease-category.   

3)  The AMA focused a great deal of attention on how many hours of education and clinical exposure doctors have, compared to nurses.  They did not mention that, unlike nursing education, a typical doctor's education included zero hours, until very recently, on key determinants of health, such as "nutrition", "exercise", the role of poverty in making "compliance" with doctor's orders complex or impossible, the role of social support in maintenance of healthy behaviors and avoidance of hospital visits in the first place, etc.  Doctors' education certainly didn't even mention topics such as the demonstrated value of "therapeutic touch."
Let's be clear about this.  The term "health care", as bandied about in the last year in policy circles, generally translates to "insurance coverage" and has to do with the flow of money, not health.  The majority of factors that can be controlled outside a hospital by a person who doesn't want to become a "patient" -- the main things people do to take CARE of their HEALTH -- are not even part of "health care" as the term is used by the AMA.  Pointedly, the huge "education" that doctors receive, which they believe makes them qualified to lead the overall process, doesn't include ANY of the factors that determine the health of people and PREVENT them from becoming "patients".

So, yes, medical doctors ARE the most qualified by far to make moment-by-moment decisions regarding acute care in hospital settings and the ten percent, or so, of Americans' health which is determined that way.  Sadly, the reality is that the doctors are vastly over-booked and don't have the luxury to even be present on a moment-by-moment basis, for the most part.  Even in those settings, the doctors are not rushing to be the ones giving medications, despite their "vastly greater education than nurses."  The doctors only order the meds; they don't deliver them, or deal with all of the problems patients have with taking the drugs as ordered.  The nurses, not the doctors, are on the floor and are the patient's first line of defense to notice adverse events, adverse effects of drugs, clinically important changes in the patient's condition, etc.  The doctors only come around "on rounds" at large intervals.  Nurses need to make the moment-by-moment calls on what to do when a crisis occurs, until the doctors can be located.

Recent significant improvements in the safety of surgery resulted from implementing a procedure that FORCED surgeons to stop what they were doing and LISTEN to what nurses had to say about it for at least 30 seconds.

The fact that this made a huge difference in patient safety and avoidance of things like wrong-side surgery demonstrates that doctors are not, if left to their own judgment, good at taking input from nurses.

Also, the fact that this made a difference says that all of those huge numbers of hours of education doctors had were apparently NOT effective at getting them into a frame of mind where they could listen to nurses.

Very few of those hours, if any, were devoted to topics such as "How to run an effective meeting".  Many studies and much anecdotal evidence support the idea that doctors do not, in fact, seem to be very good at actually listening to what patients are saying.  The average time a patient can talk before the doctor interrupts and typically truncates the conversation is 18 seconds (data quoted by Groopman).  For that matter, studies also show that patients typically leave the clinical setting with only a very vague idea of what it is they are supposed to be doing and why, so it appears doctors are also not well trained in how to communicate with, well, you know, lesser beings, mere mortals, "patients".

So, I'm with the IOM on this one.  I'd rather have policy and health care review sessions led by trained nurses than by doctors.  Leading does NOT mean dictating the outcome -- it means doing a good job of asking and listening and getting all of the experts in the room, from doctors to nutritionists to social workers to nurses, providing good feedback about the areas in which they are the experts.  If doctors act as the meeting "leaders" for such meetings, our experience to date makes it seem very unlikely that the viewpoints of the other experts (outside medicine and surgery) will be heard at all, let alone taken seriously.

The AMA does not recognize that what doctors do is less than ten percent of what matters in the care of the health of people.  That fact alone makes their members unwise choices to "lead" discussions about keeping people from becoming patients.  Most doctors never visit patients' homes.  Again, their opinion of what goes on at home, for all their education, seems less based on actual observation than the nursing profession's experience would be.

In reality, it is not a "patient-centered" electronic record of hospital-based events which is missing today, for all the money flowing that direction from its immediate beneficiaries -- the insurance companies.  What is needed more is a "person and family and community centered" record of events BETWEEN visits, OUTSIDE the clinic or hospital.

We need less in the way of "decision support" for doctors in ordering drugs, and way more "decision support" for people that helps them avoid needing any drugs in the first place.

This is basically the point of the Institute of Medicine.  Doctors, for all their very specialized training, have obtained enormously good DEPTH of perception at the price of an ever-decreasing and narrow WIDTH of perception and field of view.

They have become, in a way, "idiot savants" -- knowing a vast amount about a tiny area, at the expense of being almost totally ignorant of anything outside their tiny specialization area.   One example -- the accident rate in civilian aviation (small planes) is tracked by profession, age group, etc.   The only group that has a worse accident rate than MD's is ... teenagers.    In my mind, this strongly suggests that doctors (a) are used to an environment where their mistakes are picked up and fixed by others, and (b) are very bad at knowing the limits of their own capabilities in an objective fashion.   (Other interpretations of the data? Comments?).   In aviation by the way, there is not a category for accidents "caused by weather" -- there are only accidents caused by the "pilot's decision to continue into conditions beyond their skill and experience."  Fog does not cause accidents. The existence of mountain tops does not cause accidents.  A decision by a pilot to fly really low in fog MIGHT cause an accident.

Nurses certainly know less about medical niche specialization areas,  but have a much broader view of human beings and the nature of the factors that go into health.

Doctors in the American system really have little choice -- by the time they are through medical school their debt load is so high that they are under enormous pressure to keep on specializing into something that might someday get them back out of debt.    They become fantastic, say, at removing prostates without damaging nerves -- at the expense of pretty much everything else, including, say, triple billing the Navy for plane tickets.  It's sad.

I agree entirely with the IOM -- nurses are a far better choice to lead the charge in designing the future of how Americans care for their own health,   despite what doctors know about "health care."

A good nurse, by the way, is NOT "simply a poor doctor-extender".   Nurses are professional at many things that doctors are clueless about.  Nurses are not "poor doctors" -- they are, one hopes, great NURSES.  The fact that doctors think of nurses as "doctor extenders" reveals how little doctors know about what goes on in hospitals, etc.

Here's one last example:  Jerome Groopman, M.D., wrote an excellent book titled "How Doctors Think".    He discusses the various ways doctors make cognitive mistakes in diagnosis, and how the medical record and handoff system propagates and strengthens that stereotyping and error.  He talks at length about things patients might do, in talking to doctors, to catch such errors and reverse or mitigate them.

One might think such an open-minded physician would have a lot to say about the role nurses could have, and do have, in surgical "time outs" and in catching doctors' mistakes.    One would be wrong -- the book doesn't even have the word "Nurse" in the index.

One pictures the New Yorker's famous cover of the USA, with Manhattan being most of the country, New Jersey being another smidge, and 'everything else' being one more almost invisible smidge.

I recall one comment a nurse made --  "I've worked here 17 years, and every day I say 'Good morning, Doctor X,' and he mutters something.   He's never learned my name."

That, to me, says worlds.

Saturday, November 27, 2010

On non-verbal verbal programs, and nursing simulators

Continuing the discussion,  let's look at the ultimate non-verbal learning skill -- language skills.

There are two competing models here of how humans operate, and they have very different implications. This is the MBA / Zen training argument from the last 2 posts carried to extremes.

One model (or myth) is that humans are really just sort of computing machines.  You put "facts" and "rules" into the mental hopper, and, if they don't fall out again,  you're done.  Stuff has been learned, people can pass tests,  success!

I've watched this model fail repeatedly with foreign language instruction.   Well meaning educators teach vocabulary and grammar rules to students,  who stuff it into their short-term mental hoppers,  and can regurgitate it on exams.   Here's the funny thing though -- if you take one of those students of, say, French, and put them in Paris, you'll find that they are unable to communicate in French. 

Why is that?  Well, it turns out that the human brain CANNOT do on-the-fly algebra and compute, using rules of grammar,  what needs to be said in the time available to say it, even if the student "knows" the rules of grammar 100%.  
 
This kind of "knowing" -- having stuff stored in stacks and heaps in the memorization hopper -- does not automatically equate to or turn into ability to USE the information.

In computer terms -- we cannot use knowledge in real time by running what's called an "interpreter" program on it.  It simply takes too long.  Human brains are pretty lousy at algebra, and even when they do it well, they do it very slowly.   Like real-world computers that need speed, we can't operate from raw "code" or "programs", but need instead to compile the rules into a totally different pre-processed, pre-loaded shape for "the program" to be of any real-time value.
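As a rough illustration of that analogy -- and only an illustration, with a toy grammar and made-up timings, not a claim about how brains actually store language -- compare re-deriving a sentence from rules on every use with recalling a phrase that was "compiled" once in advance:

```python
# Toy sketch of the interpreter-vs-compiled analogy. The grammar, the words,
# and the repetition counts are all invented purely for illustration.
import time

RULES = {
    "S":  ["NP", "VP"],          # sentence -> noun phrase + verb phrase
    "NP": ["Det", "Adj", "N"],   # noun phrase -> determiner + adjective + noun
    "VP": ["V", "NP"],           # verb phrase -> verb + noun phrase
}
WORDS = {"Det": "the", "Adj": "big", "N": "ball", "V": "sees"}

def interpret(symbol):
    """'Classroom' speech: expand the grammar rules on the fly, every single time."""
    if symbol in WORDS:
        return WORDS[symbol]
    return " ".join(interpret(child) for child in RULES[symbol])

compiled_phrase = interpret("S")     # 'Berlitz' speech: rehearsed once, recalled whole

t0 = time.perf_counter()
for _ in range(100_000):
    interpret("S")                   # recompute the sentence from the rules each time
t1 = time.perf_counter()
for _ in range(100_000):
    _ = compiled_phrase              # plain recall of the pre-built phrase
t2 = time.perf_counter()

print(f"rule interpretation: {t1 - t0:.3f}s   recall: {t2 - t1:.3f}s")
```

The absolute numbers mean nothing; the point is only that the recall path does no computation at all, which is the shape real-time speech needs.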

In other words, we need practice, and very specific practice -- we need to develop context-sensitive recall of information and preferably entire mindsets and phrases or sentences that work in that situation, and maybe, if we're writing a paper,  AFTER we recall it, we can use the rules of grammar as a post-hoc check on whether what we recalled rapidly (without computation) agrees with what we get when we re-compute what it "should be."

That, of course, is the whole basis for a totally different approach to language acquisition, namely the Berlitz method,    which manages to teach people to speak the same way children learn -- amazingly, totally devoid of any conscious or explicit awareness of rules of grammar.  

So we have a paradox in our school systems, in that first our children learn to speak English, completely without the aid of any rules of grammar at all,  and then, later,  with considerable effort and sometimes marginal success,   we attempt to teach them "rules of grammar."

Frankly, I would rate as a total failure any school that successfully taught my children "rules of grammar" and "vocabulary" and produced as output children who were unable to speak.   I would rate as a success any school that resulted in my children being able to speak a language fluently, even if they had no idea what "rules of grammar" were implicit in what they were saying, and didn't know a gerund from a noun from a pluperfect subjunctive.

We all learn language DESPITE the rules, not by using the rules.  We all know that it is correct to say "a big, red ball" and it is just plain WRONG to say "a red, big ball",  even though we'd be hard pressed to say WHY one makes sense and the other is just eerily wrong.

What I hear the Japanese saying regarding an MBA education is that stuffing facts and theories and logical models and other "MBA curriculum" type stuff into students' brains does not equate to making them "better managers".    Even if the facts and rules are "correct" (and some are questionable),  there is essentially no benefit to having the rules made explicit and articulated in words.

Any company would prefer to have a manager who did the right thing, but couldn't explain WHY it was the right thing, to a manager who could tell you later what the right thing would have been to do, but didn't do it and didn't realize that was the "applicable rule" when the situation went by.

The same is true of nursing education.  I'd rather have a nurse who did the right thing, but couldn't explain why, than one who did the wrong thing, but later could tell you exactly why it was wrong, based on "Bennigton's Model of the Seven Gotchas", or God knows what.

There is an implicit assumption here that, if only the schools can cram symbolic knowledge into the brain's hopper and confirm it can be replayed on a test,  then "learning" has taken place.    Of course,  educators know perfectly well that, without "practice", if you put the nurse into a practical situation and the patient goes into cardiac arrest,   the odds are essentially zero that this symbolic form of knowledge will come to mind, spring into action, and guide what the nurse does next.

Yet, this always seems to come as a surprise.

So, now comes onto the stage the "virtual reality simulator", where it is possible to put nursing students into a realistic scenario,  where certain behaviors would be correct, and let them ATTEMPT to compute, in real time, which of the zillion facts they learned in school might be relevant, while the "patient" expires.   After they have failed in that manner,   we can all go into the other room and "debrief" -- i.e.,   with lots of time and no pressure we can have the luxury of computing ("recalling") what the RIGHT thing to do WOULD HAVE BEEN in that situation.      According to normal practice in using simulation in nursing,   at this point we are done.

Astounding.   The nursing student has practiced slow recall, and discovered a fact (we could have told them in advance) that their symbolic knowledge store doesn't actually work in practice.   Then,  we can post-compute what should have occurred there, and the student can IMAGINE what it WOULD HAVE BEEN LIKE if they had done the correct thing.

Here's what's crazy -- in pilot training, the next thing to occur would be to REPEAT that very same simulated drill,   in which case maybe 50% of the students would "get it right", despite having just gone over it in debrief.    Then we REPEAT AGAIN the very same drill, and maybe 90% of the students "get it right".    Then we repeat again and again, until 100% of the students have "mastered" the situation->response linkage.     Then, we wait a few days and do this again, and discover that many of these linkages have evaporated, and performance is back to 70% correct.  So we drill again until it gets back to 100%.   And repeat after another few days.  Etc.   After a long enough time, with sufficient repetitions at intervals,   the linkage will become permanent and the reflex will become automatic, a new reflex.  
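Here is a toy sketch of that drill-and-decay cycle.  All the numbers -- the starting score, the gain per repetition, the decay between sessions -- are invented assumptions for illustration; the shape of the curve, not the values, is the point:

```python
# Toy model of the repeat/decay cycle above: drilling the identical scenario
# lifts performance toward 100%, a few days off erodes it, and each extra
# spaced session fades a little less, until the response is effectively permanent.
# Every number here is an illustrative assumption, not data.

def drill(skill, repetitions=4, gain=0.5):
    """Repeat the same scenario; each pass closes part of the remaining gap."""
    for _ in range(repetitions):
        skill += gain * (1.0 - skill)
    return skill

skill = 0.5                              # roughly half get it right on the first repeat
for session in range(1, 7):
    skill = drill(skill)
    daily_decay = 0.10 / session         # consolidation: later sessions fade more slowly
    after_break = skill * (1 - daily_decay) ** 3
    print(f"session {session}: drilled to {skill:.0%}, a few days later {after_break:.0%}")
    skill = after_break
```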

But be clear about two things.   One,  the reflex occurs first, and the mental confirmation ("grammar") that it is correct occurs afterwards, not vice versa.   This is a different kind of "learning."
Second,  this kind of learning comes from actually DOING the desired response when faced with the triggering situation,  which requires MULTIPLE sessions of EXACTLY THE SAME SCENARIO.

So, a student pilot, for example,  will take up a plane and spend hours trying, over and over, to execute a left 90 degree turn without losing altitude,  or a particular type of landing.

Because real-life simulators in nursing schools are so expensive,   there is neither time nor inclination for the students to do the "same drill" twice, and it is NOT done.  It's done once, there is a "debrief" which is still reported by the students as "very valuable" in terms of "learning",  and they stop there.

I have to conclude that nursing schools are misunderstanding the role of simulation in pilot training, if they think they are doing "the same thing."

Furthermore, I think it is just critical that VIRTUAL REALITY simulators be available to nursing students so that they CAN afford to go back to that situation and PRACTICE "doing it correctly", with only themselves or other students checking that they in fact did finally "get it right".

In fact, the order should be reversed.  FIRST, students should watch a video of what it is they are about to learn.    THEN, students should watch themselves DOING the correct thing in virtual reality, where their behavior has been fully "scripted" and all they have to do is go along for the ride.    THEN the students should practice doing the right thing, scoring themselves with some sort of checklist after watching the instant replay of their own actions.   THEN they should put this situation into their "quiz me" flash-card file, which the simulator now has permission to throw at them unexpectedly from time to time.   And THEN, when they are scoring correctly 95 to 100% of the time,  they should go do this on the REAL manikins in the REAL "simulation lab" -- the one that costs $1 million to put in place and is only open M-F, 9-5.
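As a sketch of what that "quiz me" step might look like in software -- the scenario names, window size, and 95% threshold are mine, purely hypothetical, not any existing simulator's feature -- the scheduler only has to pick drilled scenarios at random and graduate each one once its recent scores stay high enough:

```python
# Hypothetical sketch of a "quiz me" scheduler for drilled scenarios.
# Scenario names, the rolling window, and the threshold are invented examples.
import random
from collections import defaultdict, deque

SCENARIOS = ["cardiac arrest", "anaphylaxis", "post-op bleed"]
recent_scores = defaultdict(lambda: deque(maxlen=10))   # rolling score window per scenario

def record(scenario, correct):
    """Log one surprise drill as right (1) or wrong (0)."""
    recent_scores[scenario].append(1 if correct else 0)

def mastered(scenario, threshold=0.95, min_attempts=10):
    scores = recent_scores[scenario]
    return len(scores) >= min_attempts and sum(scores) / len(scores) >= threshold

def next_quiz():
    """Throw an unexpected drill at the student, drawn from unmastered scenarios."""
    pending = [s for s in SCENARIOS if not mastered(s)]
    return random.choice(pending) if pending else None   # None -> ready for the real lab

# Example: record one drill result, then ask what gets sprung on the student next.
record("cardiac arrest", correct=True)
print(next_quiz())
```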

Frankly,  the virtual practice is probably worth more in that sequence than the "real simulator" confirmation that they got it right.  If you had to pick only one or the other, I'd go with the virtual-simulator repeated, self-assessed drill over a one-pass-and-debrief with a "real" simulator.

Friday, November 26, 2010

On non-verbal Higher Education programs

Can higher-education make money with non-verbal programs?  

In my last post, I go on about problems with the analytical model for science, math, and MBA programs.    I quoted this: “Students read about the philosophy of Zen Buddhism, among many other things, and learn about how leading Japanese companies have innovated through sharing of ‘tacit knowledge’ — knowledge that is best communicated through long-term, close, personal relationships,” she said. “This is the polar opposite of the Wall Street view of things.” 

There are two questions tangled here, and let me comment on them separately.   I do believe that there is important knowledge that is "tacit" and simply CANNOT,  regardless of effort, be put into the symbolic strings we call "words",   transmitted via words to some other person, and then transformed, within that person, back into a changed perspective and new, improved insight and intuition.

Much of what is involved in being a "nurse" appears to me to be that kind of knowledge.   The whole existence of Zen koans and the paradoxical method of trying to convey wisdom without words is based on the fact that "much of what counts cannot be counted."  I do think that values and philosophy are transmitted more by contagion than by going through the value->concept->words->teach->learn->concept->value pathway for sharing.  That pathway has many problems, and many important real-world things do not FIT down that pipeline, regardless of how clever you are at packaging.  First, such knowledge is really hard to package.  Second, it's hard to convey in words.  And third, there is a huge distance in the receiver's brain between words learned in class and an internalized value system in place where it affects actions correctly.

It is not enough to make it into the left/right brain, whichever it is -- it has to make it into the heart, which is a whole different activity.  It has to be put directly where it influences and affects actions and perceptions,  which is NOT a "verbal" activity nor an "analytical" ability. We have to deal with the FACT that human beings are not computers,   nor should they be, and most of what influences our behavior is not a rapid back-room computer-like rational computation that yields a result we then act on.

If there IS something MBA schools could be doing that would add value, it would probably be either intensive internships,  or intensive use of virtual reality to put students "into" real world or real-world-like situations where this non-verbal transfer of culture and other tacit wisdom could occur.

Of course, as my last post mentioned,  people tend to go where they can measure things.  It would be much harder for schools to evaluate whether a person had in fact acquired such tacit wisdom than it is to evaluate factual knowledge or analytical skills using facts on "standardized tests".    It is even harder to evaluate what the role of the teaching faculty at such schools is, whether they are successful or not, and how to remunerate them and give them useful feedback on improving whatever it is they are "doing" that involves teaching-without-teaching.

Still,   that is precisely where the real world need is.   How can we use a 1 or 2 year break from "employment" to somehow give students situations far more vivid and intense and dense with wisdom than real-life, so that they emerge with the same qualities that they would have acquired, say, in 15 years on the job?

And, in my mind,  how can experiences in the densely social, immersive world of virtual reality provide precisely that kind of human-to-human contagion transfer of norms, values, and culture that is only found intermittently in real life?   Can we "tune" this so that the "good stuff" is transferred much faster than in "real life" and the "bad stuff" is not transferred at all?

To me, that's the challenge for educators using virtual reality.  Not to figure out ways to convey symbolic knowledge across time and space, though VR does solve that problem nicely, but to figure out ways to transfer non-symbolic knowledge between humans,  assisted by and mediated by a very active synthetic computer-boosted contextual environment.

MBAs get no respect in Japan

According to the NY Times today,  the Japanese are abandoning Western MBA programs and creating new MBA schools of their own, with a unique Japanese flavor.  Here are a few snippets:

“They believe in business know-how gained on the job, not in the classrooms,” said T.W. Kang, a Tokyo-based businessman who holds an M.B.A. from Harvard Business School. “They’d say you can’t learn it there. You have to learn it with your feet.”
Hitotsubashi’s dean, Christina Ahmadjian, said that students at her school are required to take a course in “knowledge creation.” “Students read about the philosophy of Zen Buddhism, among many other things, and learn about how leading Japanese companies have innovated through sharing of ‘tacit knowledge’ — knowledge that is best communicated through long-term, close, personal relationships,” she said. “This is the polar opposite of the Wall Street view of things.”
Mr. Kang, who has served on the boards of both Japanese and American companies, said the majority of Japanese managers at large corporations viewed business knowledge learned at school with suspicion and skepticism, bordering on disdain. 

In fact, Reiji Shibata, chief executive of Indigo Blue, a human resources consulting firm in Tokyo and formerly the chief executive of a number of Japanese firms, said Japanese M.B.A. holders generally do fine in the management consulting field, but not necessarily in the general business context. “They have a tendency to overemphasize logic,” he said. “Their approach at times leads to clashes and dead ends and deals don’t go through as a result. This is especially so when you are working with different types of customers and partners.”
So, I got one of these MBA degrees a few decades ago, and in fact went back to my alma mater, the Johnson Graduate School of Management (JGSM) at Cornell University,  as a staff member for a few years, and also was a lecturer for two MBA courses.     I've pondered the worth of what I learned there ever since -- not in terms of what it did for my salary, but in terms of what use the lessons were in actual life.

At the time, we did a survey of whether our European alumni thought we should open a branch in Europe.   A common response was something like "Frankly, we don't think Americans have much to teach of value to Europeans."  I think the faculty at JGSM were superb, but the Europeans were underwhelmed with the content.

Of course, the content emphasized a highly rational, analytic, quantitative, "logical" approach to problem definition and solving -- the logic mentioned above in that wonderful quote: "They have a tendency to overemphasize logic."

I think that quote hits the nail on the head and exactly captures one problem with all of Western Science and the "scientific method",   a methodology I was raised within and learned very well, but every year find increasingly inadequate and inappropriate for dealing with the real world problems our planet faces.

I no longer believe that the real world can be hammered down into a flat world that can be reduced to mathematical equations and numbers that still mean anything.   I find the assertion that quantitative knowledge is the only way of knowing to be smug and contrary to evidence.  In fact, I no longer think that quantitative analysis is even a good way of gaining insight or making decisions.   In that regard, I am loudly opposed to the trend in the US to put more and more emphasis on "math and science" at the expense of other subjects.

There's a lot of loose talk about what we need today being "critical thinking" and "innovation",  followed by a worshiping glance at "math and science" as the obvious way to increase both of those.   From what I've seen,  math and Western science, as taught today,  interfere with critical thinking and pretty well destroy innovation.   Many PhD's, the greatest product of such training, seem to be idiot savants,  specialists in such a small area that they are incapable of meaningful social discourse.

As to teaching "logic", even -- the supposed end goal of all of that --  I had an entire undergraduate program in Physics, with emphasis on math, which never once got into the nature of "logic".  In fact, the only training in logic I got came from going to the library on my own and checking out a book on logic.   There was zero training in recognizing, say,  the 20 most common logical fallacies in reasoning and their names,   let alone a culture that would recognize a type of flaw by name if you tried to discuss it.   There was zero training in reading, say, a news account, and dissecting the logic and locating the structure of reasoning, the assumptions,  the weak spots, and the blatant errors.

It's not clear to me why people think learning algebra, say,  will improve our thinking,   or -- if improving thinking is the goal -- why there aren't much faster, more direct, and more explicit ways of doing it than teaching algebra and calculus and sort of hoping that those equations, in one part of the brain, will somehow cross over to illuminate thinking in the other parts of the brain.

Here's one example of clearly flawed logic:     "All terrorists drink water.  Therefore, to get rid of terrorism, we should ban people who drink water from entering our country."

Take any group of people who are not trained in math and science, give them that statement, and ask them to come up with a consensus statement about whether it's true, and if not, what is wrong with it.  What is striking is how inarticulate the conversation will be.   People flail about verbally,  trying to put into words what is wrong and why we should not believe that conclusion.     What is even more striking is that a group of scientists will be no less clumsy at trying to reach a consensus statement of exactly what is wrong.
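For what it's worth, here is one way -- my notation, not anything the group would be handed -- to pin the flaw down on paper:

```latex
% One way to articulate the flaw: the premise fixes P(water | terrorist),
% the policy quietly assumes something about P(terrorist | water),
% and Bayes' rule shows the second is no larger than the base rate.
\begin{align*}
\text{Premise:}\quad & \forall x\,\bigl(\mathrm{Terrorist}(x)\rightarrow\mathrm{DrinksWater}(x)\bigr)
  \quad\text{i.e.}\quad P(W \mid T) = 1 \\
\text{What the ban assumes:}\quad & P(T \mid W) \text{ is meaningfully large} \\
\text{Bayes' rule:}\quad & P(T \mid W) \;=\; \frac{P(W \mid T)\,P(T)}{P(W)} \;\approx\; P(T)
  \quad\text{since } P(W)\approx 1
\end{align*}
```

In plain words: since essentially everyone drinks water, knowing that someone drinks water tells you nothing about whether they are a terrorist; the premise has been silently swapped for its converse.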

We are clearly NOT training our people in how to have a conversation about logical analysis.  We are not only not good at it -- we are downright miserable at working with each other to dissect even simple logical thinking, let alone more complex arguments.   Ten years of math and science do not seem to have accomplished much at all in that regard.

So,  even working in the plane of logic,   we are not very good, and don't know how to take a task and have more people accomplish it more easily than one person could.   We have no shared language for sharing insights in a quick fashion and knowing exactly what we mean by it.

But life does not please us by remaining in the flat plane where logic prevails, or should. Life is quite comfortable extending out into curved space, or disconnected space,  where logical rules are unable to go.   

The largest single category of this that I see everywhere around us is the misuse of statistical reasoning in situations where there is a feedback loop.   Almost the entire array of statistical techniques in use today is based on work by R. A. Fisher,  who was working out how to treat plants to get them to grow better.  The whole array is based on something called the GLM -- the General Linear Model --  all of which rests in turn on the key and core requirement that the situation being modeled has something over here which is a "cause" and something over there which is an "effect", and that interactions go in exactly one direction, from cause to effect.      None of the common statistical tests are even valid, regardless of how rigorously the math is done,  if the "effect" can turn around and alter the "cause" in a loop,   mushing out the idea of a "cause" and an "effect".
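A minimal numerical sketch of that point (toy numbers and invented effect sizes, nothing more): wire an "effect" back into its "cause" and watch the standard one-direction regression land nowhere near the true one-way effect.

```python
# Toy sketch: when the "effect" feeds back into the "cause", the standard
# one-directional linear fit no longer recovers the true one-way effect.
# All coefficients and sample sizes here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_effect = 0.5            # how strongly x genuinely pushes y
feedback = 0.8               # how strongly y pushes back on x

u = rng.normal(size=n)       # independent shocks to x
v = rng.normal(size=n)       # independent shocks to y

# Solve the simultaneous pair  x = feedback*y + u  and  y = true_effect*x + v.
denom = 1 - true_effect * feedback
x = (u + feedback * v) / denom
y = (true_effect * u + v) / denom

# Naive "cause -> effect" fit of y on x, as if there were no loop at all.
naive_slope = np.polyfit(x, y, 1)[0]
print(f"true one-way effect: {true_effect}, naive fitted slope: {naive_slope:.2f}")
# Prints a slope near 0.8 rather than 0.5: the loop quietly violates the
# one-direction assumption the whole general-linear-model toolkit rests on.
```

The same mushing-together shows up in the chicken-and-egg and fertilizer examples below: once the loop is closed, "cause" and "effect" are no longer separate things for the model to estimate.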

All of our basic statistical training and logical reasoning comes to a crashing halt when we find that both are true -- chickens lead to eggs, and eggs lead to chickens.    When asked "Which came first?", we shake our heads and try to change the subject.

Or even go back to plants and fertilizer "treatments".   Yes,  treatment with the right fertilizer may cause, say, these soybeans to produce better yields.  So that's good, right?   Well, what if everyone does it, and better yields lead to collapsing farm prices, which lead to farmers going bankrupt, as well as to fertilizer manufacturers then going bankrupt, so that there is no more fertilizer?   So now,  is more fertilizer good?

The problem illustrated is that, yes, IF you remove everything that doesn't fit easy logic, THEN easy logic works on the problem.   It's just that real life doesn't give us this luxury.   The key assumptions and requirements of almost every analytical technique of science or MBA's are generally NOT MET in the real world.  It is only by carefully not looking very hard and not paying attention that we can even persuade ourselves these techniques are applicable.   Instead, we just run the numbers and take action based on them and then are surprised when what we expected to occur does not.

Instead of hedge funds getting wildly rich,  the economy collapses around them.  The mathematical model used by the "quants" was correct .... as far as it went.  It just didn't go far enough to consider the case of what if EVERYONE did the same thing at the same time.    The problem is, NONE of the analytical techniques go "far enough" to encompass all relevant factors.

They are, then, not truly methods of knowing what will occur -- they are only methods of reaching a consensus decision and silencing dissent so that action can be taken.   IF everyone agrees with this very logical model,  THEN we can conclude (well, some of us can), IF we run our sums correctly, what the right course of action should be.   And, please, anyone who disagrees with the model itself should be ejected from the room or fired,  for getting in the way of our analysis.

The tyranny of "analytical rigour" is everywhere around us,  distorting reality by causing a massive game of not seeing, denial,  and pretending that things which are clearly true are not true, because we cannot put them into numbers.

You think this is a minor effect?  It is not.    Case in point -- the recent conference I went to on "Self-determination of health behaviors."   Accepted fact:  most of the costs of health care in the US today are controllable by "lifestyle changes" -- i.e., exercise, nutrition, etc.     Fact: these costs are more or less killing the economy and business, all by themselves.   So this is important.   Fact: all the researchers at the conference, bar none,  mentioned during their presentations the curious thing they had observed, which was that the most successful strategies for altering behavior involved other people, not just the person they were trying to change.  For example,  if you paid $10 to a woman's CHILDREN for each pound she lost,  it was far more effective than if you paid HER.

So, I asked,  if everyone agreed that group interactions were the most effective strategy, why were NONE of them mentioned in the published papers by these presenters?   Oh, I was assured, these were "hard to measure" because they involved interactions (feedback) and since they didn't know how to measure them, they LEFT THEM OUT, so they wouldn't look bad.

These are trained research scientists,  uniformly holding PhD's in at least one area,  and finding nothing at all wrong with presenting an incorrect model of the world (but one they could MEASURE) instead of presenting what they had found,  including parts they could NOT measure.

This phenomenon has dominated our culture for the last 50 years,  focusing huge attention on the simple problems that COULD be measured easily and WERE amenable to analysis -- which we named the "HARD sciences" --  and diverting attention and money from the hard social problems all around us, which had parts we could not measure,  and which were then defined as "SOFT sciences", or NOT science at all, in a dismissive voice -- as if they were not only not "science" but not even something you should have in the house if you were having company over.   These "soft areas" were the Western equivalents of "unmentionables".  Rather than blaming the analytic tools, and indirectly the godhood of the users of those tools, for being obviously incapable of tackling these problems,  the problems were "explained away" as being non-scientific or wooly-headed or soft, and in any case irrelevant.

Take that same reasoning into business, and you have analytical MBA's,  confidently striding out to conquer the world with their new found analytical tools, and discovering, to their shock apparently, that the tools didn't actually work on REAL problems, only on pretend class-room exercises.

Multiply that by a hundred, and you have the entire class of people called "economists" -- who have been snidely described as people who see something happen in practice and wonder if it could happen in theory.  These are the people we look to for guidance in running the country, the economy, the Federal Reserve Board, etc.  They definitely fit the bill of "often wrong, but never in doubt."  They have zero learning curve, and zero humility, because their first rule of action is to dismiss anything that challenges their core assumption that their techniques have any validity at all in the real world.

When things "work" they claim credit, and when things don't work, they blame the result on some sort of enemy action, so, yeah, by their measurements, they have a perfect track record.  To the rest of us, not hampered by this restriction of what we are allowed to look at, their track record looks closer to 100% wrong. They can't grasp our attitude,   which must be based on "soft" things, not "real" numbers, like, say, the Gross Domestic Product.

So, no, I don't think we need "more of the same" kind of education in math and science and economics and analytical thinking -- the kind that comes with a self-congratulatory, smug culture that is perfectly willing to leave out of the discussion all those messy real-life factors that don't fit the model.

I DO think we need more critical thinking skills.   However, I also think that one of the first things people who think critically will realize is that math and science worship has become a sort of religion,  one that keeps trying to turn our attention away from REAL social problems,  on which it has weak or no muscles,   and toward tiny special-case problems,  on which it has some demonstrable power -- in the short run.

Again, I say short run.  Consider the problems of "We need better energy sources" and "We need better ways to get clean water" -- both unquestioned assumptions of the engineering world today, and problems our universities are busy trying to train people to tackle.

It turns out that both of these problems are bogus problems, and the LAST thing our society and planet needs is to have either of them "solved."    In the real world,   the lack of unlimited energy is the only thing stopping a massive corporate rape and pillage of the planet that would end only when the biosphere completely collapsed and life on earth terminated.    It is NOT a good thing to be working on FIRST.  It is not a "value-free" problem that is fine to work on regardless of how the results will be used, any more than coming up with a bomb 1000 times more powerful than a hydrogen bomb that could fit in a suitcase would be.    I have a problem with the way the "problem" is defined, and with the implications of same.

Or water.  WHY do we want more water?  Water is used primarily, by humans and industry and other living things, to flush away toxins.  Cool.  Sounds good. Let's get more water!

Uh... wait.  Flush away the toxins to where, exactly?  To the local aquifer, or to the ocean.  Flushing to the local aquifer is bad for living things, so say it flushes to the ocean.  Then what?  Then, to get "clean water", we separate the toxins from the clean water and send the clean water upstream.  Fine.  And what do we do with the toxins?

Oh, those. ... um.   Dump them back in the ocean and kill off all life in the ocean, which then removes the source of food and oxygen for the rest of us, which then kills us off?    Stack the toxins in huge reservoirs of toxic material on the very edges of the oceans, where, inevitably, there will be an event that releases them back into the ocean?      Stack the toxins on land, where they will ultimately leach into the drinking supply?

The only reason these so-called "scientific problems" are even acceptable to work on is that the full picture, the full implications of working on them -- as with working on biological warfare -- are held off in a world of denial and delusion: that, somehow, is not our concern, that is not our problem, and besides,  it is not "numeric" or "quantitative", so it shouldn't be included.

When "Science" was something done in small labs in universities, and didn't itself alter the planet we live on,  that fiction of it being "separate" was fine.   Now that Science is backed by corporate MONEY,  and is done on a global scale sufficient to eliminate entire ecosystems,   those factors that didn't fit in the picture DO come back to haunt us.

Or,  SHOULD come back to haunt us.    We don't need more energy.  We don't need more clean water.  We don't need more economists or MBAs.   We don't need more scientists, or more education in math and science in our school system, as currently taught.  Science as taught is the art of ignoring everything outside your mental model, so you can get it down to something small enough that you can work on it comfortably and make "progress" -- regardless of how damaging that "progress" turns out to be once you put back into the equation all the things you left out in order to get a "solution".

We need more education on what's WRONG with math and science and economics and MBAs, as used today in the real world.