One of my favorite bugaboos is the latest fad in hospitals, Computerized Physician Order Entry systems ("CPOE"). This kind of software, we are told, will let doctors enter their drug orders, or other orders, on-line into a handy computer terminal, perhaps a portable device like a high-end cell-phone, where the orders are then whisked off and executed smartly, without any of the problems due to illegibility and doctors' infamously poor handwriting. Further, we are told that many errors will be avoided by "decision support" that will help the doctors pick the latest and greatest drug to use, or detect that the doctor is considering a drug to which the patient is allergic, or which will interact badly with some other drug some other doctor has already prescribed for some other ailment. (And older people come in with multiple chronic ailments.) Finally, we are told, everyone wins because the doctor can be guided to select a generic instead of an expensive brand-name drug.
Well, sounds great! The reality is different, and the opinions about why the reality is different cover the spectrum. Many implementations crash and burn, and after a very expensive and ugly scene, the systems are "backed out" and removed. Some systems are installed and measures of outcomes, such as mortality, rise instead of fall. In general, getting staff to use the systems is an "uphill" struggle, which should say something about how much easier they are making staff's lives, at least in the short run.
Some administrators and vendors believe that "the problem" is "bad doctors" who are "contrary" and refuse to learn how to use a computer and need a swift kick in the pants or to be let go if they don't "get with the program." They see the solution as "bigger whip."
The assumption behind that view is that the software actually works.
And, as my readers will already suspect, I will define "works" as something that can only be specified if you have also specified the context in which this "working" occurs.
It is true, largely, but not entirely, that vendor CPOE software "works" in the sense that if you type in "27" and hit "enter" the system will record "27" somewhere, not "42". That much generally works, but it is remarkable how many cases can be found where even that much doesn't seem to have been tested by anyone before release of the product. Often some "bug" can be found in under 5 minutes of use. Articles in JAMA describe such systems, and generally note many inconvenient but "minor" issues which are often described as something that could "easily be fixed."
Nothing could be further from the truth, or more at odds with actual evidence. This mental model is entirely wrong.
These "small" imperfections are generally in a system that is running in dozens or more hospitals, and that have been running for at least 2-3 years. Yet, they remain unfixed.
Now, this should cause an alert observer to sit up and say, "Wait. Why aren't they fixed?"
The answer, and I base this on personal observation and 40 years in the IT field, is that these systems are largely un-fixable. Or, more precisely, the whole collection of software, vendor representatives, bug reports, bug-fix pricing, departmental prioritization, and hospital-wide prioritization means that 99% of such problems will never "make the cut". Ever.
There is a common flaw in priority scheduling for job shops: failing to account for the "age" of requests and ranking purely on some other factor. The result is a sludge of lower-ranked requests that just grows and grows but never makes it up to the "top ten" or "top three" that will be selected. "We must focus on priorities" is the mantra, which I have attacked before as completely flawed, and largely for this reason.
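To make the starvation concrete, here's a minimal toy sketch (all names and numbers invented, not any real vendor's queue): a fresh "urgent" request arrives every cycle, and with no aging term the small bug-fix never makes the cut, while even a modest age credit eventually pushes it to the top.

```python
import itertools

def run_queue(rounds, aging=0.0):
    """Toy job-shop scheduler. Each round one new 'urgent' request arrives
    and exactly one request is served. With aging=0 the ranking is purely
    static priority, so the small fix is starved forever; with aging > 0
    its effective priority grows each round it waits."""
    pending = {"small-fix": 1}    # request name -> static priority
    age = {"small-fix": 0}        # rounds each request has waited
    ids = itertools.count()
    for r in range(1, rounds + 1):
        name = f"urgent-{next(ids)}"
        pending[name] = 10        # every new arrival outranks the small fix
        age[name] = 0
        # serve the request with the highest *effective* priority
        top = max(pending, key=lambda n: pending[n] + aging * age[n])
        if top == "small-fix":
            return f"small-fix served in round {r}"
        del pending[top]
        for n in pending:
            age[n] += 1
    return f"small-fix still waiting after {rounds} rounds"

print(run_queue(100, aging=0.0))   # starved: still waiting after 100 rounds
print(run_queue(100, aging=0.5))   # an age credit finally gets it served
```

With the age credit, the oldest small request eventually outranks the endless stream of "urgent" arrivals; without it, the sludge just grows.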
So, yes, IF this bug fix were considered "top priority" by hospital administrators who pay the bill, or by the vendor, it could, in many but not all cases, be "easily fixed." The problem is that this "IF" will never happen. Never. Ever. So what we get is what we observe -- hundreds of inconvenient but maybe not show-stopper problems, just below the "I can't tolerate this pain" threshold, but well above the "I feel the pain" threshold.
A second effect of this social process in which software evolution is embedded is that it quickly becomes clear that
- there are a lot of such bugs, and
- no one seems to care about them, and
- it is pointless to complain because no one ever listens, so
- one should just shut up and figure out some work-around instead, while pretending to comply.
Part of what is so frustrating about it is that the system cannot seem to learn anything. It remains as stupid on day 20 as it was on day 1, making the same dumb comments, and not reading or responding at all to my tone, my body language, or the force with which I am smashing down the keys, all saying "Yes, I know that. Any idiot knows that. Why do you keep telling me that?" And, ultimately, the "decision-support" features are shut off because they seem designed to support some kind of decision making that is not the kind we use here, or need.
The rest of the system remains like a stone in the shoe or a key that sticks on a keyboard, a constant nuisance, and a constant reminder that one's own good judgment has a net weight of exactly zero in the big scheme of things. Some novice programmer in Pakistan has more power to design my own work flow than I do, despite obviously never having done this job.
But I need to make a crucial point here. The problem is not that the system is wrong - it is that the system stays wrong and is impervious to social feedback. It is non-adapting.
Well, in some cases, the vendor happily chimes in, the system could be "customized" to fit better -- at an enormous cost, which might address the "top priority" items and take 6 months to 2 years, by which time the needs will have changed but the work-order won't. After all that agony, the final result is no better a fit than the unchanged system, so the social system rapidly learns to shut up and stop asking for changes.
If the social system of the hospital is sufficiently adaptive, and the doctors are very clever, they can figure out work-arounds and make do with this permanent new pipe across the living-room at a height of 4 feet, which they now have to duck under, when they remember, every time they cross the room to answer the phone.
It doesn't have to be that way. It's that way because we put up with it being that way.
In that sense, it's a metaphor for social systems and governmental agencies and regulations in general -- someone, somewhere, decided to impose something that might have worked for them, although that isn't clear, but that clearly is more pain than benefit now, but that no one can do anything about and you just exhaust yourself trying so shut up and put up with it.
But that, grasshopper, is not the Toyota Way.
The Toyota Way is "That's a pain in the ass, let's fix it!"
So, how did Toyota manage to do this allegedly impossible task, of fixing the internal legacy stupidities that were encrusting everything and smothering innovation and efficiency?
Not by centrally planning some huge new system that would "fix everything."
It was done, to paraphrase the Institute of Medicine, by a billion one-dollar steps, not by one billion-dollar step.
In other words, they figured out how to let people make very small changes, gasp, on their own authority and judgment call, to their own workplace and work flow. Not expert executives, not hotshot consultants, but, you know, regular people on the front line, "employees" and "staff", secretaries and the guys who pick up the trash and move boxes.
And a process of massively-parallel continuous improvement, in very small steps, compounded over time, took the company where no executive plan and no consultant could.
So, the problem we need to focus on with CPOE systems, or with anything else like them, is how to enable and empower the people at the front, who actually have to endure and work with both the real world and "the system", to modify "the system" to be easier to use. The crowd knows things it doesn't realize it knows, and can do things, if empowered, that no central planning office would ever think of or could do.
A second amazing feature of the Toyota Way is that there is no "implementation resistance" hurdle. The person making the change is making it because they want to and they think it might work to make completing their task easier, and because they want to try it and see. No one needs a whip to "get them" to "buy into" the change.
Third, the changes are very small, individually. The amount of "disruption" any single change causes is roughly zero -- and that's if the change got put in backwards, which will be detected in a day or less and swapped around to frontwards. That is, "undo" is possible and not considered a "failure". And rather than "disruption", the change is, or should be, and can be, "anti-disruptive": it actually makes things flow more smoothly, not less smoothly, not only for the person making it but for everyone near them, in a social context in which employees care about the people near them, because they have to work with them and people have memories. And, gasp, they might even be asked afterwards whether it "works for them". And, gasp, they might even be asked before the change is made whether it might work for them -- or, if not, what factor has been forgotten that I can learn about right now: how you do your work, so I can take it into account from now on, and learn more about you at the same time, and about things I never realized you had to take into account or do.
In other words, the changes are connected to a parallel process of social development: deeper friendships and working relationships with everyone upstream and downstream from my own neighborhood, who are treated with increasing respect as I understand more about why they do what they do the way they do it, and who learn the same about me.
OK, let's bring this home. Can enterprise-scale software ever possibly have that sort of property that, if you don't like it, you can actually adjust it so it is less painful to use? Or, gasp, more helpful than the designer intended?
Yes. I've done it. More precisely, a team I led evolved some software over time under continual advice and suggestions from its users: couldn't this field be up here instead of over there, because we're always doing this other thing at the same time and that window covers this, and so on. The result was called "indispensable" by the users, who screamed if it was not available, but we were baffled by the higher-ups' questions of "what it did" and "what we called it." It had no name, and it was hard to describe exactly what it did, except that it had accumulated hundreds of things that were individually helpful in one place. It was a sort of Swiss-army-knife that dealt with issues no one had realized were issues, and that they couldn't have described easily in words if they had realized it. But it turned out we didn't need to describe it as a whole to build it, and we didn't need to figure out what the users needed, because they told us, slowly, as they realized they actually had some say and control in what it did.
It was a "stone soup" system that kept getting better with additions that the users themselves supplied, or suggested but were "trivial" to add or adjust in under 5 minutes apiece.
It ran below the budget radar because it never got a name as a "project" or got funding.
There's a lesson here.
An important lesson.
And we need to turn back to John Gall's wisdom to see what's going on.
First, solutions that work for large problems are not created out of whole cloth, nor do they spring full-grown from the head of Zeus. They start as small solutions to smaller problems, solutions that actually work, and evolve slowly from there, always keeping the property that they still work. The reasons are myriad, but they have to do with debriefing the social context for information we needed, in words and concepts we lacked -- a debriefing that could be done easily this way, even though it couldn't be done in words. The context is important. And the lack of ability to express this in words is important.
This is not a pre-planned solution, developed in India, to a social context they've never met, with very real people already populating the seats. This is a home-grown solution that grew to meet the exact needs of these exact people doing this exact work.
Well, two problems immediately arise. One, what's the business model here? And two, how general is this concept? Won't the resulting software be totally messy, fragmented, and impossible to maintain or debug or explain to new staff? How can we validate or re-validate that it does what it is supposed to, and only that, when it keeps changing? Isn't this irresponsible and bad practice? And how can we sell the resulting software to anyone else and make money, when it is so specific to this environment? Aren't we just institutionalizing bad legacy practices in code instead of analyzing them and fixing them?
We'll get to those. The key point is that the "wisdom" needed to guide the development of a working system is located in the social context down where the company meets reality, on the front lines. It is not embodied in words, often. Sometimes it defies description in words, even though people do it -- like trying to describe how you crack an egg without making a mess.
Much of the wisdom can only be revealed and surfaced by actually trying something to see if it works or not. Great judgment is not required -- only the judgment to keep the experiment small and to keep the "undo" button ready to push if we got it backwards.
This is similar to the successful strategy of Loeb of Loeb, Rhoades, or whichever brokerage that was. He described a strategy that was "wrong" 80% of the time and made lots of money. He'd pick a stock he thought should go up, set expectations around that, buy it, and watch it like a hawk. If it started going the wrong way, he'd bail instantly -- not "just wait until it comes back up before I sell it, so I don't need to admit that I made a mistake and recognize a loss." So the 20% winners stayed on the table for a long time, the 80% losers were cut off almost instantly, and the net result was an ever-improving portfolio of stocks, chosen not by the wisdom of perfect picking, but by the cybernetic wisdom of just keeping your eyes open and reacting to what is actually happening, not what you wish were happening. And small steps, so undo was possible.
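As a toy illustration of the arithmetic (all figures invented, purely for illustration), here's how being "wrong" 80% of the time can still come out well ahead, as long as the losers are cut small and the winners are left on the table:

```python
import random

def one_pick(win_prob=0.2, stop_loss=0.03, winner_gain=0.50):
    """One hypothetical stock pick. 80% go the wrong way and are cut
    almost instantly at a small stop-loss; the 20% that work are left
    on the table and ride to a large gain. All figures invented."""
    if random.random() < win_prob:
        return winner_gain       # let the winner run
    return -stop_loss            # bail instantly, take the small loss

random.seed(1)
results = [one_pick() for _ in range(10_000)]
# Expected value per pick: 0.2 * 50% - 0.8 * 3% = +7.6% on average,
# even though four out of five picks are "mistakes".
print(f"average return per pick: {sum(results) / len(results):+.1%}")
```

The edge comes entirely from the asymmetry between the capped loss and the uncapped gain, not from picking well.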
This is a remarkably powerful strategy.
It's the strategy evolution has used to get our bodies and society evolved this far, without, apparently, a central planning committee.
Massively parallel, experimental at the edges, alert, aware, not only no hesitation to admit "an error" but spring-loaded to admit an error very very rapidly. No massive brilliance or Nobel prize winning "quants" required on the staff.
This is the power that the current CPOE process -- and, for that matter, most policy development and governmental development -- defeats by trying to move in very large, centrally planned steps, frustrating the local wisdom where the concept meets reality so as to suppress it instead of enhancing, encouraging, and harvesting it.
This is the difference between successful marketing and product evolution based on real-time customer feedback, and trying to use a bigger whip and an ad campaign or discount incentives to sell, say, GM cars that people don't actually want. It is a way to "listen to the wisdom of the crowd," especially if you need the crowd to make things go.
OK, back to software. What might be done?
First, the whole budgeting process is in the way. Classic budget cycles are way too long and way too chunky, and they require detailed proposals of exactly what is to be done and how much it will cost, etc. Only two worlds are envisioned -- either the software is static, and simply being operated or "maintained" out of operating budgets, or there is some huge "project" attempting some huge change, out of "capital". There's nothing in between, where the sweet spot actually is. We need something that funds "maintenance plus a little" -- daily incremental improvement in software, responsive to customer feedback.
Second, software design policies are in the way. Most companies do not use high-quality design techniques to make structured, top-down, object-oriented systems that are easy to maintain, easy to change with known results, and easy to revalidate, because that costs more to get the first one out the door than writing crap does, and they weren't planning to make very many modifications to the software anyway -- just to ship 200 copies of exactly the same thing to 200 different contexts and suppress the dissent at every one of them, assuring management that one size fits all and, oh by the way, we have a special on larger whips today.
There are good tools to design good software that can be changed and revalidated in minutes, not months, but most people don't ramp up to use them. That, however, is where the investment would do the most good. Instead the money is spent trying to make changes to poorly designed software and revalidate the tangled mess that produces, which is a very expensive process and fraught with unexpected interactions and side effects and retractions, which then motivates the vendors to tell their software people "Just try to not change anything at all - we think maybe we have it working now."
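To make "revalidated in minutes, not months" concrete, here is a minimal sketch (an invented example, not any vendor's actual code): when a behavior lives in a small, isolated, side-effect-free function with its own fast automated checks, a one-line change can be re-verified in seconds instead of re-certifying a tangled whole.

```python
# Invented illustration: a small, isolated dosing-limit rule with its own
# fast checks. Because the rule is a pure function with no tangled side
# effects, a tweak to it can be revalidated in seconds by rerunning these.

def exceeds_daily_limit(dose_mg, doses_per_day, limit_mg=4000):
    """Flag an order whose total daily dose exceeds the configured limit
    (the 4000 mg default is just an illustrative placeholder)."""
    return dose_mg * doses_per_day > limit_mg

def test_under_limit():
    assert not exceeds_daily_limit(500, 4)   # 2000 mg/day passes

def test_over_limit():
    assert exceeds_daily_limit(1000, 6)      # 6000 mg/day is flagged

if __name__ == "__main__":
    test_under_limit()
    test_over_limit()
    print("revalidated in well under a minute")
```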
So, I see a process effectively equivalent to this. All day, people use the software, which I picture as a huge object floating in space. As they work, they attach rubber bands to parts of it, pulling them slightly toward slightly different places than where they are. At night, every night, while the users sleep, the software people come alive (maybe on the flip side of the planet, where it's day), and assess the total net torque on the system produced by all those rubber bands. First they rotate everything to reduce the net torque to zero, then they slightly shift the details to bring the gap between the system and the individual requests toward zero, even if not all of them can be met in this change. The total change is small, and revalidation can be done to be sure it's still on stable ground, since most parts won't have changed at all except around pivot points, which were previously validated.
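A minimal sketch of that nightly pass (everything here invented: the feature names, and a one-dimensional "position" per feature standing in for the real design space) might look like this: each user pull is recorded during the day, and each night every feature moves a small, bounded, easily-undone step toward the net pull.

```python
MAX_STEP = 0.1   # cap on how far any feature may drift per night

def nightly_pass(positions, pulls):
    """positions: {feature: current position in the design space}
    pulls: list of (feature, desired position) requests from the day.
    Each feature moves at most MAX_STEP toward the average of its pulls,
    so every night's change stays small enough to validate and to undo."""
    updated = dict(positions)
    for feature, current in positions.items():
        wants = [desired for f, desired in pulls if f == feature]
        if not wants:
            continue   # no rubber bands attached today; leave it alone
        net_gap = sum(wants) / len(wants) - current
        step = max(-MAX_STEP, min(MAX_STEP, net_gap))   # bounded, reversible
        updated[feature] = current + step
    return updated

state = {"order-entry-form": 0.0, "allergy-alert": 0.0}
pulls = [("order-entry-form", 1.0), ("order-entry-form", 0.8),
         ("allergy-alert", -0.2)]
for night in range(1, 6):
    state = nightly_pass(state, pulls)
    print(night, state)
# Each feature drifts a small step per night toward what its users are
# pulling for, and any single night's change is trivial to revert.
```

The point of the cap is exactly the "undo" property: no single night's change is big enough to destabilize the system or to be hard to back out.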
The process should be, on a large scale, cost-effective. The price of not fixing "small things" needs to include the cost of discouraging and squelching user interest in further improvement, or in any innovation at all, which changes the equation.
I'm sure there are all sorts of principles and guidelines that can be applied to define this process better. The point I'm making is that there is value in capitalizing on evolutionary design processes that actually work. This requires decentralizing the eyes, while retaining control of the central system, but in a way that is maximally responsive to the eyes. And it requires everyone repositioning themselves to assume the new world is dynamic, not static -- so don't put your coffee cup right there, because that part rotates.
The invariants that are being invested in here, over time, are processes that capture massively parallel pressures and respond to them, a little bit, every day, in real time. We can get rid of the whips.
Once we admit that the knowledge we need, about the social structure and environment, is actually distributed holographically and non-verbally in that environment, we can give up the three-stage process of first mounting a massive effort to debrief everyone, in words, about what they think they do; then going off to India and building some compromise, least-common-denominator system; then coming back and forcing it down the throats of staff who are trying to say it doesn't fit. The debriefing and the adjustments are done in a massively-parallel way, every day, and nothing needs to be forced on anyone, for the most part.
As with Toyota, much or most of the value here lies in giving the people in the company a structured way to understand, more and more, what it is the other people are doing and why, and to build respect and mutual support and collaboration in improvements.
The initial software that does this, like the stone in stone soup, doesn't need to be perfect, but it has to be small, not over-promise or be over-sold, and be extremely dynamic in a stable way.
Dynamic stability is a real concept that airplanes and flywheels and cars need to have: things change, but within limits, so the overall thing doesn't crash.
As most people have found, if you delay repairs until huge changes are necessary, those changes will almost certainly not fit socially, and will generate huge push-back that can be overcome, but only at huge cost. This "big project" way of debriefing social reality and adapting to it has a miserable track record. It institutionalizes the sins of the "we must prioritize!" approach to life's complex problems, which structurally guarantees that the needs of the people "on the bottom" will never be heard, let alone met.
And that forces competition and conflict as people try to push their way high enough up to have one of the few projects or needs or ideas that passes the "let's prioritize" cut-off point.
We need a new model, based on what's been working for a billion years all around us, a proven design pattern.
It has much less risk and much less cost, and it gets more people what they need to do their jobs, without requiring any of us to be Nobel prize winners, or to be in the clutches of the "suits" from huge consulting companies that promise to deliver "solutions" (made in India) to our "problems" as described by upper management -- which, by the nature of complexity, the speed of change, and the "indescribability" of life, is out of touch with the front lines. Oh yes, and "prioritizing", because changes are so expensive that we can only do a few big ones and the rest will "have to wait." (Forever.)
A living, real-time adaptive strategy doesn't need management to detect that the outside environment has changed, order a study, commission a planning group, and prepare an implementation agony in order to respond. It can simply detect the change itself and pivot to adjust to it, and management can read about it later.
The job of management is to realize that real-time control of huge, complex, adaptive social systems is not something they can micro-manage, and to put in place the kind of internal adaptive skeleton that is responsive to millions of pushes simultaneously, not to a few huge pushes at the expense of everything else.
The world is past the point where "a few top priority items" is sufficient, and the world isn't going back to that point ever again. To cope with that we need to redesign corporate and governmental decision-making processes to be agile and parallel and responsive.
I fret about those with the mental model that "the problem" is all this "push-back", and that if a sufficiently strong top man were in place who gave and enforced orders with a big enough whip, why then, everything would be back to running smoothly, you betcha.
Nope. There'd be no more painful delays in gathering information, and actions could be taken very rapidly -- with total assurance that they would be totally out of touch with today's new reality at the front lines, that wasn't even there yesterday but showed up in response to what we did yesterday, with some whole new set of things going on over here that don't make any sense at all, and half of the responses being, surprise, exactly the opposite of what we had planned -- except that we don't hear about those for a lag time of months, because of the strong dissent-suppression field we're using to get our way despite the potential push-back.