Tuesday, February 04, 2020

Systems Thinking about Outbreak Science versus Business as Usual

The novel coronavirus pandemic has arrived and, as expected, we are not prepared.  So, while 98% of the effort should go to executing current protocols, we should still dedicate 2% of our time to keeping notes as we go, reflecting on how well current systems are performing, and considering what we might do so that they perform better next time.

There will always be a next time.

I'm bringing to this analysis an undergraduate degree in physics, an MBA, and an MPH: three very different worlds with very different mindsets.  The overlap of those worlds makes the job of any decision-maker much more complex.

Recently there has been a movement to delineate a field called "outbreak science" and get some momentum behind improving things.  Here's an excellent introduction to the idea, in a piece by a very impressive set of contributors.  The paper mentions difficulties where the scientific (and possibly academic) worlds of the modelers come into contact with the social, business, and political worlds of the decision-maker(s).  My discussion below argues that these difficulties are far more fundamental than can be resolved simply by having the two groups spend more time together, though I agree heartily that such interaction should be practiced, with a learning curve on both sides.

Rivers, C., Chretien, J., Riley, S. et al. Using “outbreak science” to strengthen the use of models during epidemics. Nat Commun 10, 3102 (2019). https://doi.org/10.1038/s41467-019-11067-2

Glance at the authors and their institutions here to see who has gotten behind this movement.
https://www.nature.com/articles/s41467-019-11067-2#author-information

In pondering the penetrating power of "Outbreak Science", it would help to have a visual diagram of the parts of the overall system we are talking about evaluating, understanding, and altering via some intervention.  We will be looking at a socio-technical subset of a complex adaptive system, after all, and there are definitely interactions within it that are not easily captured by our current scientific models and methods.

In fact, business and politics operate with cultures, values, practices, and mindsets that are startlingly different from those used in Science, and that is a substantial factor to reckon with when trying to lay this model out flat for inspection.

Going into this, I have to bring to mind a quote from Lewis Thomas, who said this so well in his book The Lives of a Cell: Notes of a Biology Watcher (1971-73):
When you are confronted by any complex social system, such as an urban center or a hamster, with things about it that you're dissatisfied with and anxious to fix, you cannot just step in and set about fixing with much hope of helping. This realization is one of the sore discouragements of our century. You cannot meddle with one part of a complex system from the outside without the almost certain risk of setting off disastrous events that you hadn't counted on in other, remote parts. If you want to fix something you are first obligated to understand... the whole system... Intervening is a way of causing trouble.
So let's back up a few steps and consider what the boundaries are of "the whole system" - the complex web of interlocking, overlapping, and hierarchical systems from which we are trying to extract one piece to study.

I'll borrow a diagram from the following paper:

Shearer FM, Moss R, McVernon J, Ross JV, McCaw JM (2020) Infectious disease pandemic planning and response: Incorporating decision analysis. PLoS Med 17(1): e1003018. https://doi.org/10.1371/journal.pmed.1003018


Situational Analysis

The lower left grey box, labeled "Situational analysis", is the part of the world most amenable to Science and, with the power of computers and the amount of "big data" available today, the easiest to expand and improve.  Classic techniques of gathering data are rapidly being supplemented with everything up to Artificial Intelligence scanning store purchases, airline tickets, and social media text for mentions of symptoms, etc.
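To give a deliberately trivial flavor of that last idea, here is a sketch of keyword-based syndromic surveillance over free text. The symptom terms and the sample posts are hypothetical, and a real pipeline would sit on top of actual data feeds and far better language processing.

```python
# A minimal sketch of syndromic surveillance over free text: count symptom
# mentions per day in a stream of (date, text) posts. Terms and data are
# hypothetical placeholders.
from collections import Counter

SYMPTOM_TERMS = {"fever", "cough", "shortness of breath", "chills"}

def daily_symptom_counts(posts):
    """posts: iterable of (date_str, text). Returns {date: mention_count}."""
    counts = Counter()
    for date, text in posts:
        lowered = text.lower()
        counts[date] += sum(term in lowered for term in SYMPTOM_TERMS)
    return dict(counts)

if __name__ == "__main__":
    sample = [
        ("2020-02-01", "Bad cough and fever all weekend"),
        ("2020-02-01", "Great weather today"),
        ("2020-02-02", "Chills, fever, staying home"),
    ]
    print(daily_symptom_counts(sample))  # {'2020-02-01': 2, '2020-02-02': 2}
```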

The state of the art is startling. 

See: Mohanty, B., Chughtai, A. and Rabhi, F., 2019. Use of Mobile Apps for epidemic surveillance and response – availability and gaps. Global Biosecurity, 1(2), pp.37–49. DOI: http://doi.org/10.31646/gbio.39
 
Intervention Analysis
The lower right grey box, Intervention Analysis, is also much more amenable to computational modeling today than it was even a few years ago.  Agent-Based Models can simulate social responses over tens of thousands of possible parameter combinations in an afternoon, fit big data to the results, and convert that knowledge into broad statements about response options and likely impacts, as well as help define what real-world data is simply not known, or known only very poorly.
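To make that concrete, here is a deliberately tiny agent-based sketch of the kind of intervention comparison described above. Every number in it (population size, transmission probability, contact rates, the effect of the intervention) is an illustrative assumption, not an estimate for any real disease.

```python
# A toy agent-based outbreak model: agents are S(usceptible), I(nfected), or
# R(ecovered); an "intervention" simply cuts daily contacts. All parameters
# are made-up placeholders for illustration only.
import random

def run_abm(n=2000, i0=5, p_transmit=0.04, contacts_per_day=10,
            recovery_days=10, days=120, contact_cut=0.0, seed=1):
    rng = random.Random(seed)
    state = [0] * n                       # 0 = S, 1 = I, 2 = R
    for idx in rng.sample(range(n), i0):
        state[idx] = 1
    days_infected = [0] * n
    peak = i0
    for _ in range(days):
        contacts = int(contacts_per_day * (1 - contact_cut))   # the policy lever
        newly_infected = []
        for a in range(n):
            if state[a] != 1:
                continue
            for _ in range(contacts):                          # random mixing
                b = rng.randrange(n)
                if state[b] == 0 and rng.random() < p_transmit:
                    newly_infected.append(b)
        for b in newly_infected:
            state[b] = 1
        for a in range(n):                                     # progression to recovery
            if state[a] == 1:
                days_infected[a] += 1
                if days_infected[a] >= recovery_days:
                    state[a] = 2
        peak = max(peak, state.count(1))
    return {"peak_infected": peak, "ever_infected": n - state.count(0)}

# Compare outcomes across a few hypothetical contact-reduction policies.
for cut in (0.0, 0.25, 0.5):
    print(f"contact reduction {cut:.0%}:", run_abm(contact_cut=cut))
```

Real agent-based models layer on household structure, mobility data, age-specific mixing, and calibration to surveillance data, but the shape of the exercise is the same: sweep the policy levers, compare the simulated outcomes.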

Here, however, there are huge uncertainties about how people, companies, and entire nations will behave, and those behaviors dramatically affect the outcomes of given actions or policies.  A freeze on airline travel, for example, changes everything, let alone events generated primarily by the efforts of Presidents or Kings or Rulers to look good, stay in power, or shift blame.

This presents a substantial problem, in that most adult humans are remarkably bad at describing, understanding, or dealing with uncertainty.  Many people, and perhaps particularly high-ranking politicians, are loath to say "I don't know," as that is considered a weakness to be attacked.  The civilian population has no mental framework for understanding even a simple statement such as "Event X has a 30% probability of occurring."  In fact, studies have shown that people react quite differently to being told "Event X has a 30% chance of occurring" versus being told "Event X has a 70% chance of not occurring."

In one hospital where I worked, the Biostatistics Department worked out the odds of cancer progressing based on pathology and histopathology data, and we asked the doctors for feedback on being provided those numbers.  They didn't want to know about probabilities. They wanted a simple yes/no decision.


Humans are demonstrably far worse at understanding situations where there are low or very low probabilities of events with very high costs.  For example, what if there were a 1 in 10 million chance that rocks returned from the moon contained something that destroyed DNA and, if released, would probably destroy all life on earth?  The quarantine of the returning astronauts was sort of a joke, as they were put into isolation only after floating around in the warm waters of the Pacific Ocean.
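A back-of-envelope expected-loss calculation shows why such tiny probabilities still deserve serious handling. The dollar figures below are purely illustrative placeholders, not estimates of the actual lunar-sample risk.

```python
# Expected-loss arithmetic for a rare, catastrophic event; every number here
# is a made-up placeholder chosen only to show the shape of the reasoning.
p_catastrophe = 1e-7                 # "1 in 10 million"
cost_if_it_happens = 1e17            # stand-in for an unthinkably large loss
cost_of_rigorous_quarantine = 1e9    # hypothetical cost of doing containment properly

expected_loss_if_ignored = p_catastrophe * cost_if_it_happens
print(expected_loss_if_ignored)                                  # 1e10
print(expected_loss_if_ignored > cost_of_rigorous_quarantine)    # True
```

Even at one-in-ten-million, the expected loss dwarfs the cost of taking the precaution seriously, yet intuition treats the probability as effectively zero.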

Furthermore, almost any realistic metric of social interest is multidimensional, not scalar, and any value system used to compare options will be multi-objective.  At a minimum there are competing health outcomes and financial outcomes, with different "winners" and "losers," and stakeholders with different amounts of political power and sway.

What that means,  and this is often missed entirely, is that outcomes cannot be meaningfully ranked mathematically.  They are "non-transitive".   There is no such thing, even conceptually, as "best".

There is therefore no such thing as "optimization", which by itself would require a continuous, differentiable, single-valued "fitness space" over which to optimize.  Social reality has none of those properties.

See the Wikipedia article on nontransitive dice for more information on that.
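Here is a quick check of the classic three-dice example from that article: a concrete case where pairwise "better than" comparisons form a cycle and no single "best" option exists.

```python
# Classic nontransitive dice: A tends to beat B, B tends to beat C, and yet
# C tends to beat A, so no die is "best." Exhaustive enumeration, no sampling.
from itertools import product

A = (2, 2, 4, 4, 9, 9)
B = (1, 1, 6, 6, 8, 8)
C = (3, 3, 5, 5, 7, 7)

def win_prob(x, y):
    """Probability that a roll of die x beats a roll of die y (no ties possible here)."""
    wins = sum(a > b for a, b in product(x, y))
    return wins / (len(x) * len(y))

print("P(A beats B) =", win_prob(A, B))   # 5/9, about 0.56
print("P(B beats C) =", win_prob(B, C))   # 5/9, about 0.56
print("P(C beats A) =", win_prob(C, A))   # 5/9, about 0.56
```

Multi-objective policy outcomes behave analogously: once options are compared along several incommensurable dimensions, the ranking can cycle, and "optimize" stops being a well-posed instruction.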

Furthermore, my personal opinion is that the immense number of active feedback loops in real, global social systems means that even the concept of "causality" may not be meaningful.  It is far more likely that entity A is in a closed feedback loop with entity B, probably multiple feedback loops with different time scales and delays in them, and it is not possible to say that A causes B, or that B causes A, because it is really the feedback structure of the system that is causing the observed outcomes.

(See the "beer game" developed at MIT and popularized by Peter Senge, which demonstrates this phenomenon.)
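A stripped-down sketch of that kind of loop follows: a single ordering rule plus a shipping delay, with all numbers invented for illustration. Nothing in the rule "wants" to oscillate, yet after one small bump in demand, inventory and orders swing for weeks.

```python
# Minimal delayed-feedback loop in the spirit of the MIT "beer game":
# a retailer orders to hold a target inventory, but orders arrive `delay`
# weeks later. A single step-up in demand sets off lasting oscillations.
# All quantities are illustrative.
delay = 3
target = 12
inventory = 12
pipeline = [4] * delay            # orders already in transit

for week in range(20):
    demand = 4 if week < 2 else 8                  # one-time increase in demand
    inventory += pipeline.pop(0)                   # receive the order placed `delay` weeks ago
    inventory -= demand                            # serve demand (negative = backlog)
    order = max(0, demand + (target - inventory))  # naive "replace what's missing" rule
    pipeline.append(order)
    print(f"week {week:2d}  inventory {inventory:4d}  order {order:3d}")
```

Neither the retailer nor the supplier is "the cause" of the swings; the delay in the loop is. Asking which variable is independent and which is dependent misses the structure entirely, which is also the point of the next paragraph.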

Again, it seems to me little appreciated that such feedback loops violate the core assumptions of what in statistics is known as the General Linear Model, on which many analyses, such as multiple regression, and the published papers built on them rely.  There is no such thing as a "dependent variable" or an "independent variable" in such a feedback loop.  Those terms are meaningless.  Statistics based on them are invalid.
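The small simulation below, using completely made-up dynamics, illustrates the point: when two variables drive each other, an ordinary least-squares fit looks just as convincing in either direction, so calling one of them "the dependent variable" is an arbitrary choice rather than a finding.

```python
# Two variables in a mutual feedback loop (invented coefficients). Regressing
# y on x and x on y both yield tidy positive slopes, so the data alone cannot
# tell you which one "depends" on the other.
import random

random.seed(0)
x, y = 1.0, 0.0
xs, ys = [], []
for _ in range(500):
    x = 0.6 * x + 0.2 * y + random.gauss(0, 0.1)   # x responds to y
    y = 0.5 * y + 0.3 * x + random.gauss(0, 0.1)   # y responds to x
    xs.append(x)
    ys.append(y)

def ols_slope(a, b):
    """Ordinary least-squares slope of b regressed on a."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var = sum((ai - ma) ** 2 for ai in a)
    return cov / var

print("slope of y on x:", round(ols_slope(xs, ys), 3))   # reads as "x causes y"
print("slope of x on y:", round(ols_slope(ys, xs), 3))   # reads equally as "y causes x"
```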

Finally, almost all global social phenomena, probably all, are scale-dependent.  Long-term consequences may be completely opposite to short-term consequences.

Add all those up, and then throw in the fact that the person or persons doing the modeling may, in fact, make a mistake even in the equations for the small part of the universe they select to model.
In my day, and I grew up with a slide rule, we had to learn to estimate the order of magnitude of things, because the slide rule did not keep track of the decimal point for you.  We got good at that.  My observation is that most people today have lost the ability to look at a number and say, with confidence, "that must be wrong!" and explain why.  The numbers are sort of a black box.  In a class I taught on using Excel, I had one student compute that the unit price of a car was, I kid you not, a billion dollars, and he wrote that down and turned it in as his answer without comment.
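One cheap habit that restores a bit of that slide-rule discipline is to state the plausible order of magnitude of a result before trusting it, and to make the spreadsheet or script enforce it. The function and numbers below are hypothetical, just to show the idea.

```python
# A tiny "that must be wrong!" guard: declare the plausible range for a result
# and fail loudly when a computed value falls outside it. Values are invented.
def assert_plausible(value, low, high, label):
    if not (low <= value <= high):
        raise ValueError(f"{label} = {value:,.0f} is outside the plausible "
                         f"range [{low:,.0f}, {high:,.0f}] -- that must be wrong!")
    return value

total_cost = 26_000_000      # say, a fleet purchase pulled from a spreadsheet
units = 0.026                # a units error that crept in somewhere upstream
unit_price = total_cost / units

assert_plausible(unit_price, 5_000, 200_000, "unit price of a car")
# Raises: unit price of a car = 1,000,000,000 is outside the plausible range ...
```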

Summarizing: the input data are often sparse and uncertain; the model for how variables interact is far too complex for the average decision-maker to follow; the results should carry uncertainty brackets, but most people cannot cope with those; the results cannot be eyeballed to provide a sanity check that the modeler did not make an error in the software; and the embedded reasoning about causality and statistics may be invalid on a deep and non-obvious level.

Add these up and it becomes clearer why a decision-maker or politician may be hesitant to stake their career on the trade-off curves the model generates for proposed interventions.

Decision Analysis

The following is my thinking on this subject.  Take it with a grain of salt, at least. 

This section of "the whole system" is the least understood and least discussed, yet even if we somehow managed to get perfect surveillance systems and perfect disease-progression predictions under various perfectly understood intervention scenarios and mutation trajectories, we could still expect to see essentially catastrophic failure of the system to perform as intended at the decision-analysis stage.

To academics, the world of politicians and CEOs is a black box.  What is generally missed, in my opinion, is that a decision-maker lives in a world where uncertainty is the norm.  Most of the world is only poorly understood.  A typical CEO's process of leadership involves deciding on a step to take, taking it gingerly, and being prepared to undo it and retreat rapidly if it turns out the planning missed some key variable and things are not as they were modeled.  In other words: fail often, fail rapidly, and adjust your next step.  This is the basic "cybernetic" model.

This, in fact, was basically the strategy used by Loeb (of Loeb, Rhoades & Co.) in picking stocks.  He said in one of his books that he was wrong about 80% of the time, but that when he was wrong he cut his losses immediately, while he let his winners ride. That was enough to get very rich. [Sorry, I don't have the reference.]  The average investor's inability to admit they read the tea leaves wrong, or read bad tea leaves right, and their wish to hold on to a bad stock "just until it comes back up," is what bankrupts them.
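The arithmetic behind that claim is worth making explicit. The loss and gain sizes below are illustrative assumptions, not Loeb's actual figures; they simply show how an 80% error rate can still compound into a profit when losses are cut small and winners are allowed to run.

```python
# Being wrong 80% of the time can still pay, if the losses are small and the
# wins are large. The 1% loss and 10% gain figures are invented for illustration.
p_wrong, loss_when_wrong, gain_when_right = 0.80, 0.01, 0.10

expected_return_per_bet = (1 - p_wrong) * gain_when_right - p_wrong * loss_when_wrong
print(expected_return_per_bet)        # ~0.012, i.e. +1.2% per bet despite losing most of them

capital = 1.0
for _ in range(100):                  # compound the expected edge over 100 bets
    capital *= 1 + expected_return_per_bet
print(round(capital, 2))              # roughly 3.3x
```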

CEOs know from experience that half the numbers they get are wrong, and the analyses are wrong, but they read them for directionality (will it go up or down?), not for the second or third decimal point of accuracy.

Politicians, on the other hand, are typically trapped by a combination of "face" and strategy that says they are never allowed to be ignorant of some fact, never allowed to say "I don't know," and often never allowed to admit that the step they just took turned out wrong.  Like doctors, politicians face an audience that does not distinguish between a great policy call in an uncertain world that results in a bad outcome, and a bad policy call.

And, to a real-world leader, there is never time to get things right the first time.  Failure to take action is itself an action and often the wrong action.   "Paralysis by analysis" is death.

So, politicians are forced to act, and forced to pretend, perhaps even to themselves, that their actions are sound.  Consequently, when the results turn out to be bad, they are prepared to respond and survive by being exceedingly good at shifting the blame to someone or something that clearly intervened beyond their control, possibly even enemy action.

The providers of data,  scenarios,  modeling,  and recommendations are therefore directly in the possible line of fire.    There may be pressure for the modeler to come up with recommendations that agree with what the decision-maker, for reasons unknown,  wanted to do anyway.   These are working conditions that many modelers would not remain in long.

And, in many locales, the decision-makers are not about to be honest with the modeler, because the decision-maker has their own secret agenda, secret stakeholders, and secret values that may differ from the public good.

As an extreme example, in class we studied a cigarette company in an Eastern European country that told leaders there in private that one of its benefits was that it would leave working-age people alone but kill off a number of them in retirement, so that pensions would not have to be paid.  In the USA, at the start of the HIV epidemic, there were politicians who were delighted that it was predominantly killing gay people.

On a more mundane level,  certain actions, such as arranging transportation of supplies, might need to be routed only through companies that the King's brother owned and operated.   Etc. 

Furthermore, decision-makers are, like all of us, human.  They get fatigued, especially in an event that drags on for months or years instead of being over in a week.  They become overwhelmed.  They possibly or probably do not understand some or even all of what the modeler said, or the reasoning behind it.  Perhaps they cannot bear the responsibility of making life-and-death decisions for 10 million people.  Perhaps other decision-makers in other countries were visibly removed from power, or even jailed or executed, for making bad decisions.

Despite all that, politicians are people people.  They need to meet with multiple stakeholders, some with significant power or cash, and generally persuade them in private to go along with some plan.  Or they need to figure out a plan, any plan, which could be sold simultaneously to multiple stakeholders, each with different needs, value systems, and mental models of the world. And at least one of that set of stakeholders is the public at large.

So say we have a top decision-maker who is quite bright and honest.  The area for development, then, would be whatever kind of technology- and modeler-mediated process or system could bring about an action plan in the middle of such a raging conflict of interests, within a limited time window.

Well, actually, for a pandemic, this would be an 18-month, ever-tweaking process of taking one step, learning new information (including the results of a step taken three time periods ago), deciding the next step to take, and repeating that loop over and over.

In some venues, the stronger stakeholder simply wins and actions are taken in light of their interests, regardless of the impact on other stakeholders, including the public and health workers.

In some cases, these arguments about stakeholder interests have ended up, after a years-long process, in the creation of fixed written policies that determine what should be done.
The bad news, as any military commander knows, is that no plan survives the first bullet.  That is, as soon as the battle begins, people look at the policies and action plans and find that the plans are based on assumptions that turn out not to be true.  So the entire plan must be discarded, and there is no time in the middle of a crisis to come up with a better one in the standard way.

However, we are in an era of advanced and accelerating Artificial Intelligence.  What this means is that it is conceivable that a rules-based expert system could be employed.  In such a system, the decision logic is coded into many separate rules, such as "If A happens, do B."
Properly constructed, such a system would be able to apply all of the rules simultaneously to a given set of conditions and assumptions and generate recommended actions.  If you fed it the expected conditions, it should basically generate the same results as your written policy manual.

The difference is that, if the situation turned out to be different, that could be fed into the system and within an hour it would generate a new policy manual that fit the actual conditions on the ground.

Such a system, unlike the "neural nets" used for much of commercial AI, could not only generate the actions it recommends but also explain the entire set of logic and facts it used to reach that conclusion.
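A minimal sketch of that idea follows. The rule names, trigger conditions, and actions are invented placeholders, not any real jurisdiction's policy; the point is only that the same rule set can be re-run against changed facts to regenerate recommendations, together with an audit trail of why each one fired.

```python
# Tiny rules engine: every rule is checked against the current facts, and each
# recommendation carries the reason it was produced. All rules and thresholds
# here are hypothetical examples.
RULES = [
    ("R1", lambda f: f["confirmed_cases"] > 100 and f["community_spread"],
     "activate the emergency operations center"),
    ("R2", lambda f: f["hospital_occupancy"] > 0.90,
     "open overflow treatment sites"),
    ("R3", lambda f: f["confirmed_cases"] > 1000 and not f["vaccine_available"],
     "issue stay-at-home guidance"),
]

def recommend(facts):
    """Apply every rule to the facts; return (actions, explanations)."""
    actions, explanations = [], []
    for name, condition, action in RULES:
        if condition(facts):
            actions.append(action)
            explanations.append(f"{name} fired given {facts}")
    return actions, explanations

situation = {"confirmed_cases": 1500, "community_spread": True,
             "hospital_occupancy": 0.95, "vaccine_available": False}
actions, why = recommend(situation)
print(actions)   # the regenerated "policy manual" for these exact conditions
print(why)       # the audit trail a neural net would struggle to provide
```

Change the facts (say, hospital occupancy drops well below capacity) and re-running it yields a different, equally explainable set of recommendations within seconds rather than after a years-long policy rewrite.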

Given the accelerating speed of change in the world,  and the dynamic instability of the ground situation, it seems to me that the era of written policy manuals is pretty much over.    A whole new AI-based dynamic approach to decisions regarding actions will be needed,  based on input from epidemiology modeling and intervention modeling.

Without that,  all the best Scientific models in the world at the bottom level of that system diagram are a waste of time, as they will have no beneficial impact on social actions. 




