Comments on life, science, business, philosophy, and religion from my personal public health viewpoint
Friday, September 19, 2008
We need an improved "invisible hand", Adam
Incidentally, there is essentially no engine today in any product that does not have a "controller" as part of the design, to increase stability, response time, etc. Without a controller, no elevator would stop at the floor without an abrupt "jerk." The design of such controllers is in the field called "Control System Engineering."
A sample text book is this one: Feedback Control of Dynamic Systems, by Franklin, Powell, and Emami-Naeini. These are the concepts we need for a "governance" or "regulatory" system that actually works as advertised.
Control system engineering is to complex systems what "civil engineering" is to automobile bridges across rivers -- it is completely general and non-political, it won't tell you where to build or what to build with, but it WILL tell you the required properties of the materials and that some things will simply not work. You can't build the Brooklyn Bridge out of plastic, for example, regardless how cheap it is. You can't design a regulatory system that depends on feedback, for another example, and then blind the sensors that are supposed to determine the feedback.
The advantage of such engineering is that it focuses on issues such as "stability" (a big one right now) and gives power to insights, such as the fact that blinding the eyes of a system will make it drive off the road for sure.
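To make that last point concrete, here is a toy simulation (my own sketch; the numbers and names are invented for illustration, not taken from Franklin et al.): a proportional controller holds a system at its setpoint against a constant disturbance, and "blinding" the sensor partway through lets the system drift away unchecked.

```python
# Toy illustration: feedback keeps a system at its setpoint, but only
# while the sensor that closes the loop is actually working.

def simulate(steps, blind_after=None):
    """Drive 'state' toward setpoint 10 with proportional feedback.
    If blind_after is set, the sensor stops updating at that step."""
    state = 0.0
    setpoint = 10.0
    gain = 0.5
    sensed = state
    history = []
    for t in range(steps):
        if blind_after is None or t < blind_after:
            sensed = state            # a working sensor reads the true state
        # else: the sensor is "blinded" -- its last reading is reused forever
        control = gain * (setpoint - sensed)
        state += control + 0.3        # 0.3 models a constant disturbance
        history.append(state)
    return history

ok = simulate(50)                      # working sensor: settles near the setpoint
blinded = simulate(50, blind_after=5)  # blinded sensor: drifts off, unchecked
```

With the sensor intact the state settles (here, just above 10, offset slightly by the disturbance); with the sensor frozen after five steps the very same controller lets the system wander further and further off course. That is the whole point about not blinding the sensors that are supposed to determine the feedback.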
Search "feedback" or "system thinking" in this weblog for other posts on such matters!
====================================================
One obstacle to a good solution is the incorrect assumption that a process "under control" equates to a small group of people doing "the controlling." Let's keep those separate.
The question of whether we need more "governance" should be distinct from who, or what, should be the active agent. For much of the US History, many have favored Adam Smith's "invisible hand" of the marketplace to do this controlling.
The classic debate over more or less "government" desperately needs this distinction.
The question should be whether there is an improvement on the class of "invisible controllers" that (a) do a better job and (b) are even less corruptible by those who would hijack the process.
There is no question that we have very complex processes running out of control, and that this is not the preferred state. Fine.
The question is how to achieve the "under control" part. The institution called "government" has typically decayed to "a few people" who, regardless of wisdom and intent, have been unable to grasp the complexity of the beast or improve on its operation and results.
The deep cynicism resulting from such failure seems related to the abandonment of a goal of prosperity for all and replacement with a goal of "prosperity for me and my friends at everyone else's expense" which turns out to be a short-term illusion, given how interconnected everything is.
These are problems in the area of "control system engineering" and "complex adaptive systems" and the necessary insights are probably in those fields.
Monday, October 01, 2007
Is perpetual war good for the economy?
"Which economy?" would be the first response. The one the Wall Street Journal lives in? Or the one on Main Street?
Since the economies are intertwined with the respective "healths" of the nation, the corporate infrastructure, and the individual health of the civilian population, this is a relevant question for public health to address.
Is the boost to the GDP from that cash flow positive? Even if so, is the GDP a good metric of the wealth and health status of the respective organisms?
Or, are we in a state where spending on the perpetual warfare is the only thing keeping one or more of these economies afloat? That's a serious issue if true, particularly if the flotation effect is transient and temporary, while the cinder-blocks-around-the-ankles effect of the debt-funding is permanent. Or are we trying to keep our head above water by hurling rocks downward, where they end up snagged in a net attached to our ankles that is dragging us under?
It seems more and more that the latter may be the case, and every effort we make along that particular axis to improve things has the long-term outcome of making things worse, and making the current crisis more pressing, which in turn increases short-sighted "solutions", which in turn adds more rocks to our ankles.
That's a losing strategy, if that's the situation. There is no light at the end of that tunnel.
We need to start spending more energy looking further upstream for ways to actually get out of this perpetual loop of bad decisions and worse results.
Once upon a time, such issues were "cyclic" and, if we waited them out, they would "go away." I think that time has passed forever. Now, these issues are "structural." All those social trade-offs we made to live today at the cost of tomorrow have come home to roost, because the party's over and it is now tomorrow, and all the bills we racked up are coming due.
Sadly, it may mean we have to stop acting like adolescents and start acting like adults. We may be forced to realize that we can't keep spending our national treasury at a faster clip than we are generating new wealth. We may have to recognize that selling off the land and furniture isn't "income", and whatever it is, isn't sustainable.
Or, heck with it. "Party on, Garth!" "Party on, Wayne!"
I guess that's the consensus opinion. I'm just noting that the water, which used to be well below the portholes, is now up above the portholes, and there's some water coming in the cabin.
Hmm. Wait - isn't that the Canadian dollar above us? The "old European" Euro?
How are we doing with respect to the Chinese Yuan? Looks like a "free-fall" parabolic arc downward to me. Looks like "sinking" to me.
Maybe it is time to stop the fist-fights and consider our strategies and our options.
"More of the same" doesn't look viable.
Wednesday, September 26, 2007
Role of IT - information technology - in next-gen companies
If we assume that what we're building is, essentially, a massively-parallel connectionist computing engine (consciousness) out of people and technology, we get the suggestion that the key roles are:
transparent communication at successively larger scales
coherence-building at successively larger scales, and
transparent interactions - ("phase-locked loops") across the components of the system.
Yes, computers will still be required for tracking the trillions of details needed to run a large company today, but that is, in Peter Senge's words "detail complexity." There's a huge amount of it, but it is, relatively, simplistic in nature aside from the amount of it. Enterprise computing knows how to do that, at least in theory.
What we are looking for in the next-gen company is the thing that ties it all together, that supports the feedback loops that maintain coherence and build integrity, the same way the circulating thoughts in the brain slowly emerge an "image" out of billions of "nerve impulses" from the retina.
This is "Technology-mediated collaboration" and more, so I'll call it "technology-mediated coherence." It is what allows "aperture synthesis" in large radio telescope arrays to act as if they are a single huge individual and the gaps "don't exist."
This is pretty much what the Institute of Medicine was recommending when it urged a focus on "microsystems" recently (see prior posts on "microsystems"). The point is that a small team (5-25 people) is capable of being "self-managing" if they can simply be given the power to do so by having access to information about what their own outcomes are. This information does not need to be packaged and interpreted at successively higher levels of management and then repackaged and distributed back to them a month later as "feedback." In fact, that doesn't help much. What really helps is speed. What helps is if they can see, today at 2 PM, how they have been doing collectively, up through, say, noon. They can learn to make sense of the details, and don't need "management" to try to do that for them.
In fact, given the fractal density of reality, and the successive over-simplifications required to get data into a "management report", it is a certainty that we have something far worse than the game "telephone". What will come back down the line from upper management will bear little resemblance to what went up, breeding distrust and anger on both sides.
So the role of next-gen IT is to grab hold of the "Web 2.0" technology that allows bidirectional websites to be both read and written by people, and that includes weblogs, wikis, and "social software" that encourages interaction and cooperation, including, gasp, "gossip."
This is the stuff that, in the right climate and context, can be converted into "social capital" and converging understanding by each employee as to what everyone else is doing and why.
Where there can be dashboards, they should be very close, in both space and time, to the decision-making actors. Lag times are incredibly dangerous, and are the source of instability in feedback systems. (Imagine trying to drive a car with a high-resolution TV screen instead of a windshield, with a fantastically clear picture of what was outside the car 15 minutes ago.)
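The driving-by-delayed-TV problem can be shown in a few lines of simulation (again my own toy sketch, with invented numbers): the same corrective steering that works fine on fresh information overshoots, oscillates, and grows without bound when it reacts to stale information.

```python
from collections import deque

def steer(gain, delay, steps=60):
    """Apply proportional correction toward target 0, but react to a
    reading of position that is 'delay' steps old."""
    pos = 1.0
    readings = deque([pos] * (delay + 1), maxlen=delay + 1)
    traj = []
    for _ in range(steps):
        sensed = readings[0]            # the oldest reading is what we "see"
        pos += gain * (0.0 - sensed)    # steer based on that (possibly stale) view
        readings.append(pos)
        traj.append(pos)
    return traj

fresh = steer(gain=0.6, delay=0)   # fresh data: settles smoothly onto the target
stale = steer(gain=0.6, delay=4)   # same gain, old data: oscillates and grows
```

Nothing about the controller changed between the two runs except the age of the information it sees, yet the first converges to the target and the second swings back and forth across it with growing amplitude. That is exactly why a month-old dashboard cannot substitute for same-day feedback.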
A relevant quote from Liker's "The Toyota Way" is this (page 94) where he is talking about the problems with large batches and the delays that go with such batches:
"...there are probably weeks of work in process between operations and it can take weeks or even months from the time a defect is caused until the time it is discovered. By then the trail of cause and effect is cold, making it nearly impossible to track down and identify why the defect occurred."

The hugely complex computation of making sense of such data is what human brains and visual systems are built for, and tuned for, and that machines costing a billion dollars cannot replace yet. Just give people a VIEW into what is happening as a result of what they are doing, and they will, by a miracle of connectionist distributed neural-networks, figure out what's affecting what faster than a room full of analysts with supercomputers - in most cases.
That's the role that computation needs to fill: close-to-real-time feedback, in a highly visual form, to the workers, on the outcome of the work currently being done. (This is a step up from Lean manufacturing's visual signal system, which is a signal to management that something is amiss.)
The "swarm" is capable, like any good sports team, of making sense of "the play" long before the pundits have had a chance to replay the video 8 times and "analyze" it. Yes, there is a role for longer-term, more distant view that adds value.
But what there is NOT is a way to replace real-time feedback and visibility with ANY kind of delayed information summary. All the bases must be covered, and long-term impacts and global impacts will not be instantly visible to local workers -- but they have to be able to see what their own hands are doing or they'll be operating blind. "Dashboards" with 1-month delays on them cannot cover that gap. Too much of the information is stale by the time it arrives. Both are needed. Local feedback for local news, and successively more digested, more global feedback for successively larger and more slowly varying views.
Thursday, September 20, 2007
Templeton, Toyota, and Dynamics
I'm also finally learning how muscles work (better late than never) and it's just fascinating.
I mean, this is really strange and not something we learned in physics in college - this "body building" mathematics. As a True Believer ("exponent"?) of hierarchically symmetric principles, of course, I assume that many of the same patterns that govern development of strong arms or abs govern the development of strong corporations or state or national economies -- with some specific to each level as well.
But muscles. Wow. You make them get stronger by breaking them down and using them up. We can't even decide in our terminology whether this is "down" or "up" -- which instantly calls to mind non-transitive dice and Hofstadter's (Escher's) strange loops, and feedback mechanisms.
Now, what is the template here, the reusable pattern? If I break my car down, it stays "down".
If I "work out" (Now a new direction!) I make space or gaps or folds or niches somehow that end up getting "filled in" with interest. Again signature linguistic clues to strange loops.
So, Templeton, one of the richest men on earth, really believes in a concept of "giving" which involves delayed but amplified "receiving" - and finds a spiritual basis for this in Christian scriptures. He says (page xx):
"Of course, an activity of this kind creates an activity in the lives of the givers into which more good can flow!"

So, he seems to be saying that "good" and "goods" (in a commercial sense) follow the same behavior patterns.
Still, I don't find a word in English for this loop, this pattern of behavior that muscles have where you have to use them up to make room for them to automagically refill or recharge.
I noted yesterday to my wife that it was good for rechargeable batteries to let them run down, in fact, to go out of your way to run them down to just about zero and recharge them a few times, or they lose the ability to be charged up at all. Curious. Some even recommend that you do this as soon as you purchase them, and that, if you leave them in the recharger and "overcharge" them, they'll become useless and run out much, much faster than new batteries. They won't be able to "hold a charge", whatever that means (in general). (Now we add the "let go" and "hold on" axis.)
But, in my System Dynamics course we're studying how to model social processes using "stocks and flows" to capture the feedback structures.
I and a few other students are looking deeper, and asking what it is exactly in social systems, that "holds" any of these structures in place. In typical texts, like Franklin's "Feedback Control of Dynamic Systems" there are marvelously powerful equations and tools and software for designing great systems - but they all assume that the parts you build with don't simply fall apart as soon as you connect them up.
In the real social world, that assumption is false, at least by default. Anything you build today will be much more likely to be gone tomorrow than still there when you wake up. So, if we want to draw on the power of Control System Engineering, we first have to figure out how to make, or model, parts that don't simply fall apart like they're made of sand. This becomes a required precursor step.
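As a toy version of what this looks like in stock-and-flow terms (my own illustrative numbers, not anything from the course), here is a single "social structure" stock that decays by default and persists only if a maintenance inflow keeps rebuilding it:

```python
def run_stock(maintenance, decay_rate=0.1, dt=1.0, steps=100):
    """Euler-integrate d(stock)/dt = maintenance - decay_rate * stock."""
    stock = 50.0
    series = []
    for _ in range(steps):
        inflow = maintenance                 # flow in: deliberate upkeep
        outflow = decay_rate * stock         # flow out: things fall apart by default
        stock += (inflow - outflow) * dt
        series.append(stock)
    return series

neglected = run_stock(maintenance=0.0)    # no upkeep: the structure erodes away
tended = run_stock(maintenance=10.0)      # steady upkeep: settles at 10/0.1 = 100
```

The default behavior of the stock is to drain to nothing; only a continuous rebuilding flow holds any structure in place. That's the "sand" problem in one equation: the parts themselves are little dynamic processes that need tending, before you ever get to wire them together into a controller.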
And, parts don't "get made" in social systems, which have a truly funny sort of "clay" to sculpt things out of. Parts of any scale larger than trivial have to get "grown", like muscles, which gets me back to where I started.
I realized my vocabulary of words and concepts to describe how muscles "grow" by using them "up" is missing almost all the key words, which, as Whorf pointed out, makes it hard to think about, let alone discuss. Or, if words fail me, maybe a good picture or an animation or something.
How general is this phenomenon? Can we make employees "grow" by "using them up"? Can we make companies grow by "using them up"? Can we make nations grow by using them up?
Hmm. Well, start with employees. Any good employee actually wants to be "used" in a "good sense" (alert - two solutions!) not in a "bad sense". They want to be "exploited", again in the "good meaning" of that word, not the "bad meaning." (nuance alert!)
They want, in short, to be "used up ... [and recharged to a stronger state]" like MUSCLES, not "used up ... and discarded," like soap. In fact, it's HARD-to-impossible for an employee, or a member of a sports team, or a member of the Army, to "be all you can be" without an external social structure forcing [nuanced word] you [nuanced noun] to "use yourself up" and "push yourself" and get through the pain / "the annoying feel of weakness leaving the body."
And, wow, are we not wired linearly for this multiday-process-loop. In the short run, rather than happily encouraging us to use them, our muscles complain bitterly about being disturbed from their slumber. Once "warmed up" or after a "great workout" they change their tune, and suddenly we get an "endorphin high" -- but that's way later than when we need it. So even this loop, maybe a month long, of getting the "pull" of the endorphin high to reach back around the feedback loop and inform the bitching-muscle part is nuanced and subtle and something no one ever explained to me before, let alone modeled for me or for a company or department or team growth process.
So, from the starting point, using up a muscle seems "hard" and "painful", and people who do it seem incomprehensible. I mean, they jog in the sleet in the middle of icy roads. Clearly insane.
Yet, "once you get into it" (nuance) the perspective changes and suddenly it becomes both possible and then enjoyable and rewarding.
But, I just don't have good pictures or words for the parts here. There's a ten-minute to 1 hour loop process of warming up, a 2-5 day process of "recharging", and a 1-2 month process of learning that this is building you up not tearing you down that all have to fit hand-in-glove for this thing to fly at all.
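For what it's worth, exercise physiologists do have a word for the refill-with-interest loop: "supercompensation." Here is a deliberately crude sketch of it as a model (every parameter is my own invented stand-in, not a physiological number): each workout knocks capacity down, recovery pulls it back toward a baseline, and each completed stress-and-recover cycle nudges that baseline up.

```python
def train(days, workout_days, stress=20.0, recovery_rate=0.3, overshoot=1.15):
    """Toy supercompensation loop: a workout breaks capacity 'down', recovery
    pulls it back toward a baseline that each completed cycle nudges UP."""
    baseline = 100.0
    capacity = baseline
    log = []
    for day in range(days):
        if day in workout_days:
            capacity -= stress                        # use the muscle "up"
            baseline *= overshoot ** (stress / 100)   # the delayed refill "with interest"
        capacity += recovery_rate * (baseline - capacity)  # the multi-day recharge
        log.append(capacity)
    return log

log = train(30, workout_days={0, 4, 8, 12, 16, 20})
```

Day by day the trajectory dips below where it started (that's the jogging-in-the-sleet phase), yet by the end of the month it sits well above the original baseline: weaker first, stronger later, exactly the non-intuitive loop I keep failing to name in English.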
It does fly, it can fly, and I'm finally figuring that out, much to the dismay of my downstairs neighbors who hear my weight-bench and think the ceiling is falling at 6 AM. The 6 AM part doesn't help.
So I can DO it, but I can't MODEL it yet so I can discuss it with someone else, let alone a very busy manager, and say "you need to do THIS" with your people, not "THAT", -- or better, build a "flight simulator" so they can interact and figure this out for themselves.
Boy, social literacy in this one alone would fix a lot of problems in how managers try to "develop" employees or teams and "fail."
There's a lot of nuance, non-intuitive non-transitive loops, and multiple solution equations here that make this thing, relatively easy to do, very hard to explain in words.
Maybe, with Vensim modeling, I can simulate it comprehensibly and in a way it can be shared with others and discussed at a business meeting.
There are some other subtleties here, the motion equivalents of "a lap" - something that both is and isn't really there. (I mean, where does your lap "go" when you stand up?)
There are things that are like momentum or worse, angular momentum with its bicycle wheel or gyroscopic force that can be stabilizing or maddeningly sideways.
Somehow, though, back to Templeton, building "wealth" and social capital involves a lot of "giving and receiving" and the residual side effects of muscle-building as a result of that cycle, so that, if it is repeated a lot, it gets stronger and wealthier and "healthier" and more "alive."
This suggests that "wealth" is a flow-process, like a lap, not a static-noun, like "a rock" or "a gold bar." It suggests that building up wealth is like building up muscles, where you have to "give" to "receive."
That's just fascinating. I have to build one.
Well, off to learn about "lean manufacturing" and what makes some companies thrive and others become run-down, un-fit for business, and finally fall apart like sand in the wind.
This has so much to do with "life" and "health" and "wealth" and feedback processes! I think they all have to come as a bundle, at each level, and across levels, to work at all.
Batteries and muscles have to "recharge" from outside resources, and it's not through any "action" (at THAT EXACT TIME) that the battery "does" that it gets recharged. People have to "build muscles" or "heal" the "damage" [?] which happens when we sleep, not when we are "doing" something or when the doctor "does something." The best we can do is get out of the way of the natural process [hah!] that actually does the healing out of sight, off-line, in secret, where it is so easy to be forgotten while being the key to the whole thing.
Ciao.
Wade
Monday, September 17, 2007
Small team feedback control in health care
- (This is a rewrite of a prior post to make it more helpful).
Thoughts on the IOM and feedback to small teams (“microsystems”)
General “white paper”
R. Wade Schuette
5/4/07 (original post)
[ some sections of my original document were not relevant and were removed, and I added some updated links.]
So, where does the IOM refer to this? Searching the full text of the IOM report doesn't even hit that word. We have to start with the main author's after-thought (reformatted for clarity below):

"A User's Manual for the IOM's 'Quality Chasm' Report," by Donald M. Berwick, Health Affairs, Vol. 21, No. 3, May/June 2002, pp. 80-90.
http://content.healthaffairs.org/cgi/reprint/21/3/80.pdf
ABSTRACT: Fifteen months after releasing its report on patient safety (To Err Is Human), the Institute of Medicine released Crossing the Quality Chasm. Although less sensational than the patient safety report, the Quality Chasm report is more comprehensive and, in the long run, more important. It calls for improvements in six dimensions of health care performance: safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity; and it asserts that those improvements cannot be achieved within the constraints of the existing system of care. It provides a rationale and a framework for the redesign of the U.S. health care system at four levels: patients’ experiences; the “microsystems” that actually give care; the organizations that house and support microsystems; and the environment of laws, rules, payment, accreditation, and professional training that shape organizational action.
From the "Prologue" to the article:
One of the architects of the [IOM] report, Donald Berwick, decided that it would be worthwhile to condense the message into a “user’s manual” for interested readers in the United States and abroad. In this paper he synthesizes the report’s structural themes and presents them, executive summary–style, as a framework that did not appear in the final report but was the basis for the months of discussion that led up to the report’s writing and dissemination.

This framework comprises four levels of interest:
the experience of patients (Level A),
the functioning of small units of care delivery (or “microsystems”) (Level B);
the functioning of the organizations that house or otherwise support microsystems (Level C);
and the environment of policy, payment, regulation, accreditation, and other such factors (Level D) that shape the behavior, interests, and opportunities of the organizations at Level C ...
As the author of more than 100 peer-reviewed papers in numerous journals, Berwick was ideal for the task. A pediatrician by training, Berwick is chief executive officer of the Institute for Healthcare Improvement (IHI).
So we can see here a four-level model of patient care with a very surprising twist - namely, it seems to have skipped over the doctor, going from the patient right up to the whole small team that includes the doctor(s), nurses, and other staff who collectively deliver care within that clinic or unit.

This gap is no oversight. It reflects some very profound hypotheses:

1) When caught up in an institutional environment, the boundaries of individuals blur, because doctors behave differently than they would in solo practice. Their behavior is as much a function of the team they are in as it is of their own "self."

2) If we want to intervene in this 4-level health care system to improve things, the place we should intervene is at the small team level, not at the level of the individual doctor.
OK, so then the question becomes “What sort of "Intervention" is necessary to improve the performance and behavior of this team level entity and produce safer care in a more cost-effective manner?”
The surprising answer given by the IOM is that very little intervention is needed.
In fact, the primary intervention required is simply to provide the team sufficient real-time feedback of how they are doing, and trust them to respond to it appropriately, without any further management intervention. This is a mix of "Theory Y" of management, and Deming's models of the behavior of employees, who, he asserted, given the tools to do their jobs, would do them.
(But note that the team remains within the context of a larger health system, and that is important too.)

Here's a detailed but readable discussion of how that feedback can work:

"Microsystems in Health Care," Joint Commission Journal on Quality and Safety
http://www.clinicalmicrosystem.org/publications.htm
This is IT at the microsystem level, and is almost entirely absent in many health systems, in which IT is considered the exclusive province of levels C and D - the enterprise and national statistics. This recommendation of the IOM focuses on an area that is referred to as "technology-mediated collaboration” by the University of Michigan School of Information’s program in just such an area.
(see that program here: http://www.si.umich.edu/research/area.htm?AreaID=3 )
Note that a fully-integrated national health care system would actually provide the necessary IT support for all four levels - A, B, C, and D - in a coherent fashion.

In conclusion, the national health information infrastructure model, as perceived by the IOM, really includes providing real-time self-management tools as the crucial, key IT support to small teams of caregivers, whether the caregivers are "providers" in a hospital, or patients and their friends and family. This needs to be more central to the discussion of IT in a health-care environment, and it is a very different subject than simply automating medical records -- it is empowering small-team collaboration.
The realization behind this is very simply that we have good people who will figure out on their own how to do good things if they simply have the tools to see the impact of what they are doing, in as close to real-time as possible.
Tuesday, June 26, 2007
Darwin rules but biologists dream of a paradigm shift
Douglas H. Erwin starts with that premise in an essay in the New York Times Science Times section today. Focusing on the hot topic of evolutionary and developmental biology, his title is "Darwin Still Rules, But Some Biologists Dream of a Paradigm Shift."
Of course, I can't help but notice that he uses the word "some" in the title to soften that premise.
And, in reality, paradigm shifts are initially very strongly resisted. Thomas Kuhn documented this so well in his famous Structure of Scientific Revolutions. It is in fact a crisis of a fundamental kind to challenge the prevailing, comfortable, organizing world-view. This is the large scale version of the resistance within an organization to Karl Weick's "mindfulness" and surfacing problems that seem to imply the whole mental model is wrong, instead of suppressing them. In that way, this is the key to "The Toyota Way", as well, which is obsessive about forcing a process that leaves problems no place to hide.
John Gall discusses this delightfully in his half-humorous, half-profound view of how systems fail and his invented field of "systemantics." Failure is perhaps our most taboo subject.
My readers know this subject is near and dear to me right now, as I'm caught up in the paradigm shift within public health, which is transitioning from a local, biomedically-oriented view of causality to a global, context-oriented, multileveled "distal" or "ecological" view of what determines who we are, how we act, and whether we are healthy or not. The older view was historically very successful and proponents of it are not about to give it up without a fight. Entire careers and departments have sprung up around it, giving it staying power.
My readers also know how I tend to view all this commotion through the lens of what I'm calling "s-loops", and what I see (modestly) as even more basic than DNA as the building block of all life at all scales. This is my invented term for Self-aware, Self-sustaining, Self-repairing, Self-protective regulatory feedback control loops -- which is why a shorter term is helpful.
These loops don't really care what substrate or medium they are based in, and can happily cross from DNA to water-levels to photons to whatever and back. Importantly, they don't care what scale of life they operate in, and are as happily at work at in a "genetic circuit" as in the Tobacco industry, following exactly the same rules and principles.
Erwin gets so close to this in his essay, talking about how researchers in artificial life labs and the whole Santa Fe Institute crowd have shown that eyespots can evolve into our current eyeball models through evolution. I have to note on the side that "cross-over" is probably the more accurate term for what he's calling "mutation".
His point supports my point, which is that s-loops quickly develop "eyes" of one kind or another. Erwin says:
Natural selection, driven by competition for resources, allows the best-adapted individuals to produce the most surviving offspring... It is the primary agent in shaping new adaptations. Computer simulations have shown how selection can produce a complex eye from a simple eyespot in just a few hundred thousand years.

Well, any adaptive cybernetic thingie, whether made of silicon or carbon or virtual electrons, needs to be able to detect the outside world that it is supposed to be adapting to, duh. Why is this rocket-science? And silent detection (eyes) is a lot safer in a predator-rich environment than active detection (touch.) I'd rather see the snake than reach in the hole and find it by feeling around. Again, duh.
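Just to see the mechanism in miniature, here is a drastically cruder cousin of the simulations Erwin cites (everything here is an invented toy; one number stands in for the "acuity" of an eyespot): mutate a little, keep the best-adapted half, let them leave the offspring, repeat.

```python
import random

def evolve(generations=200, pop_size=50, seed=1):
    """Toy truncation-selection model: selection alone ratchets a
    light-detecting trait upward, generation by generation."""
    rng = random.Random(seed)
    pop = [0.0] * pop_size          # everyone starts with a bare eyespot
    for _ in range(generations):
        # mutation: small random changes to each individual's acuity
        pop = [a + rng.gauss(0, 0.05) for a in pop]
        # selection: the best-adapted half produces the surviving offspring
        pop.sort(reverse=True)
        best = pop[: pop_size // 2]
        pop = best + best
    return max(pop)

acuity = evolve()
```

Each generation's mutations are tiny and directionless; only the keep-the-best step supplies the direction, and the trait climbs anyway. That is the whole "eyespot to eyeball" argument in about a dozen lines.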
In my book, the whole evolutionary biology crowd is too close to the beast to be able to see the simple outline, even though they draw "feedback" loops and Krebs Cycles and genetic circuits all day long. Systems Dynamics people draw "causal loops" and that's great as far as it goes, but fails to focus as well on that very special class of regulatory feedback loops that become self-aware and undergo a sort of phase-shift in nature.
Once a goal-seeking control loop has been established, with any learning capacity at all, the goal ends up including self-survival -- at least, of the ones that survive! Those not interested in or good at survival, bless their hearts, are not generally with us any more - but make a great snack.
So, the persisting ones care about survival, and care about internal quality-control. They have to be able to repair damage and overcome noise. Once they get more complex, they need to be able to distinguish "me" from "not me" - ie, develop a rudimentary immune system. They need to learn how to fight back.
Then, the very clever ones, with even more propensity to survive, discover that they have some influence over the world around them. They can move to a new location and get out of the rain, which is one way to locally control the local world. They become terra-formers.
It doesn't take long to run into the fact that part of the world one is terraforming (or about to eat) already "belongs to" another S-loop. Uh oh. The dumb ones take up fighting even more, and the bright ones learn about alliances and stable ecological cross-supportive worlds.
But it still all comes down to an s-loop at the core, despite the fancy clothes. We still have a Self-aware, Self-sustaining, Self-repairing, Self-protective regulatory feedback control loop at work, bound by all the principles that control-system engineering has discovered and made into textbooks for those who have eyes to read.
My prior posts show that such a core loop will have a "blue gozinta", my somewhat tongue in cheek term for a "controller" that must have a few key parts, and always has them:
- A sensor for the world
- A sense-maker of the raw sensory input
- A mental-model (paradigm, world-view) of what's outside.
- A goal.
- A way to measure difference between the goal state and the current state.
- A mental-model of how things work and what parts it has that it can move.
- A way to take the historical stream of sensory input -- what it has done, where it is, and what seems to affect what -- and use it to generate the next second's push, pull, or other way of impacting the world.
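The parts in that list can be sketched as a toy loop in code. This is purely illustrative - the class name, the gain number, and the one-line "world" are my own inventions, not anything from a control theory text:

```python
class SLoop:
    """A toy sketch of the parts listed above (all names are mine)."""
    def __init__(self, goal, gain=0.5):
        self.goal = goal        # the goal
        self.gain = gain        # crude "model of how things work": push harder when farther off
        self.history = []       # historical data stream of sensory input

    def step(self, raw_reading):
        belief = raw_reading            # sensor + sense-maker (no noise in this sketch)
        self.history.append(belief)     # remember where we've been
        error = self.goal - belief      # comparer: goal state vs. current state
        return self.gain * error        # next second's push or pull on the world

# The "world": each push moves the state by exactly that amount.
state = 0.0
loop = SLoop(goal=10.0)
for _ in range(50):
    state += loop.step(state)
print(round(state, 6))
```

With a gain of 0.5 the loop halves its distance to the goal every time around; the `history` list is the hook where a learning capacity would plug in.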
So, any S-loop will have a strong survival pressure to defend its own internal mental models and paradigm, countered by a learning system that has to come to grips with the fact that sometimes, yes, the "cheese moves."
If I call anything with a functioning S-loop "alive", then not only are all "Living things" alive, but so are corporations, nation-states, religions, cultures, social norms, prejudices, stereotypes, and evolutionary biologists' collective paradigm of how things work.
So, yes, by this model, of course they will fight back, and fiercely, if their paradigm is challenged. And, yes, it makes sense that all the supportive control structures terraform locally supportive smaller s-loops around themselves, built or entrained to be part of the larger empire. In this case, researchers, and collections of researchers, have all organized around this older paradigm as part of their "given" world and shared assumptions, and in acting to defend their own s-loop identity and world-view, they give life to the defense of the entire field's identity and world-view - that is, the field's core s-loop. It is natural that the field, a meta-living thing, will then support friendly opinions and try to stamp out or squash contrary or challenging ones. All s-loops tend to do that, at all scales: genes, bosses, departments, corporations, religions, nation-states - all tend to suppress dissent.
But two things can happen. The old guard can die off, yielding the field to the "young Turks" who have a different paradigm, or the old guard can learn and adapt - a traumatic crisis of paradigm shift.
But the shift can succeed, going from everyone knowing that the new paradigm is "obviously wrong" to everyone adopting it and effectively rewriting the past to affirm that "they've always believed that."
In the short run, failure of incoming news to update the paradigm has been identified as the killer of high-reliability operation in pretty much any complex adaptive system, whether it's a nuclear reactor control room, the US Army, an aircraft cockpit, or a hospital's surgery suite. When the old paradigm suppresses too much dissent, it misses the news that the cheese has moved, the old model of the cooling system must be broken, the enemy has moved from where headquarters was sure they were, etc. Actions are no longer based on reality, and tend to no longer support survival.
This appears to be the core issue about which we, as a society, are pretty ignorant right now -- what's an efficient way to make a "learning organization" that can collect input from its sensors and figure out when the internal mental model and paradigm need to be updated.
And, in the military, or hospitals, or any high-stakes operation, how do you keep the "control" system functioning, right in the middle of a mission, while ripping out the old paradigm and implementing a new one? For example, how do you transition from McGregor's "Theory X" management to "Theory Y" management without losing the whole ballgame during the transition? The middle state seems ugly and totally out of control, even if the far-side "future state" looks way better than where we are now. Is there a way to skip the middle state and just wake up and find ourselves in the new paradigm?
This is effectively a phase-transition -- the same stuff is still in almost the same place, but now the way it is structured has changed, with possibly a lot of stray energy involved going in or coming out.
One benefit of an s-loop model of evolution is that, in addition to our genes and selves and species, it includes all those departments and corporations and cultures and nation-states around us that are visibly, daily, trying to assert control and dominance over the world and paradigms around themselves.
And the s-loop model has another really strong benefit over pure Darwin at one level -- namely, there is an alternative to "kill or be killed," known as "cooperate in an ecology" or "acquire and merge." Diverse ecologies are far more stable than homogeneous empires (the Borg), and have so far proven able to survive massive context and climate changes that even huge individual organisms (the dinosaurs) couldn't survive.
S-loops are all around us. Two people in a strong relationship or marriage may succeed in forming a bond that is so real it takes on a life of its own - and becomes another s-loop that is self-aware, self-healing, and terraforming the space around it in order to survive better.
My main point is that the behavior of complex regulatory feedback control loops is not something I discovered yesterday -- this field has been studied for over 100 years and has a great depth of literature, analysis tools, theory, principles, visualization tools, and ways to simulate situations and do "what if" analyses.
If pretty much everything we care about is in the grips of one or more s-loops, then wouldn't it make sense to get the Santa Fe Institute, or some group like that, to educate us on what kinds of behaviors you can get out of a swarm of such things interacting with each other - especially if you allow for consciousness and efforts to terra-form, make alliances, and learn how to overcome the "sticky paradigm" problem with some sort of dynamically stable solution?
Tuesday, June 05, 2007
Gentle primer on feedback control loops

The first picture shows rising and falling output. This is often what people mean or think of when they talk about "positive" and "negative" feedback.
Unfortunately, that's also where their concept of "feedback" stops, so they miss all the good stuff.

The next picture shows converging output as a result of a simple control ("goal seeking") feedback loop.
The output rises or falls to some preset value or "goal".

Then, the system can be "tweaked" a little so it converges faster on the goal, but that often will result in overshooting and coming back with a little bit (or a lot) of bouncing.
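That "tweak" can be simulated in a few lines. The numbers are made up, and the "plant" here is just "the state moves by whatever we push" - a cartoon, not a real vehicle model:

```python
def run(gain, goal=10.0, steps=30):
    """Simple goal-seeking loop: each step, correct by a fraction of the error."""
    state, trace = 0.0, []
    for _ in range(steps):
        state += gain * (goal - state)   # push proportional to distance from goal
        trace.append(state)
    return trace

gentle = run(gain=0.3)   # creeps up toward the goal, never crosses it
eager  = run(gain=1.5)   # converges faster, but overshoots and bounces
print(max(gentle) < 10.0, max(eager) > 10.0)
```

The gentle gain converges without ever crossing the goal; the eager gain reaches the neighborhood sooner but overshoots to 15 and bounces back and forth before settling.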

The next picture, of the car getting to a hill from the flatland below, is supposed to show how a speed control system should do a good job of maintaining the same speed, even when the outside world changes a lot.


But, this whole effect of locking down or "latching" or "clamping" a value, such as speed, to some predetermined value is really confusing to statistical analysis. The effect is that a variation that is expected to be there is not there. There's no trace of it. So far as statistical analysis shows, there is absolutely no relationship between the slope of the hill and the speed of the car. Well, that's true and false. The speed may not be changing, but the speed of the engine has changed a lot.
The same kind of effect could be seen in an anti-smoking campaign. The level of smoking in a region is constant, and then you spend $10,000 to try to reduce smoking. The tobacco companies notice a slight drop and counter by spending $200,000 to increase advertising. The net result is zero change in the smoking rate. Did your intervention have no effect? Well, yes and no.
The output (cigarette sales) has been "clamped" to a set value by a feedback control loop, so it varies much less than you'd expect. Again, this is hard to "see" with statistics that assume there is no feedback loop involved in the process.
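Here is a hedged sketch of that "clamping" effect. The plant and controller are cartoons I made up (speed equals throttle minus hill grade, with a cruise control cancelling the error each step), but the statistical punch line survives: the disturbance leaves almost no trace in the clamped output, while the actuator tracks it almost perfectly:

```python
import random

def corr(xs, ys):
    """Plain Pearson correlation, no stats package needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(7)
goal = 60.0
throttle, slope = 60.0, 0.0
slopes, speeds, throttles = [], [], []
for _ in range(5000):
    slope += random.uniform(-0.2, 0.2)   # the hill grade drifts slowly
    speed = throttle - slope             # toy plant: throttle minus grade
    throttle += goal - speed             # cruise control cancels the error
    slopes.append(slope)
    speeds.append(speed)
    throttles.append(throttle)

print(abs(corr(slopes, speeds)) < 0.2)   # the hill "vanishes" from the speed data
print(corr(slopes, throttles) > 0.99)    # but shows up in the engine's effort
```

Statistics on (slope, speed) would say "no relationship here" - exactly the false negative described above - while the throttle record tells the real story.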
For that matter, the fact that the "usual" statistical tests should ONLY be used if there is no feedback loop is often either unknown or dismissed casually, when it's the most important fact on the table.
(The "General Linear Model" only gives you reliable results if the world is, well, "linear" -- and feedback loop relationships are NEVER linear, unless they're FLAT, which also confuses the statistical tests, and sometimes the statisticians or policy makers.
The good news is that there is a transformation of the data that makes it "linear" again, which involves "Laplace Transforms", which I'm not going to get into today. But, stay tuned - we can make this circular world "linear" again so it can be analyzed, and you guys can compute your "p-values" and statistical tests of significance and hypothesis testing, etc.)

OK, then, I illustrate INSTABILITY
caused by a "control loop". In this case, a new driver follows a poor set of rules ("If slow, hit the gas. If fast, hit the brake pedal."). Those rules result in a very jerky ride, alternating between going too fast and too slow.
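The new driver's two rules can be simulated directly (illustrative numbers only - a pedal stomp moves the speed by a fixed 14 mph per step):

```python
def new_driver(goal=60.0, steps=40):
    """Bang-bang control: always full gas or full brake, never in between."""
    speed, trace = 0.0, []
    for _ in range(steps):
        if speed < goal:
            speed += 14.0   # "If slow, hit the gas."  (flat-out)
        else:
            speed -= 14.0   # "If fast, hit the brake." (flat-out)
        trace.append(speed)
    return trace

trace = new_driver()
print(trace[-4:])
```

Once the car first reaches the target, it never settles: it jerks between 56 and 70 forever, exactly the too-fast/too-slow ride described above.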

Then I have a really noisy picture that's really three pictures in one.
The left top side has a red line showing how some variable, say position of a ship in a river, varies over time. The ship stays mostly mid-stream until the boss decides to "help". Say the boss is up in the fog, and needs to get news from the deckhands, who can actually see the river and the river banks.
Unfortunately, the boss gets position reports by a runner, who takes 5 minutes to get up to the cabin.
So, working from perfectly good RULES, the captain sees that the ship is heading too far to the right. (Well, yes, that's properly PORT or STARBOARD or some nautical term. For now, call it "right".)
So, she uses a good rule - if the ship is heading too far to the right, turn it more to the LEFT, and issues that command.
The problem is that the crew had already corrected the drift to the right - but too recently for the captain to know about it, given the 5-minute delay. So, the captain tells them to turn even MORE to the left, which only makes the problem worse.
The resulting control loop has become unstable, and the ship will crash onto one shore or the other - not because any person is doing the wrong thing, but because the wrongness is extremely subtle. There is a LAG TIME in the loop: where the captain thinks the ship is NOW, based on her "dashboard", is actually where the ship WAS five minutes ago.
That "little" change makes a stable system suddenly become unstable and deadly.
People who are familiar with the ways of control systems will be on the lookout for such effects, and take steps to counteract them. People who skipped this lesson are more likely to drive the ship onto the rocks, while complaining about baffling incompetency, either above or below their own level in the organization.

The last picture shows some of the things that "control system engineers" think about.
These are terms such as "rise time", "overshoot", "settling time", and "stability". And Cost.
These terms deal with how the system will respond to an external change, if one happened.
But a lot of the effort and tools are dedicated to being sure that the system, as built, will be STABLE, and won't cause reasonable components, doing reasonable things, to crash into something.
This kind of stability is a "system variable" in a very real sense - a sense that is lost when any heap of interacting parts is called "a system." It has a very real physical meaning. It can be measured, directly or indirectly. And it can be managed and controlled, by very small changes such as reducing the lag time for data to get from person A to person B.
And my whole point is that this is something people analyzing and designing organizational behavior and public health regulatory interventions should understand and use on a daily basis.
Maybe we need a simulator, or game, that is fun to play and gets people into situations where they have to understand these concepts, on a gut level, in order to "win" the game.
These are not "alien" concepts. Most of our lives we are in one or another kind of feedback control loop, and we have LOTS of experience with what goes right and wrong in them -- we just haven't categorized it into these buckets and recognized what's going on yet.
One thing I will confidently assert is that once you understand what a feedback control loop looks like, and how to spot one, your eyes will open and the entire world around you will be transformed. Suddenly, you'll be surrounded by feedback loops that weren't there before.
The difficulty in seeing them may be due to the fact that what is flowing around this loop is "control information", and it can ride on any carrier, as I showed yesterday with the person getting a glass of water. The information can travel in liquids, solids, nerve cells, telephone wires, the internet, light rays, etc., and is pretty indifferent as to what it hitches a ride on.
The instruments keep changing, but the song is what matters.
You have to stop focusing on the instruments and listen to the song. Control System Engineering is about the songs that everything around us is singing. Once we learn to hear them, they're everywhere. Life at every level is dense with them. And they seem to be a little bit aware of each other, because sometimes they get into echoes and harmonies across levels and seem to entrain each other.
It's beautiful to behold. I recommend it!
W.
Monday, June 04, 2007
Controlled by the Blue Gozinta

For those who are following this discussion of feedback loops, we're most of the way through the basic description of the insides of such a loop.
I showed how a microphone and speaker, or getting a glass of water, represent kinds of feedback loops, and made a distinction between dumb feedback loops and smart - goal-seeking - feedback loops, also known as control loops. And I showed how control loops are everywhere in nature, made up of almost any substance - animal, mineral, vegetable, light, chemicals - and they don't care, because the principles work regardless. Control is to the loop as a song is to the instrument - you can play the "same" song on almost any instrument, or sing it, and the "sameness" is there.
So, I need to give a name to the four parts that I had in the upper left in this picture I drew yesterday:

The basic diagram that Professor Gene Franklin uses in the book "Feedback Control of Dynamic Systems" is similar to that block diagram, except for pulling the "GOAL" out and lumping the three other boxes "comparer", "model", and "decider" into a single blue box that is labelled "?" in his diagram of a car's cruise-control system for maintaining a constant speed.
So, the diagram is from that book, as quoted by me in slide 16 of my Capstone presentation on patient team management of diabetes control. I think you may need to click on the picture to make it zoom up large enough to read the words.

In any case, the only box on that diagram that is blue is the one that the feedback "goes into", so I'm calling it a "blue gozinta" as just a funny name that rhymes and that no one else is using.
Besides, the word "controller" rings all sorts of bells I didn't want to ring, echoing back to parents and school and bosses, etc.
Well, I guess I failed at that already, as I gave the example of "negative feedback" of a student getting "graded" by a teacher for performance on an "exam", and receiving a failing grade of zero percent, which could be quite discouraging and dampen enthusiasm for the subject.
Franklin's picture has two other minor differences from mine. First, he adds "sensor noise" to the bottom "speedometer" box, to emphasize that this loop is all built around a perception of reality, not reality, and the thing that does the perceiving may not be perfectly accurate. That's a pretty good model of human beings or any other regulatory agent or agency.
As John Gall would say in his book Systemantics -- inside a "system" the perception IS the reality. The medical chart IS the patient.
That effect is so strong that the patient can be dying in the bed while caregivers are so busy looking at monitors showing something else that they don't see the problem -- which is part of what went on in the tragic Josie King case, where an 18-month-old child slowly died of thirst in the middle of one of the best hospitals in the world. So, yes, we had better remember on our diagram that what our senses tell us is going on may be very wrong. We'll come back to that in a big way when discussing how human vision and perception get distorted by all sorts of invisible and insidious pressures - especially in groups with very strong beliefs.
The other difference between Franklin's diagram and mine is on the upper right, where he adds an incoming arrow labelled "road grade". This means the slope of the road, and how hilly it is - not what we think of the road. His point is that the behavior of the car, and the speed it ends up going after we have set our goal and put the gas pedal where we think it should be, ALSO depends on factors outside the car - such as whether it's going up a steep hill.
That will also be a universal pattern. The results of our actions are mixed into the impact of outside actions, which makes it hard to disentangle the two from just looking at the end result. The good news is that there are software programs that can disentangle those two for us.
Anyway, the whole point of this post is to get the "blue gozinta" identified.
This little blue box is the heart of the problem, because "feedback" is really just information, and is not intrinsically "positive" or "negative". In this diagram, the "feedback" is the speed of the car, as measured by the speedometer. That's just a number.
The number becomes "positive" or "negative", leading to "more gas!" or "more brake!" actions, only because the blue box, the controller, the blue-gozinta, compared that number to the desired speed, and saw that it was less than desired. Then the controller had to check a mental model and use some rule like "if we're going too slow, push on the pedal on the right!"
"If we're going too fast, push on the pedal on the left!'
As anyone who has ever taught someone else to drive knows, that turns out NOT to be the actual rule that drivers use to control the gas pedal. Those rules and that simplistic model of the world produce this behavior: hold down the gas until the car shoots past the correct speed, then slam on the brake until the car passes the desired speed on the way down, then slam on the gas until it passes the right speed on the way up, then slam on the brake again, and so on. The car jerks back and forth in an unstable and very unpleasant oscillation forever, if that's the only rule in use.
However, we can probably all think of organizational policies or laws that have exactly that behavior, and are either too harsh or too lenient, or something, and keep on going back and forth and never manage to get the right setting.
It has been hard to recognize those problems and go
- Hey, I've seen that behavior before!
- That's a "control loop" behavior.
- The way to fix it is to change what goes on in the blue gozinta box.
- What part of the process / law / policy I have corresponds to that box?
- That's where the problem can be fixed.
It's really important to see that there is nothing wrong with the car. The gas pedal works fine, and does not need to be replaced. The brake pedal works fine. The speedometer (in this case) works fine. What is wrong is inside the blue box, and is subtle - it's the "mental model" or rule that is used to decide what action to take depending on what information is coming into the box from outside.
And, the realization is that a very simple rule, a dumb rule, doesn't accomplish what we want, but a slightly better rule will make the very same parts behave correctly together. The better rule requires a little more brains inside the box. We have to track more than just how fast we are going and how fast we want to go -- we have to figure out how fast we are converging on the goal, and start letting up on the gas as we get near the target speed, before we even get there.
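The two rules - the dumb one and the slightly better "ease up as you get close" one - can be compared directly on the same toy car (my numbers, purely for illustration):

```python
def cruise(rule, goal=60.0, steps=40):
    """Same car, same pedals; only the rule in the blue box differs."""
    speed, trace = 0.0, []
    for _ in range(steps):
        speed += rule(goal - speed)     # push chosen from the current error
        trace.append(speed)
    return trace

# Dumb rule: always push full force in whichever direction the error points.
dumb = cruise(lambda e: 14.0 if e > 0 else -14.0)
# Better rule: same parts, but ease up on the gas as the target gets near.
better = cruise(lambda e: max(-14.0, min(14.0, 0.5 * e)))

print(max(dumb) > 60.0, max(better) <= 60.0)
```

Same gas pedal, same brake, same car; only the contents of the blue box changed, and the jerky oscillation becomes a smooth approach that never overshoots.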
The controller needs to "plan ahead" or "look ahead" and react to something that hasn't happened yet. This seems to fly in the face of science and logic. How can a dumb box react to something that hasn't happened yet? We can't afford the "glimpse the future!" add-on module, at $53 trillion.
Ahh, but here's another wonderful property of feedback loops. What goes around comes around. We've been here before. Nothing is new under the sun. The past is a guide to the future.
Either putting out the garbage can causes the garbage trucks to come, or we can learn the routine well enough that we can predict when the trucks will come based on past experience. It turns out, in a loop, the past and future become very blurred together.
Being able to recall the past IS being able to predict the future, in a control loop. We don't just go around a control loop once or twice -- we go around it thousands or millions of times. So, if we have any rudimentary learning capacity at all, we can start to notice that certain patterns keep happening. We can detect what always seems to happen JUST BEFORE the bad thing happens, and use THAT as the trigger event to react to instead.
So, we have a second rule that gets added by experience -- "When you get near the target goal, start easing up on the pressure to change and start increasing the pressure to stay right there and keep on doing exactly what you're doing."
This basic ability to learn from experience is the simplest definition of "intelligence" we can come up with. Do you recall the joke about Sven and Ollie that Garrison Keillor told?
Sven comes by Ollie's house and sees that Ollie has both ears bandaged.
"What happened?" he asks.
"Well," Ollie replies, "I was ironing and the phone rang and I picked up the iron by mistake and held it to my ear!"
"Oh.... So, what happened to your other ear?"
"Ahh.... once I was hurt, I tried to call an ambulance."
So, the moral of this post is that the key to the behavior of a system managed by a feedback control loop is the blue box, the "blue gozinta." Very simple changes to that box can change a horrible experience into a pleasant ride.
The heart of "Control System Engineering" is figuring out what to put in that box.
For human beings, a second major problem is that little tiny addition of "sensor noise", and figuring out how to prevent, reduce, or account for distortions in perception that can cause the system to be responding to a perception, not a reality.
And, for both, there's another very subtle but very well understood problem, and that is "lag time." I didn't draw "lag time" on the picture but I will in the future.
If we're trying to drive based on the speedometer reading from 5 minutes ago, things will not go well for us. In fact, the more we try to "control" things, the worse they can get.
This is a huge problem. A perfectly stable, perfectly controllable system can become an unstable nightmare and fly out of control simply because there is too much lag between collecting the sensor data and presenting the picture to the controller.
Or, in hospitals and business, it's popular now to have a "dashboard" that shows indicators for everything, often exactly in "speedometer" type displays.
The problem is, the data shown may be two months old. We are trying to drive the car using yesterday's speedometer reading at this time of day. When I state it that way, the problem is obvious. But, I can't find any references at all in the Hospital Organization and Management literature about the risks caused by lag times in dashboard-based "control".
At this point, even with just this much understanding of control loops, you, dear reader, should be starting to realize how many places around you these loops are being managed incorrectly.
We're spending a huge amount of effort trying to improve the brakes and gas pedals, when the actual problem is a lag time in the messages to upper management, or that sort of problem.
None of these problems need to be in our face. These are all "Sven and Ollie" problems that we can fix with what we know today.
But that will only work if we're really sure about how control loops work, and how they fail, and can make that case to the right people in the right way at the right time.
Take home message -
Even a very basic understanding of control loops can help us ask the right questions, and realize where the problems may be lurking instead of where they appear to be at first glance, so we don't waste our time barking up the wrong tree.
Especially in complex organizations, the generator of failure is usually not that labor failed or management failed, or that any one person did something "wrong." What is killing us now is that we have a huge collection of "system problems" that are due to things like "lag time" and "feedback". Every piece of the system is correct, but the way they behave when connected is broken. There is a "second level" of existence, above the pieces, in the "emergent" world. Things can break THERE. Most of the systems humans built are broken there, or at least seriously in need of an engine mechanic, because we didn't even realize there WAS a THERE.
Worse, "management" still thinks that discussion of "higher level" problems means that someone is pointing the finger at THEM, and that leads to bad responses.
The problems are subtle. We won't see them unless we spend a little time studying how control systems work, and how they fail. Then, the patterns will be much more obvious, and our efforts will be much more likely to be successful. And, then we can stop blaming innocent people for problems that aren't their fault.
It is, however, in my mind, the fault of the whole enterprise of Public Health if this kind of insight is not taken advantage of when designing regulatory interventions or in helping individuals try to "control" behavior. That, in my mind, would be a clear failure of due diligence.
Or - it would be, if these concepts had been published in the peer-reviewed literature, which is the only thing they read and pay attention to.
Which says, it's my fault for not publishing this and your fault, dear reader, if you don't get after me to do so.
After all - I depend on feedback from my readers to control my behavior. So, what I do depends on what you do.
Wow, doesn't that sound familiar?
Sunday, June 03, 2007
THIRD kind of feedback discovered!
This just in! They found a third kind of feedback! The third kind is really, really, really important to understand if you want to understand what's going on in society today.
You know about the first kind of feedback, where the technician puts the speaker blasting into the microphone, and the result is a terrible sound, a rising squeal: e-e-e-e-e-EEEEE.
It's called "feedback" because the sound coming out of the speakers is fed back into the microphone, where it goes around again and the even louder sound comes out of the speakers and is fed back into the microphone where it is amplified and gets even LOUDER, etc.
Even though the result is unpleasant, this is called "positive feedback" because the signal is being reinforced and encouraged to grow stronger and stronger. Mathematically, with each loop more volume is being added in, so the equation, if we wrote it out, would need a PLUS SIGN.
Unfortunately, we are all familiar with the second kind of feedback, "negative feedback", which is what our best ideas or songs usually receive from friends and teachers.
You can see the "minus sign" on my second clever picture where a music student has just gotten his music test back with 0% correct, and is thinking of throwing his guitar, and his musical career, in the trash can.
This is "discouraging" feedback.
So, with positive feedback being "encouraging" and negative feedback being "discouraging", it's hard to see where there's room for a third kind.
I mean, what would it be? "Neutral" feedback, neither positive nor negative?
The third kind of feedback can be called "goal-seeking" feedback, or "intelligent feedback" or "smart feedback" or "cybernetic feedback" or "regulatory feedback" or "feedback control."
Rather than blindly being always POSITIVE or always NEGATIVE, this kind of feedback varies depending on whether the news coming in that second should be encouraging or discouraging.
That concept implies that this is both "feedback with eyes" to see what the news is, and "feedback with brains" to decide how to interpret the news.
If we start at the glass being filled with water, the information flows from the glass into the person's eyes, then to their brain, where it is compared to a desired goal - a glass filled up to some mark or point. Since the water is not up to that level yet, the brain decides that the spigot could be opened wider to let more water flow, so this message is sent down to the hand, which carries out the message. That action causes more of the water in the 55-gallon drum to flow out of the spigot into the glass, raising the level of water in the glass. That information flows into the eyes, which ... goes around the loop again and again.
But, each time the information goes around the loop, the result may vary, depending on how full the glass is. The same kind of loop happens in a car, where the driver has some speed they want to go, looks at the speedometer, reads how fast they are going now, and decides whether more gas or more brake pedal is the right thing to do next, does that, and the control information goes around the loop again.
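For the glass-of-water version, the whole loop - sensor, goal, comparer, decider, actuator - fits in a dozen lines. The units and numbers are mine, purely for illustration:

```python
# Sensor -> comparer -> decider -> actuator, around and around the loop.
goal_level = 8.0                      # the mark on the glass
level, rounds = 0.0, 0
while goal_level - level > 0.01:      # comparer: are we at the mark yet?
    error = goal_level - level        # sensor reading vs. goal
    spigot = min(1.0, error / 4.0)    # decider: wide open if far, ease off near
    level += spigot * 0.5             # actuator: water flows into the glass
    rounds += 1
print(round(level, 2), rounds)
```

The spigot stays wide open while the glass is nearly empty, eases off as the level nears the mark, and the loop exits when the comparer is satisfied - without ever overshooting the mark.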
Now, this is a different kind of animal, this feedback control loop. But it is a very very popular design pattern. You'll find it everywhere, once you look, because it is a key building block for anything that's alive, and even for many things that are not alive, but act sort of alive, - like robots or automatic-speed-controls on your car.
Notice that I drew a closed path, a loop, but what is flowing around the loop is actually control INFORMATION. The information doesn't really care what carries it - whether it is an electrical wire or knots in a piece of string like the Incas used, or a handful of pebbles or a piece of paper or flow of water or movement of a muscle. Information is cool - it will hitch a ride on anything it can get.
In our loop, the information starts as the level of water in a glass, then it changes to light rays, then it goes into the eyeball and changes to nerve impulses, gets compared in the brain to some mental goal or image, then goes through some kind of "decider" mechanism to decide whether the glass is full yet or not, then gets resolved to nerve impulses down a motor nerve, then to muscle movement in the hand, then changes to spigot movement, then changes back into water in the drum moving down to the glass.
The loop is a picture of "control information flow", and the math doesn't care what physical thing is used to implement different parts of the flow. The concept is both very real, and at the same time very abstract.
But this is the tremendous power of the concept. Nothing depends on whether the process being described is physical, or solid, or liquid, or light, or electrical impulses, or thoughts, or images, or muscle tissue. The only thing that matters is whether the CONTROL loop exists, and has some kind of SENSOR (the eyeball), some kind of GOAL (how full I want the glass), some kind of COMPARER (is the glass that full yet or not?), some kind of mental model of what changes what ("To get MORE water, PULL the spigot lever forward towards me"), and some kind of ACTION-TAKER (to make that happen, move my hand towards me, and to make that happen, send a pulse down, let's see... oh yeah, down THIS nerve).
The PATTERN is like a song, and the song doesn't care whether it is sung, or played on a piano, or played on a guitar, or played on bottles filled with different amounts of beer being hit by a stick -- it's still the "same song."
In our case, here's the song that Nature sings over and over again, everywhere inside our bodies and outside our bodies. When I write it out, it looks boring, like sheet music compared to actually playing the music. So, don't expect it to LOOK exciting. What matters is what happens when the music is PLAYED.
The other truly good news is that, once you understand how this kind of loop operates, and what you can do with it, and what you cannot do with it, that insight will carry over to thousands of different parts of life where the same loop operates - a different musician singing the same song.
So, here's the loop, that looks, as I promised, boring on paper.

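The loop can also be written out as a few lines of code. This is my own toy version, not anything from a textbook, and the numbers (a 250 ml goal, 30 ml per "step" of pouring) are made up for illustration:

```python
# A toy version of the glass-filling control loop: SENSOR, GOAL,
# COMPARER, and ACTION-TAKER, going around until the goal is met.
def fill_glass(goal=250.0, flow_per_step=30.0, steps=20):
    level = 0.0                     # water in the glass, in ml
    for _ in range(steps):
        sensed = level              # SENSOR: the eyeball reads the level
        error = goal - sensed       # COMPARER: how far from the GOAL?
        if error <= 0:              # DECIDER: full enough? close the spigot
            break
        # ACTION-TAKER: open the spigot, but pour no more than needed
        level += min(flow_per_step, error)
    return level

print(fill_glass())   # stops right at the goal: 250.0
```

Notice that nothing in the code cares whether "level" is water, temperature, or blood sugar - only the shape of the loop matters, which is exactly the point.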
Well, you wouldn't ask, but I will: what about the first two kinds of feedback? Do we need a different picture of what "song" those are singing?
No - more good news is that one picture will do. Those loops are really boring songs that essentially involve going up one key on the piano at a time, or going down one key at a time. Yawn.
Positive feedback is the same loop with stunted growth. It has no comparer, no goal, no mental model, and a single decider which is "ADD MORE".
Negative feedback is the same stunted loop with a simple decider rule: "WHATEVER you give me, I'll give you back less."
But, oh boy, control feedback can play a symphony with 8 voices and harmony. To think of "positive" feedback and "control feedback" being in the same family is like comparing a clock to a fancy BMW sports-car -- yes, they are both machines. Yes, one has a really boring song ("whatever time you showed last, add one second and show that next.") and the other has an "open-ended" song: "Wherever the driver wants to go and however fast she wants to go ... make it so!"
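Here is a tiny sketch of my own - not from any textbook - contrasting the three kinds of feedback over ten steps. The rates, and the goal of 70, are arbitrary:

```python
def positive_feedback(x=1.0, steps=10):
    # "ADD MORE": whatever you give me, I give back more -- runaway growth
    for _ in range(steps):
        x = x * 2.0
    return x

def negative_feedback(x=1.0, steps=10):
    # "whatever you give me, I'll give you back less" -- decay toward zero
    for _ in range(steps):
        x = x * 0.5
    return x

def control_feedback(x=1.0, goal=70.0, gain=0.5, steps=10):
    # SENSOR + GOAL + COMPARER: push in proportion to the remaining error
    for _ in range(steps):
        x = x + gain * (goal - x)
    return x

print(positive_feedback())   # 1024.0 -- blows up
print(negative_feedback())   # about 0.001 -- dies away
print(control_feedback())    # about 70 -- settles wherever the GOAL is set
```

The first two can only ever play their one-note songs; change the goal in the third and it follows. That is the "make it so!" difference.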
So, this is the background I wish my audience had when I did my Capstone presentation on how small teams of regular people (not doctors) could help each other get diabetes under control. There's that word again - "control" - and, yes, there's a "regulatory feedback loop" involved: taking actions, looking to see if the actions worked, deciding what to do next, trying to do that, and around the loop again.
In any case, I will end with the same thought I put into that Capstone: if we want to get things "under control", and we want to use "regulations" in a loop, "monitor" how successful they are, and "modify" them based on that information, then we are playing in the "control loop" ballpark, and it seems a minimum of due diligence to read the literature in that field and not reinvent the wheel, let alone get it wrong.
Feedback control theory is over 100 years old, and is very well developed, and has really neat toys and calculators to do all the hard stuff and make it easy to use. Probably any engineering college in the US has at least a course in "Control System Engineering". There are textbooks and journals and conferences, etc.
I found "Feedback Control of Dynamic Systems" to be the most readable, for the first two chapters, coming into the area from the outside.
There's a lot of interest lately in another 50-year-old field - System Dynamics - where one of the goals is to try to capture, even qualitatively, the LOOPS in whatever system or organization or process you're trying to change, and then, if you can, the DIRECTION of push, be it "positive" or "negative". The System Dynamics Society has a whole literature and set of publications on how to do that, and there's a graduate program at Worcester Polytechnic Institute in that field.
But, I have to note with some dismay, those analyses do not tease apart CONTROL loops from "positive feedback" and "negative feedback" loops. As I argue above, these are very different animals. Getting the connectivity and loops mapped out is a big part of the task. But I think the modeling of what happens next when you simulate this would be greatly improved if "control loops" were then distinguished from "dumb loops".
Just a suggestion for any SDS members who happen to be reading this. :)
(For more on System Dynamics and links, see my post "The Law of Unintended Consequences")
Oh, yes, I kind of lied a little bit when I titled this "Third kind of feedback discovered!" because it's only been discovered in the engineering literature, and has not been officially noted yet in the Public Health literature, and therefore, lacking "judicial notice" it currently does not exist so far as Public Health practice is concerned.
Again, my suggestion for due diligence applies.
And, no, it doesn't matter that human beings and possibly corporations and cultures and even lawyers are part of the system being studied in public health -- the theory and operation of control systems is identical. It doesn't matter whether it's animal, vegetable, mineral, light, people, chemicals, water -- the same control system laws control what can happen, can predict what might happen if you changed something, and can guide your intervention along pathways that are even conceptually feasible.
Monday, May 21, 2007
When little things matter

This post is a very simple lesson that can give us a profound deeper insight into how things work and why so much around us doesn't.
This arose from a very simple question I asked yesterday of my daughter, who is learning how to teach the concepts of physics to 4th graders. No number-crunching is involved. Relax.
Here's the situation. Some people, who work at the Daily News in New York, between sightings of Elvis, argue that the Earth is a hollow shell, like a tennis ball, and there is a whole world inside it, illustrated by my clever picture #1. Oh, yes, and they would add, there are holes at the poles where the flying saucers go in and out, which of course the military knows about but keeps secret from us all. The aliens from the UFO, shown here as "little green men" live on the inside as shown in the picture.
My question is, "Do you think that would work?" or, paraphrased, "What's wrong with this picture?" Of course, there are many things wrong with the picture, but the one I wanted to focus on has to do with gravity. If you could make a hollow earth-sized space ship, unless it was spinning really, really fast, the problem is that the little green men wouldn't be able to stand up as shown. In fact, nothing would press them against the inner surface at all - they would simply float.
See picture #2 for why that would happen.

So, here's the deal. In your mind, break up the earth into "billions and billions" of basketball sized pieces.
Some of these, "under" the alien's feet, would be close, and would pull on them with a lot of force.
Some of these, "above" the alien's head, would be far away, and would pull on them with just a little force.
So, you think, the close ones will win and the pull will be towards the local "ground".
The problem is that there are way more chunks of earth far away than there are close up. And even a little pull, multiplied by billions and billions, adds up to just as much as a huge pull multiplied by a small number.
When you actually do the sum - Newton did, and the result is called the shell theorem - the many weak pulls from the distant majority of the earth exactly cancel the few strong pulls from the little bit of earth beneath the alien's feet. The net gravitational force anywhere inside a uniform hollow shell is zero, so the alien doesn't stand on the inner surface; it floats. The nearby "ground" is nowhere near enough to hold it "down", because the tiny pulls from the rest of the shell, added up, match it exactly.
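For anyone who wants to check the chunk-by-chunk argument, here is a small numeric sketch of my own (the grid sizes and the alien's position are arbitrary choices): break a hollow shell into many equal-area chunks and add up their inverse-square pulls on a point standing just inside the shell.

```python
import math

R = 1.0                         # shell radius (arbitrary units, G = mass = 1)
px, py, pz = 0.0, 0.0, 0.9      # the alien, standing just inside the shell
nz, nphi = 400, 400             # an equal-area grid of "basketball" chunks
fz = 0.0                        # net pull along the alien's up/down axis
for i in range(nz):
    z = -1.0 + (i + 0.5) * 2.0 / nz        # equal-area bands in z
    r = math.sqrt(1.0 - z * z)
    for j in range(nphi):
        phi = 2.0 * math.pi * (j + 0.5) / nphi
        dx = R * r * math.cos(phi) - px    # vector from alien to this chunk
        dy = R * r * math.sin(phi) - py
        dz = R * z - pz
        d2 = dx * dx + dy * dy + dz * dz
        fz += dz / (d2 * math.sqrt(d2))    # z-part of an inverse-square pull

print(fz / (nz * nphi))   # close to 0: the far chunks cancel the near ones
```

Move the alien around inside, or refine the grid, and the total stays pinned near zero - the many tiny far pulls and the few huge near pulls balance exactly.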
But, someone would ask, if the earth were spinning at a thousand miles an hour at the equator - which it is - wouldn't the centrifugal force hold the aliens "down" onto the shell, just like in the movies of spinning circular spaceships? Yes, Timmy, it would - but if the earth spun that fast, then the people, rocks, and buildings on the outside of the earth would also be going that fast, and we'd all be flung "upwards" and fly off the earth. We don't need any more math, just logic - because we aren't thrown "upwards", the earth isn't spinning fast enough to hold aliens "upwards" either.
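You can check the spin argument with simple arithmetic. This is my own back-of-the-envelope calculation, using standard values for the earth's radius and rotation:

```python
import math

R = 6.371e6                  # earth's radius, in meters
day = 86164.0                # one full rotation (a sidereal day), in seconds
v = 2.0 * math.pi * R / day  # speed of the ground at the equator
a = v * v / R                # acceleration needed just to move in that circle
g = 9.81                     # ordinary gravity, m/s^2

print(v * 2.23694)  # roughly 1040 -- the famous "thousand miles an hour"
print(a / g)        # about 0.0035 -- a third of a percent of gravity
```

So the spin offsets only about a third of a percent of our weight, which is why we stay put on the outside - and why the same spin could never hold aliens against the inside of the shell.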
What's the point? The point is that most people think the first picture is right, and that local effects dominate the world. It's not true in this case - the distant effects, each one tiny, add up to fully match the local ones, because there are just so many of them.
The conceptual mistake - assuming that "big" local things determine the outcome - carries over into our thinking about much of human society and our decisions about what will "work" and what won't. Unfortunately, many of our social decisions, like picture #1, look fine at first glance and "feel right", but they turn out to be wrong in the same way. That is, we throw the "tiny" effects out of the calculation before we have multiplied by how many places they occur, instead of afterwards. That gives us the wrong answer.
To decide whether an effect is "tiny" or "negligible" or not, we can't just compare it to some other effect locally. We have to multiply it out first, or compound it if there is feedback, and then decide which effect is "tiny" and which one is "big" - and whether our alien will fall "down" or "up".
Many of our social structures, like the alien, are built on the wrong model - and they keep trying to fall apart, and it's baffling to us why that is happening. It "should" work. If that is happening in your world, maybe the same thing is true there. Maybe many tiny effects that you think "go away" actually turn out to dominate the answer.
This is true ten times over if there is one of those "feedback loops" I keep going on about. If an effect is "compounded", like interest on your credit card, it turns out to be way more powerful on the outcome than you would think at first glance.
These "feedback" systems are used to control everything from elevators to airplanes, and the engineers who analyze them use a different lens than an inexperienced person would, to account for this compounding before they do their calculations. They apply something called "The Laplace Transform", named after some guy who lived a long time ago whose name was "Laplace." That "operator" does two nifty things at once. First, it turns "compounding" - exponential growth and decay - into simple algebra, so you see what an effect will add up to, not just what it looks like locally. And second, it collapses the loop into a single straight-through expression from input to output, so all our favorite statistics can be used on it again, but this time giving us the right answer. (Warning - if you look it up, don't mistake this for the "Laplace Operator", which is a whole different thing, named after the same guy!)
The math behind the Laplace Transform looks scary, but, just like the rest of statistics, that all gets hidden inside the calculator and all you need to do is push the correct button to use this, so it's not a big deal.
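To make the "compounding becomes algebra" claim concrete, here is a little numeric sketch of my own. By definition, the transform of a compounding process e^(a*t) is the integral of e^(a*t) times e^(-s*t), and it comes out to the simple algebraic expression 1/(s - a). The particular values a = 0.5 and s = 2.0 are arbitrary:

```python
import math

a, s = 0.5, 2.0     # a growth rate, and a transform variable with s > a
dt, T = 1e-4, 40.0  # a small time step, and a horizon long enough to be "forever"

total = 0.0
t = 0.0
while t < T:
    total += math.exp(a * t) * math.exp(-s * t) * dt   # f(t) * e^(-s*t) * dt
    t += dt

print(total)           # about 0.667, by brute-force integration
print(1.0 / (s - a))   # about 0.667, by pushing the algebra "button"
```

The exponential - the hard, compounding part - has been traded for a single division. That trade is the whole trick.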
Actually, there's a third effect, but it's distant and subtle - the Laplace Transform means that this concept will not put biostatisticians out of a job, so they can stop the torch-and-pitchfork parade up to the castle to kill the idea. Without some way to counter that distant and subtle effect, we'd expect the small resistance put up by a large number of established researchers to dominate the scene, and squash this idea from being taken seriously.
Anyway, now you have a better idea what I'm talking about when I say such things.
References & further reading
======================
A description of the Laplace Transform, and what it does, in something approaching English, can be found in any "control system engineering" introductory textbook. I prefer the explanation in this one.
For anyone who enjoys the idea of big things and little things switching roles, stay tuned for a discussion of "Olbers' Paradox", or the very serious question of why the sky is dark at night.
Any idiot knows that's because the sun has set. Olbers, however, raised the point that the sun, where you can see it, will be the same brightness per area, regardless of how far away from it you are. If you imagine holding up your thumb and first finger in a circle and looking at the sun from close up, you'd see a small portion of it, and that portion would be very bright. If you were twice as far away, you'd see more of it through that circle of your thumb and finger, but it would be farther away and less bright. Well, it turns out those two effects exactly cancel out.
The amount you see goes up as the square of the distance, and the brightness goes down by the square of the distance, so the total amount of light coming through that circle remains constant.
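That cancellation is easy to check. Here is a toy calculation of mine: take a thin shell of evenly scattered stars at some distance, count the stars, and divide each one's light by the square of the distance (all the constants are set to 1 for simplicity):

```python
import math

def light_from_shell(distance, thickness=1.0, star_density=1.0):
    # the number of stars in a thin shell grows as the distance squared...
    stars = star_density * 4.0 * math.pi * distance**2 * thickness
    # ...while the light reaching us from each star fades as the distance squared
    per_star = 1.0 / distance**2
    return stars * per_star

print(light_from_shell(10))    # about 12.566 (that's 4 * pi)
print(light_from_shell(1000))  # the same -- distance has dropped out entirely
```

Every shell contributes the same amount of light, near or far, so an unlimited number of shells would add up without limit.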
Sooner or later, you get far enough away that the sun doesn't fill the circle. But, no matter, because if there are an infinite number of stars scattered evenly around the universe, sooner or later any line you draw will run into one. And the little section of sky that one covers will also be as bright as the sun.
Which means, the entire sky should be as bright as the sun, and we should all be cooked.
The math, it turns out, is correct. Which means one of our other assumptions about what's out there is wrong. Fascinating...