Tuesday, July 31, 2007

Subliminal and subconscious systems

Then again, we can look "downward" or "within" and see systems with "minds of their own" that control a lot we thought we controlled.

Here's the lead of an article from today's NY Times.

July 31, 2007

Who’s Minding the Mind?

In a recent experiment, psychologists at Yale altered people’s judgments of a stranger by handing them a cup of coffee.

The study participants, college students, had no idea that their social instincts were being deliberately manipulated. On the way to the laboratory, they had bumped into a laboratory assistant, who was holding textbooks, a clipboard, papers and a cup of hot or iced coffee — and asked for a hand with the cup.

That was all it took: The students who held a cup of iced coffee rated a hypothetical person they later read about as being much colder, less social and more selfish than did their fellow students, who had momentarily held a cup of hot java.

Findings like this one, as improbable as they seem, have poured forth in psychological research over the last few years. New studies have found that people tidy up more thoroughly when there’s a faint tang of cleaning liquid in the air; they become more competitive if there’s a briefcase in sight, or more cooperative if they glimpse words like “dependable” and “support” — all without being aware of the change, or what prompted it.

--- wade






Systems explanations for student behavior

I'm continuing to reflect on why students appear to be changing their behavior when their teachers assert that they, the teachers, did not change what they were doing.

When the people in a system are still doing what they were doing before, but the result changes, it suggests that some emergent system-level feature has changed -- probably one that no one even knew was there.

It doesn't take very much of a twist or warp to the world, if it is universal, to end up with an M.C. Escher world where the parts still appear to be just fine, and yet the whole has become broken. These two pictures by Escher illustrate that. The stairs in the one picture and the flow of water in the waterfall are both clearly impossible loops -- and yet it is difficult, if not impossible, for the unaided eye to directly SEE what is wrong and where.



The problem is that no one thing is wrong very much, and our eyes are used to a little noise, which we "squelch" to silence -- a strategy that works fine if the discrepancies are random, chaotic "noise." This leaves an opening in our perceptions, a gap, a blind spot, that Escher brings home to us. It is, as Douglas Hofstadter pointed out in Gödel, Escher, Bach, a "strange loop," and one of its properties is this "non-transitive" property that we, as humans, are just not hard-wired to grasp, regardless how much we try.

So, I illustrated the exact same thing with the "non-transitive dice" here recently, where just because A beats B, and B beats C, you cannot conclude that A will beat C. Or, if stairstep 1 is lower than step 2, and step 2 is lower than step 3, you can no longer be sure that step 1 is lower than step 3.

So, when we run into this very common situation in life, we are unable to process it, and the outcome of our thinking is, as they say, "undetermined." It feels so wrong. It can't be right. So, we force it to fit, like stuffing too much in a suitcase, and just sort of ignore the parts that stick out the edges, by common agreement to be silent about such things, because "that's just the way things are." Every time it comes into our heads we can see it, briefly, and are totally surprised yet one more time -- and then as soon as we let go it evaporates again, so our total net learning curve is zero. It is, alas, to paraphrase Dave Barry's description of Labrador Retrievers' reaction to being asked if they want to go for a walk: "Walk? Wow! What an idea! This is GREAT! Who would have thought of this!?!"

And, when we are faced with more than two items to choose from, whether it's sports teams or jobs or dates or mates or candidates for jobs or elections, we all "know" that there "MUST" be a "BEST" one, and all that remains is for us to "FIND" it. We vote. We use weighted voting. We use some sort of squared voting. We use weighted sums of squares. We are just so convinced that there has to be a "best," without considering the reality that only certain kinds of things have a "best" -- and those things are boringly predictable single-dimensional things that are "transitive" in the way we are measuring them.
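Voting itself supplies the cleanest small example: the classic Condorcet paradox. Here's a minimal sketch (the three ballots are invented for illustration, not taken from any real election) showing a group whose majority preference runs in a circle, so "find the best candidate" has no answer. Majority preference simply isn't transitive:

    # Three voters' honest preference orders -- a classic Condorcet
    # paradox example, invented for illustration:
    ballots = [
        ["A", "B", "C"],   # voter 1 prefers A over B over C
        ["B", "C", "A"],   # voter 2
        ["C", "A", "B"],   # voter 3
    ]

    def majority_prefers(x, y):
        """True if a majority of voters rank x above y."""
        wins = sum(1 for b in ballots if b.index(x) < b.index(y))
        return wins > len(ballots) / 2

    for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
        print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")

    # All three lines print True: the group prefers A to B, B to C,
    # AND C to A. The majority's "better than" is a loop with no top.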

We are used to "height" being one such thing, and usually, in the real world, it is. In Einstein's world of general relativity, however, once space is "curved," this is no longer true. How much you have to climb to get from point A to point B depends on your path. In fact, in a bicyclist's dream come true, there may in fact be a "downhill" path all the way from point A to point B.

Hofstadter illustrates this property with Bach's musical chords as well, where the perceived pitch keeps on "going up" with each successive chord until, surprise, it has come back to the place where it started, all the while sounding, to our ears, higher and higher.

We shake our heads, like a wet dog, to forget this clearly "wrong" result again. This must be a computational error, or too much to drink. We must have dropped a decimal point or something. This can't be right! (but it is.)
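For the curious, the illusion Hofstadter describes is usually modeled today as a Shepard tone, and you can synthesize a rough one in a few lines. This is a minimal sketch under my own parameter choices (sample rate, six octaves, a raised-cosine loudness envelope) -- an illustration of the idea, not a reconstruction of Bach's canon:

    import numpy as np

    SR = 22050  # sample rate in Hz; all parameters here are my own choices

    def shepard_glide(seconds=10.0, octaves=6, f0=32.7):
        """An 'endlessly rising' tone: components an octave apart glide up
        together, while a loudness envelope fades each one out at the top
        of the frequency window and back in at the bottom."""
        t = np.arange(int(SR * seconds)) / SR
        rise = t / seconds                 # each component climbs one octave
        out = np.zeros_like(t)
        for k in range(octaves):
            pos = (k + rise) % octaves     # position in the log-frequency window
            freq = f0 * 2.0 ** pos         # instantaneous frequency
            amp = 0.5 * (1 - np.cos(2 * np.pi * pos / octaves))  # quiet at edges
            phase = 2 * np.pi * np.cumsum(freq) / SR             # integrate freq
            out += amp * np.sin(phase)
        return out / np.max(np.abs(out))

    samples = shepard_glide()

Play the buffer on a loop and the pitch seems to climb forever while going nowhere: the audio version of Escher's staircase.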

Well, where am I going with all this preamble? I'm going back to the question of what happened to the students, and to my original question, in my first post, of "What have we done to our children?" -- which assumes that if it got done, and we had control of the schools, then we did it, whether we intended to or not.

The change in our behavior as educators did not have to be huge to change the net result. In fact, the change in our behavior could be imperceptible to us, or as mathematicians say, "of measure zero" -- a fancy way of saying that it's there, but safe to ignore.

So, let's pick a different hypothesis or explanation to try out -- suppose the pressures of cost-effectiveness, "analytical thinking", and other such things, over time, have in fact warped the whole system just enough that "things" that used to work and produce result "A" no longer work. We haven't changed what we do, but the result has changed.

This is precisely the sort of thing I described in my favorite Snoopy cartoon, where he says in his profound and simple way -
"Did you ever notice,
that if you think about something at 2 AM,
and then again at noon the next day,
you get two different answers?"
Same input - different output, and whatever changed is totally invisible from inside the system.

Well, hmm. So, life is not quite as simple as we would prefer it to be. Rats!

Our youth, our students, our children are, however, exquisitely sensitive to context and, despite their rebellious nature, tend to take on shape based on the actual context they are in. If that shape has changed (still to be verified), then the context probably did change, even if we didn't notice it change from our vantage point inside the "system."

And, from personal observations, I agree with the students, even though the middle area is fuzzy and won't lie flat, and has parts sticking out the edge of the suitcase. If I talk to doctors, they are sincere, caring people, but doctors-in-context-as-a-whole, viewed from the outside patient viewpoint, have become uncaring, indifferent, almost irrelevant, and certainly detached almost entirely from the reality we, as patients, experience. They think they are "accessible" but have stopped hearing patients describe the roadblocks "the system" has put between them and their doctors. They live in some sort of mythical world, giving out advice that may have worked 20 years ago but is disconnected from life as we live it today -- and then blame patients for being "non-compliant" with advice that seems so great to them and, to us, so irrelevant and bizarre that it is not even worth challenging.

And, they don't really like challenges. And, if challenged, they say "Well, there's nothing we can do about that. We tried. We're still trying. But that's just the way things are. That's someone else's job."

Their advice is like a financial analyst's advice - "To get ahead, just put $200 a week into savings and don't touch it, and watch it grow!" or "Just make a budget and live with it!" or a time-planner's advice: "Just figure out what you have to do over the next week, make slots for the time, allocate the time, and just live with it!" or a wellness consultant "Just eat less, exercise more, and eat the right food, and take an hour off in the middle of the day to commune with nature and relax, let go of that stress!" or a child-development specialist "Just be sure to remind your children to do their homework, and provide them a quiet work space without distractions or noise to work in."

Hello, reality to consultant? Hello? Who exactly are you talking to?

And, I fear, the same is true for education. Courses that may have made sense in one world have stayed the same while the world changed, and the course content is no longer aligned with the real world as experienced by the students. Or, the expectation of the professor or Attending physician faculty member is hopelessly out of date and no longer aligned with the larger overall picture and reward system that the students have experienced and been shaped by all their lives.

"Shut up and put up with it, there's nothing you can to that will make it better, but a lot you can do to make it worse for everyone!" is the message their behavior indicates they have received consistently throughout their lives. Like the Hemoglobin A1C test for diabetes, which reveals the last several months blood sugar level regardless where it is today, the conditioned behavior of the students speaks volumes to what the school system is actually teaching them to be.

In this model, it is not the students who have changed so much as the educational system that has changed. Maybe, over-extended teachers at all ages, and over-extended parents have simply rewarded "shut up and don't cause trouble" as the best they can hope for or strive for anymore, and the students, being good students, have learned their "place" in "the system."

In the book Complications, Atul Gawande, MD, devotes a chapter to the taboo and impolite question of when good doctors "go bad," and of how many years it can take the other doctors -- the ones who keep seeing incidents that raise red flags about the one who has "lost it" -- to do anything effective about it. The same is true for some college professors, especially those with tenure, as I've experienced personally: one would almost have to murder some Dean's child in class to actually get noticed by a system that is either effectively blind, or effectively dysfunctional at taking action to repair itself -- which, at the receiving end, amounts to the same thing.

These problems are "of measure zero" to the high-up people who run things, it seems. Their behavior, from the outside, is identical to what you'd get if they didn't care what pain their system was causing.
I pick those words carefully, because the reality is often even more baffling - the people "on top" do care, a lot, but do not, as they perceive the world, "run things." In fact, they find their hands tied at every step and every turn, and their initiatives resisted and rejected by the same "system."
So, it turns out, no one is running the system any more.

But, if you try to change "the system" it fights back, as John Gall points out so well in his profound and hilarious book "Systemantics." So, something is running the system. But what?
It turns out that "the system" is now running itself.
As systems tend to do, the system, once our creation and slave, has now become the master, and is dictating what everyone in it, including those at "the top", is now allowed to do. We didn't even realize that systems could do that, but it seems increasingly clear that they can, and do.

I gave a very simple illustration of this before, in "Controlled by the Blue Gozinta", showing how simply filling a glass with water sets up a feedback loop that actually is in control, as it becomes as correct to say the water level is controlling the hand as that the hand is controlling the water level.
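To see how little machinery that takes, here is a minimal sketch of the same loop with made-up numbers. The pour rate depends on the water level, and the water level depends on the pour rate; nothing in the equations says which one is "the controller":

    # Toy model of filling a glass: the hand pours in proportion to the
    # gap between the current level and the target (proportional feedback).
    # All numbers are invented for illustration.

    target = 10.0   # desired water level, cm
    level = 0.0     # current level, cm
    gain = 0.5      # how hard the hand reacts to the gap
    dt = 0.1        # time step, seconds

    for step in range(101):
        pour_rate = gain * (target - level)   # the level drives the hand...
        level += pour_rate * dt               # ...and the hand drives the level
        if step % 25 == 0:
            print(f"t = {step * dt:4.1f} s   level = {level:6.3f} cm")

    # The two lines in the loop body are symmetric: each variable is
    # "controlling" the other. The glass fills and the pouring stops
    # without either party being the boss.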

But our educational system has gone into the state I call "M.A.W.B.A" - for "Might As Well Be Alive". It acts like it is alive, with a mind of its own. It offends many people's sense of what "life" is to call it alive, but it follows all the rules my Biology 101 textbook uses to define "life", except for having DNA.

So, we should accept that unexpected result at face value and say, ok, our ideas about what "life" is are out of date. Apparently "systems" can become "alive" when our backs are turned. We stir the coffee in the cup and get a nice vortex or whirlpool in the middle, and then, to our shock, the coffee says "Thanks for the jump start, Joe!", spits out the spoon, and starts maintaining the whirlpool on its own. This kind of "life" or "MAWBA" seems to be just waiting around for an excuse to join the game.

It's as if we don't have to "create life" -- it's already out there waiting to be born as soon as we make a suitable vessel for it. Wow.

That's kind of interesting. You can get that with "solitons," waves that, once started, just keep on running forever, but they are passive and remain in their non-linear matrix. These MAWBA life-forms can get up, walk over to the wall socket, examine the situation, rip apart the blender, connect the cord to themselves, plug themselves in, and start drawing power.

Corporations are MAWBA. Our Educational System is MAWBA. Our Healthcare System is MAWBA. The teachers and doctors didn't change what they were doing. The administrators didn't change, but the emergent system changed, came alive, and took over running things, thank you. Neither the teachers, nor administrators, nor doctors, nor students, nor patients are in charge any more. It's the premise of the movie Terminator: "Skynet has become self-aware, taken over, and shut us out."

These days, maybe Northwest Airlines' ability to control its number of canceled flights is MAWBA, or GM's ability to control its own direction and future, or the Mideast situation -- all MAWBA, with no one, no person, no group of people, in charge any more, while everyone blames everyone else, thinking this must surely be "caused" by some bad people somewhere, because what other explanation is there?

Indeed. That is the question, isn't it.

If you find it more comfortable to say it's not "alive," but you can still accept the model -- that it has perception, uses energy, adapts to its environment, and even starts tinkering with its environment to adapt the environment to itself -- great. Come up with some other word for behavior that I do not associate with non-living things. It is self-aware and self-protective. And it is a lot larger than we are as individuals.

That kind of changes what sort of interventions into health care or education or politics might work. This is way beyond "feedback" or "reciprocal determinism" or even "system dynamics". This is a whole new ballgame, a whole new way of looking at "Life Science."

Maybe this model, however bizarre, has better predictive value than our old models.

It seems to me to be worth checking out, because we're not getting too far with the old ones.

So, if something "acts like it has a mind of its own", maybe we should accept that at face value for the moment, regardless how bizarre it is, and ask "OK, then, suppose it did have a mind of its own. What would our next step be then?"

I need to reflect on that. Maybe the answer is simply: "Try to make contact with it. Maybe we can negotiate a different solution that works better for both of us." I certainly wouldn't rush in with guns blazing. Lack of visibility may cut both ways. It may be as unaware of us as we are of it.

I think it was Lewis Thomas (MD) who noted that if our body's cells could manage to talk to "us", the consciousness in here sharing the space with them, that there would be very little in common to talk about. We worry about taxes, acceptance to college, the War, elections, interpersonal relationships, job security. Cells have no equivalents.

My own observation, or contribution to that discussion is this: we actually do have one thing in common, at any level or scale: the nature of control itself. Every level of life that becomes self-aware wants to repair itself and survive. To do those things it has to, above all, maintain order, but it has to be dynamic order, not rigidity like an ordered crystal of salt. Dynamic order and adaptability to changes in the environment are keys to survival. That means, when the world changes, when the "cheese moves", this news has to make it up to the top, somehow, and adjust the prior strategy. This is a basic problem of cybernetics, and is true at every level.

So, we can talk about that issue with any system. What's the best way to maintain order, and still be flexible and capable of learning and adapting? We all face that problem.

In fact, we all seem to face it in the same context -- as part of a greater chain of being, with "us" being just some small bit-player in something much larger than us that's going on, was going on before we got here, and will still be going on after we leave.

We are a nested hierarchy of systems of systems. That is also a common problem for us all, at any level. Our freedom of action is constrained by that reality. How do we cope, align with larger priorities, and still get our own work done? That's the core question we share.

Monday, July 30, 2007

So has everyone simply stopped thinking?


A few recent posts have been about my observation, in my class, that more students than I expected weren't engaged. I was aware that I might be over-generalizing.

But, in the last two weeks, while finishing my last paper and dealing with a hospital stay, I ran across two other authors with similar recent observations - one for undergraduates, and one for medical residents. They both are equally puzzled as to whether they are just noticing this, or whether something real has changed in the just last few years.

Since the early Greeks, at least, every generation seems to feel the next generation isn't really doing very well, so we have to start with some skepticism.

But, for what it's worth, here's a brief summary of two recent books.

My Freshman Year: What a Professor Learned by Becoming a Student is by a cultural anthropology professor (writing under the pseudonym Rebekah Nathan), who put it this way:

After more than fifteen years of university teaching, I found that students had become increasingly confusing to me. Why don't undergraduates ever drop by for my office hours unless they are in dire trouble in a course? ... How could some of my students never take a note during my big lecture class? ...

Are students today different? Doesn't it seem like they're .. cheating more? Ruder? Less motivated?... Why is the experience of leading class discussion sometimes like pulling teeth? Why won't my students read the assigned readings so we can have a decent class discussion? [emphasis added].
Here are some excerpts from the book How Doctors Think, by Jerome Groopman, M.D. Dr. Groopman "holds the Dina and Raphael Recanati Chair of Medicine at Harvard Medical School and is chief of experimental medicine at Beth Israel Deaconess Medical Center in Boston. He has published more than 150 scientific articles and is a staff writer at The New Yorker."

The idea for his book came to him, as he describes it, in 2004. He continues:

I follow a Socratic method in the discussion, encouraging the [medical] students and [medical] residents to challenge each other, and challenge me, with their ideas. But at the end of rounds on that September morning I found myself feeling disturbed. I was concerned about the lack of give-and-take among the trainees....[they] all too often failed to question cogently or listen carefully or observe keenly.... Something was profoundly wrong with the way they were learning to solve clinical puzzles and care for people. [emphasis added]
He also asks himself if this isn't just the same old intergenerational bias, and concludes:
But on reflection I saw that there were also major flaws in my own medical training. What distinguished my learning from the learning of my young trainees was the nature of the deficiency, the type of flaw.
Dr. Groopman goes on to consider whether this is the fault of efforts to follow preset algorithms, like computers, instead of actually thinking, or of "evidence-based" thinking that is linear and algorithmic, and incapable of going outside the box when the situation calls for it.

I have further thoughts, but those will wait for another post. At least I am in good company in thinking that something very important has just changed in our youth.

A good epidemiologist would next wonder: how widespread is this? Is this also true in Europe? In Asia? In Africa? Only in North America? In Canada? Is this something where we can go back, reanalyze existing data sets, and trace an "epidemic curve" to see when it began and whether it has peaked or is still rising? And, of course, if it is real, is this a change in the students, or a change in the way the older generation perceives students?

And, finally, with the "Where?" and "When?" nailed down, we could look at "Why?" and "How?" and then work our way to "What to do to fix it."


Wade

Monday, July 16, 2007

When and how should we question authority?

A New York Times piece today is relevant to thinking about Toyota's Production System, as well as to topics of mindfulness, high reliability, and how to teach and reach new MPH students, which I discussed a few weeks ago in "What I learned at Johns Hopkins last week," where I bemoaned the fact that the students "just sat there, unresponsive."

The Times article, by Norimitsu Onishi
Japan Learns Dreaded Task of Jury Duty
NY Times July 16, 2007

Japan is preparing to adopt a jury-style system in its courts in 2009, the most significant change in its criminal justice system since the postwar American occupation. But for it to work, the Japanese must first overcome some deep-rooted cultural obstacles: a reluctance to express opinions in public, to argue with one another and to question authority.
Well, that certainly sounds like the class I was in. What insights can we gain from this cross-cultural view?
They preferred directing questions to the judges. They never engaged one another in discussion. Their opinions had to be extracted by the judges and were often hedged by the Japanese language’s rich ambiguity. When a silence stretched out and a judge prepared to call upon a juror, the room tensed up as if the jurors were students who had not done the reading.
Well, in my case it's likely that, literally, the students had not in fact done the reading and were in no rush to call attention to themselves and invite a follow-up question that would reveal that fact. And they were reluctant to ask a question that, all by itself, would show they hadn't done the reading.

One more snippet is worth quoting:

Hoping for some response, the judge waited 14 seconds, then said, “What does everybody think?”

Nine seconds passed. “Doesn’t anyone have any opinions?”

After six more seconds, one woman questioned whether repentance should lead to a reduced sentence....

After it was all over, only a single juror said he wanted to serve on a real trial. The others said even the mock trial had left them stressed and overwhelmed.
So, I for one will be watching with great interest to see how this evolves.

One point I have to note regards our country's history of trying to implement abroad things that work for us at home. Sometimes we seem, like a teenager with our 230 years of experience, to be confidently giving advice to 5,000-year-old civilizations -- akin to a 2.3-year-old trying to advise his 50-year-old parents on how to run the household.

Maybe things are as they are for a good reason. Maybe messing with a cultural system we don't even pretend to understand, and casually planning to replace one part of their system with one that "works for us," will have, shall we say, "unintended consequences."

As I said there, the classic paper in this field is Jay Forrester's congressional testimony:
"The Counterintutive Behavior of Social Systems",
http://web.mit.edu/sdg/www/D-4468-2.Counterintuitive.pdf

Quoting the abstract:

Society becomes frustrated as repeated attacks on deficiencies in social systems lead only to worse symptoms. Legislation is debated and passed with great hope, but many programs prove to be ineffective. Results are often far short of expectations. Because dynamic behavior of social systems is not understood, government programs often cause exactly the reverse of desired results.

I am deeply concerned not just about this context-blind approach to trying to walk in and transform existing cultures -- something that, as far as I can tell, we are not very good at -- but also about extensions of this mental model to deciding we are going to start tinkering with DNA.

My daughter recalled a conversation she saw quoted with China's former Chairman Mao meeting with the head of France around 1980 or so, and being asked what he thought of the French Revolution. Mao's response was "It's too soon to tell."

I can't help but note that W. Edwards Deming came up with key quality improvement ideas decades ago in the USA, which was totally uninterested in them, so he went to Japan, where he was welcomed as a hero and passed along ideas adopted by Toyota that have directly led to Toyota's impressive performance. So, maybe Japanese culture had some positive aspect to it that we should be careful not to damage when adding our new "jury duty" feature.

The systems literature shows that it is generally impossible to change just one part of a complex living system without impacting all the other parts. Living things are not machines, with sub-assemblies we can just remove and replace with the latest version. This has the feeling of someone removing the propeller from a small private plane and installing a jet turbine in 98% of the cabin space, since "jets are better than props." Hmm. Not always.

Maybe a better example, one I recall actually happening, is from the mid-1970s, when the US was complaining about Japanese "barriers to entry" blocking US car sales in Japan.
GM was offering a car that had the steering wheel on the left side (the Japanese, like the British, drive on the left side of the road and have steering wheels on the right side of the vehicle). Also, the cars were too large to fit down most alleys and many streets in Tokyo, too large to park anywhere, and guzzled gas that was running at ten times the US price. The car interiors were scaled for 6-foot Texans, not 5-foot Japanese. And the car had no place to put a bicycle in it for the rest of the commute once a parking place was found. The US attributed low sales to wrongful Japanese barriers to free trade. The reality was that most Japanese couldn't have used that car if you gave it to them for free. The basics of marketing once upon a time, when I went to business school, were "know your customer" and "be driven by what the customer values, not what you think they should value." In "lean manufacturing" this would be called "pull" or the "value chain."
But, then, we were too busy assuming things and talking to shut up and listen.
We were violating Japanese traditions from nemawashi (walking around and gaining consensus before taking action) to the "lean" concept of genchi genbutsu (going down to the floor to see for ourselves before making pronouncements from afar of what is wrong.)

As even Wikipedia realizes:
Genchi Genbutsu (現地現物) means "go and see for yourself" and it is an integral part of the Toyota Production System. It refers to the fact that any information about a process will be simplified and abstracted from its context when reported. This has often been one of the key reasons why solutions designed away from the process seem inappropriate.
So, I'm not sure this particular government policy has a learning loop built in, and won't explode in our faces when we turn it on. Maybe this has been deeply considered. Maybe not.

If the objective is to damage Japan's culture and gain a competitive edge, or at least remove their edge over us, then I suppose random tinkering might be a good idea. If the objective is the much harder task of improving the functioning of a 5,000-year-old civilization, it might be good to be mindful of any indications that our mental model doesn't match their reality -- and, when we see them, to stop what we're doing and update our model with more current information. That's the key to high-reliability performance and to avoiding nasty surprises. The article gives no indication that the policy implementation is contingent on its actually working in practice when implemented.

In the classic PDCA (Plan, Do, Check, Act), there is that "C" step, "check" that what we did had the desired effect, not an unexpected contrary effect, in case we missed some crucial fact.

That's not a bad model.


W.

Monday, July 09, 2007

The tipping point concept of non-transitivity




(Above: picture of a set of 3 non-transitive dice from the Grand Illusions website.)

What I'm seeing is not that people can't "think big", because they can. The US President can go from tying his shoe to considering Armageddon in a heartbeat. We all are free to consider BIG problems or TINY problems, and the "auto-zoom" feature of our brains makes whatever we're considering fill our mental screen.

So, it's easy to be misled by small examples into thinking they're BIG issues. We don't seem to come with "ground wires" that keep our feet on the same ground.

That's probably a lot of what goes on in my favorite Snoopy cartoon where he's lying on top of the doghouse and thinking:
Did you ever notice
that if you think about something at 2 AM
and then again at noon the next day
you get two different answers?
But this morning I'm focused on why it is that a loop is so surprisingly hard for people to grasp.

I think it's not the wider view or scale, because people can do that "zoom" so effortlessly they don't even see it happen.

I think it's that
  • The value of "constants" changes with scale, and
  • the relative ordering is non-transitive.
People aren't overly baffled when what looks like a short-term great idea turns out, in the long term, to be a terrible idea. As Dennis the Menace said, standing in a corner for punishment, "How come dumb ideas look so great while you're doing them?"

But each time people run into this, it's like suggesting to a Labrador Retriever that it might be time to go for a walk. "Oh, my God! Yes! A Walk! What an astonishing idea!" (Thank you Dave Barry for that thought about Labs.) The idea is visible, and logical, and sensible, but somehow it fades away to nothing between uses. We keep forgetting it.

The most likely reason I can imagine for that is that there is a larger idea, a context idea, that this change-with-scale property violates or offends, and, as soon as our conscious mind lets go of it, the cleanup crew in our brain looks for where to put it back and, mystified by it, decides it must be trash, because it doesn't fit anywhere with something bigger we preserve.

That's the easy one.

The loop thingie is ten times harder for people to grasp, even once. Even when people see it, touch it, play with it, some part of their brain rejects the concept as "clearly false" and is preparing to disassemble and discard it as soon as possible to restore sanity and normalcy.

And the problem isn't with a loop. People grasp the concept "circle." People don't run screaming from a "hula hoop" toy. It's more subtle.

It's more like the sense when you put a twist in a loop of paper, ending up with a Mobius strip. This does not feel right. This is uncomfortable, and barely tolerable, regardless how many times you've played with them or tried to cut one apart lengthwise and failed.

But, no, it's even worse than that. It's an M. C. Escher type loop, with a twist in a dimension that we don't even recognize as a dimension when we TRY to focus on it with our full attention.

It's a property of the children's game "rock paper scissors" - where there are three rules:
rock smashes (beats) scissors
scissors cuts (beats) paper
paper covers (beats) rock

So, there is no "best" one. This turns out to be a much more widespread phenomenon than we would prefer. We see it but reject it. For most things with multiple dimensions, the term "best" is meaningless, but we're so attached to it, we want to make it true anyway. We can't get resolution if we admit that there is no "best mate" or "best house" or "best job" or "best employee" or "best candidate" or "best football team". If you compare them by pairs, each pair seems to have a "better", but if you make a map of "better" it has no top or "best", but instead goes in a loop, or more than one loop. It's uncomfortable and a little scary. Things we thought we could rely on turn out to be shaky. We try to forget it, and succeed. Over and over.
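You can watch the hunt for "best" fail in a few lines, using the three rules above. A minimal sketch: start anywhere, keep replacing your candidate with whatever beats it, and wait to arrive at the top. You never arrive:

    # The three rules above, written as a "what beats X" map:
    beats = {"scissors": "rock", "paper": "scissors", "rock": "paper"}

    # Naive search for the "best" move: repeatedly replace the current
    # candidate with whatever beats it, expecting to reach a top.
    seen = []
    move = "rock"
    while move not in seen:
        seen.append(move)
        move = beats[move]

    print(" -> ".join(seen + [move]))   # rock -> paper -> scissors -> rock

    # The chain of "better" closes on itself: the relation has no maximum,
    # so "which move is best?" is simply not a well-formed question.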





Here's the classic example - the "non-transitive dice" that Martin Gardner described decades ago, and that Ivars Peterson attributes to Bradley Efron, a statistician at Stanford University.


You can read about these, but you just have to buy a set, or build a set out of construction paper, and even then you can see it but you can't believe it. There is no best one of these 4 dice, or of the 3 at the top of this post. A beats B, B beats C, C beats D, and D goes around the end of the barn and comes back and beats A. It's a loop and it just seems wrong.
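You don't have to take my word for it -- the win probabilities can be enumerated exactly in a few lines. A minimal sketch, using one commonly published set of Efron-style faces (an assumption for illustration; the faces on Gardner's dice or on the Grand Illusions set may differ):

    from itertools import product
    from fractions import Fraction

    # One commonly published Efron-style set (other sets differ):
    dice = {
        "A": [4, 4, 4, 4, 0, 0],
        "B": [3, 3, 3, 3, 3, 3],
        "C": [6, 6, 2, 2, 2, 2],
        "D": [5, 5, 5, 1, 1, 1],
    }

    def p_beats(x, y):
        """Exact probability that die x out-rolls die y (no ties are
        possible between any pair in this particular set)."""
        wins = sum(1 for a, b in product(dice[x], dice[y]) if a > b)
        return Fraction(wins, 36)

    for x, y in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]:
        print(f"P({x} beats {y}) = {p_beats(x, y)}")

    # Each line prints 2/3: A beats B beats C beats D beats A.
    # Every die is dominated by another, so there is no "best" die.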

(So, warning, don't try to win money with these, because the loser will be convinced you must have cheated.)

Well, as always, you must be wondering what this all has to do with health care or the problems of the world. So, back a few days ago I posted an analysis I did of why so many airlines are running late these days. Included in that was this loop diagram, that I made up, that you can click on to zoom up to readable size.



This one is a circle of "blame", where the blame is "non-transitive." Each set of people, in their local world, can blame the next group down the chain for the problem, and is clearly "right" -- which would be OK except that the list of blamees goes in a full circle back to the "blamers."

Again, if you view one box and its neighbors at a time, it seems fine and makes sense. But if you put them all together in a circle, something seems to have gone terribly wrong.
Like this Escher print I love (from Wikipedia)


Or this one of stairs from Wikipedia.


There is wrongness there. But the wrongness is subtle.

That happens a lot more often than we allow ourselves to believe.

So, here is where this comes down to Earth. If people are going to learn about system dynamics and feedback loops, we need to get them to the point where very simple loops, like the ones shown above, are perfectly sensible and acceptable, instead of where they are now, where such loops suffer the mental version of tissue rejection.

The problems will not come to us. We must go to them.

There is no way to make a circle into a line, regardless how "linear" a little part of the edge looks when we simply elect to ignore the parts that go out of sight on each side of a narrow field of view.

Three facts seem to be true:
  • Closed circles of causality make us queasy.
  • Closed circles of "blame" make us and our legal system very uncomfortable.
  • Closed circles of "blame" that show that what's happening to us is our own fault coming back to haunt us with a lag time and amplification are just intolerable thoughts and are rejected out of hand instantly. That's crazy talk.

We need to learn to be able to see BOTH lines and circles of causality without becoming queasy and needing a drink.

Suggestions welcome as to how to do that.

Wade

-------
from Ivars Peterson's MathTrek

Gardner, Martin. 1987. Nontransitive paradoxes. In Time Travel and Other Mathematical Bewilderments. New York: W.H. Freeman.

______. 1983. Nontransitive dice and other probability paradoxes. In Wheels, Life, and Other Mathematical Amusements. New York: W.H. Freeman.

One possible source of nontransitive dice is toy and novelty collector Tim Rowett. He offers a set of "Magic Dice" along with rules for several games at http://www.grand-illusions.com/magicdice.htm. You can find out more about Rowett's collection at http://www.grand-illusions.com/tim/tim.htm.




Sunday, July 08, 2007

Institute for the Future

The Institute for the Future has an interesting weblog at http://future.iftf.org/. Two pieces I found in the current posts relating innovation, competitiveness, and operational structure are these:

Forward Thinking Cultures quotes a study by Mansour Javidan in the Harvard Business Review
In our study, Singapore emerged as the most future oriented of cultures, followed by Switzerland, the Netherlands, and Malaysia.

Interestingly enough, I heard about the Netherlands in a different context on my last trip to Johns Hopkins -- apparently US males are no longer the tallest in the world, and the average male in the Netherlands is almost 2 inches taller than the average in the US. That's probably related to other statistics showing that the USA, with less than 2% of its health care expenditures spent on prevention, has such generally poor health that the top quartile of Americans have worse health than the bottom quartile (by wealth) in the UK.
(see a related post by Ian Morrison of the IFTF, on US vs England comparisons here.)

I was also impressed many years ago, 1989 I think, when I met the head of healthcare IT for the Netherlands (at a place called BAZIS -- Mr. Bakker?) at a SCAMC meeting in San Francisco, and we had a chance to chat at lunch. They tend to have a few very large hospitals by US standards there, with 2,500 beds for example. Their health care IT system for the hospitals was available free for anyone who wanted to translate the documentation. It ran on some tiny Digital Equipment Corp. box, something like a MicroVAX. We were astounded, both that they had operational hospital systems completed, and that they could run on a small box instead of some huge IBM behemoth. What was the secret? Well, he confided, they did have to rewrite the operating system from scratch, because VAX VMS was trying to be all things to all people, and was so top-heavy that it had become close to nothing for anyone.

Wow, I think, just like Microsoft Vista today! To go to Hopkins I needed a cheap laptop in a hurry, and bought a $300 Dell with Vista pre-installed. The Circuit City salesperson told me I needed to drop another $50 for a memory upgrade first, because Vista would just barely fit in the 512 Meg it came with. So, I upgraded to 958 Meg of RAM. It comes with a 1.80 GigaHertz processor.

Let me put this into perspective. In 1990, as I recall, the only computers outside secret government facilities with 1 GHz processors were the ones on nuclear subs that decoded sonar signals. So I have here a computer with almost twice the processing speed of those incredible computers. How fast does it operate? Well, when I click on an application to open it, after a few seconds I often have to click again to get some idea whether it ever heard me or not.

This seems to be the corporate model in America that has gone from a philosophy and practice in the culture to actually being hard-wired "under the skin" of the latest computers. Let's make the beast so top heavy, so unwilling to say "no" to anything, that it can no longer say "yes" to anything because there's no room left to operate. Sleek and lean it is not. Shades of the entire US business model -- so preoccupied with fist-fights in the cockpit that no one is even looking out the plane windshield any more, having forgotten the original mission.

So, anyway, the tiny Netherlands, not laden with our historical success, saw fit to throw it all out and rewrite from scratch to do one thing extremely well, and it worked.

A second piece at IFTF that caught my eye was this:

Medical innovation: Could the U.S. slip?

The Washington Post's Amar Bakshi writes about the Artemis Medical Foundation, an about-to-open clinical research center in India. It's an interesting piece for two reasons. The first is its blunt critique of American medical research: Artemis founder Kushagra Katariya (formerly a professor at the U. of Miami) declares, "Opportunities to develop cutting edge [medical practices] are fast disappearing in…the United States."
The second is that the rest of the Post article, as quoted, more or less describes the same congestion effect I refer to above.

Slip, by the way? "Slip?!!" Pfizer just closed its doors in Ann Arbor. Falling and cracking its skull open is more like it.

Actually, I recently wrote about Little's Law here, in an analysis of why so many planes are so late these days. Pretty much all three of these situations -- computers, US research, and US airlines -- share the property of systems whose throughput drops rapidly towards zero if you try to jam too much stuff into the box and then try to make it go by whipping it harder. Then you get a vicious death spiral -- as the comic strip Dilbert puts it, "the whippings will continue until morale improves." The less that comes out, the more management tries to jam into every remaining opening. Well, see my causal loop diagram for how that simply cascades itself to death.

It's not a new problem -- I believe the Old Testament in the Christian Bible warns landowners against harvesting to the very edges of their fields, or basically, against taking all the slack out of the system. So, sorry, Futurists -- this isn't a future problem, or even a current problem; it's a very old problem that we are really, really good at putting off.

What we need in many ways is less "news" and more "olds." I guess that's much of what my weblog here is about - lessons from religion that we keep on trying to reinvent the hard way, and botching.

If my primary assumption is right, that we're in a scale-invariant hierarchical organization of Life, then evolution is carrying us around a helix or spiral, back to the same "point" each pass, but shifted in a perpendicular direction upwards each pass a little more. We're "turning the screw." In that case, it's easy to predict the future (although not the timing) -- just assume that we will keep on running into the same damn organizational problems at each new scale of size until we wise up and solve them generically. Them is us. Then is now. We're traveling in our own prop-wash.

Each of us is a walking testament to the fact that 6 trillion cells can work together as one emergent larger entity, so we know that this problem has a solution. Why don't the cells just rip each other apart, or go their separate ways, or compete to see who will be the king cell of the body and get to rule all the others and own all the energy supplies?

Getting N+1 units to operate together and produce an output superior to what N units alone can do is one of those core problems that scales up symmetrically. We keep on trying to put off dealing with that problem, ascribing the failure of "committees" to this or that thing, and never looking at this core problem. What are we doing wrong? This is important. If we solve this, we just increased mankind's innovative power by a factor of about 6 billion, or maybe 6 billion factorial, but in any case, way more than 25%.

We don't know how to share. We don't know how to play together nicely. These are not just "children's problems." These are make-or-break problems. While our scientists peer into deeper microscopes, their social base and funding is unraveling, because this "little" problem has not been solved. I'll assert that the whole "war on terror" wouldn't have come to the 9/11 attacks if we had solved this problem in the Mideast. There's a trillion dollars right there.

And health care costs? Amid the arguments about how insurance happens, we've lost sight of prevention, which only matters because it's way cheaper than repair. Maybe 100 times cheaper. But, prevention may require people to work together, whereas repair only needs one doctor. Since we refuse to address the working together problem, we're stuck with the repair problem.

Then, miracle of miracles, along comes Toyota discovering "lean" principles which, at their core, are pretty basic religious teachings of honesty, integrity, transparency, working for the long run, caring about each other, and working together. To quote another cartoon, "Doh!" How could we have known? I doubt that the little Andon cords and colored cards matter so much as the "care about each other" part. Maybe "competition to the death" is not our best strategy.

As T.S. Eliot, in the Four Quartets, said
We shall not cease from exploration
And the end of our exploring
Will be to arrive where we started
And know the place for the first time.
Wade

Thursday, July 05, 2007

Why are so many flights delayed?




Although my flight home from Baltimore made it, my flight there was canceled, and on the way back the two flights at the gates adjacent to mine were canceled.

Northwest Airlines, with a hub in Detroit, seems to have led the pack, with 14% of its flights canceled two weeks ago, stranding over 100,000 passengers.

An article in this morning's paper confirms that it's getting worse. It also notes something I realize I should have seen myself, since it's one of those scale-dependent thingies: the delays counted by airplane are nothing compared with the delays experienced by passengers. The airline calls it a 1-hour delay, but it causes a missed connection and an overnight stay, or even longer, waiting to get re-booked, because all the other flights are already full too, and you're not the only one who got bumped.
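Here's a back-of-the-envelope way to see why (a toy model of my own, not the MIT researchers'). If every flight on a route runs at the same load factor, a canceled flight strands a full planeload of people who must squeeze into the empty seats of later flights, and the number of flights needed to absorb them explodes as the planes fill:

    # Toy rebooking model (my own invention, for illustration): a flight
    # of capacity C at load factor rho is canceled, stranding C*rho
    # passengers. Each later flight on the route has C*(1 - rho) empty
    # seats, so clearing the backlog takes about rho/(1 - rho) departures.

    for rho in [0.50, 0.70, 0.85, 0.90, 0.95]:
        flights_needed = rho / (1 - rho)
        print(f"load factor {rho:.0%}: ~{flights_needed:4.1f} later flights to rebook everyone")

    # 50% full:  ~1 later flight soaks up the stranded passengers.
    # 85% full:  ~5.7 flights -- the last passengers wait most of a day.
    # 95% full: ~19 flights -- days, which is what the article describes.

The flight-level statistic records one canceled flight; the passenger-level reality is that somebody waits nineteen departures.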

Here are some numbers from the Times:

Ugly Airline Math: Planes Late, Fliers Even Later
New York Times
Jeff Bailey and Nate Schweber
July 5, 2007

As anyone who has flown recently can probably tell you, delays are getting worse this year. The on-time performance of airlines has reached an all-time low, but even the official numbers do not begin to capture the severity of the problem.

That is because these statistics track how late airplanes are, not how late passengers are. The longest delays — those resulting from missed connections and canceled flights — involve sitting around for hours or even days in airports and hotels and do not officially get counted.

Researchers at the Massachusetts Institute of Technology ... determined that as planes become more crowded — and jets have never been as jammed as they are today — the delays grow much longer because it becomes harder to find a seat on a later flight.

But with domestic flights running 85 to 90 percent full, meaning that virtually all planes on desirable routes are full, Cynthia Barnhart, an M.I.T. professor who studies transportation systems, has a pretty good idea of what the new research will show when it is completed this fall: “There will be severe increases in delays,” she said.

Over all, this could be a dreadful summer to fly. In the first five months of 2007, more than a quarter of all flights within the United States arrived at least 15 minutes late. And more of those flights were delayed for long stretches, an average of 39 percent longer than a year earlier.

Moreover, in addition to crowded flights, the usual disruptive summer thunderstorms and an overtaxed air traffic control system, travelers could encounter some very grumpy airline employees; after taking big pay cuts and watching airline executives reap some big bonuses, many workers are fed up.

If a flight taxies out, sits for hours, and then taxies back in and is canceled, the delay is not recorded. Likewise, flights diverted to cities other than their destination are not figured into delay statistics.

About 30 percent to 35 percent of Continental's passengers make connections between flights.

A spokeswoman ... added that many delays are caused by weather and thus do not reflect the airline's performance.

...That is a typical level of missed connections, but Continental’s flights that day were 89.6 percent full, so finding seats on later flights for some passengers was difficult.

Continental also has a new system that sends e-mail messages — and, beginning next month, text messages to cellphones — informing connecting passengers on late flights how they have been re-booked.

It also is moving ticket kiosks inside the security area so passengers can print new boarding passes without going out to the main ticketing area or having to wait in line for a gate agent to help them.

The system, however, re-books people on the next available flight with a confirmed open seat and that is not always as soon as people might expect. Some are told their new departure is in three days.

“That causes them to go berserk,” said David Grizzle, a senior vice president at Continental. Often, on standby, people get out sooner, he said.


I also noticed that Northwest Airlines had attempted to solve this problem by institutionalizing the response. They now had entire special carts to make it easier for large numbers of passengers to attempt to make new bookings faster.



From the point of view of "lean" practices, and the Toyota Production Model, this represents one of the worst wastes possible - trying to become more efficient at doing work that shouldn't even be done in the first place. The risk is that the "workaround" will partly work, and then dig in for the long haul and become part of the new "normal" process, replicated 500 times in other places. New vendors will spring up to build even "more convenient" re-booking carts, and to lobby for sustaining this practice.



What might be done instead?

The first thing is to identify what the problem is. The problem is not thunderstorms or a feud between the traffic controllers and the FAA, although those contribute. The problem here is one of those pesky physical laws that I've been writing about, and what "the Yarn Harlot" pointed out as man's persistent desire to make "ten less than nine" and the delusion that maybe it just hasn't been rotated the right way yet and somehow this will "fit."

The law in question is called "Little's Law," and it looks innocent enough. It ties together work in process, throughput, and cycle time; combined with the way queues behave, it means the "cycle time" to process one unit (or passenger) goes up towards infinity as the system becomes full, and goes up much faster if there is more variability in the processing time for any individual step.

I can't easily find an authoritative textbook online, but here's the key info from a wafer fabrication newsletter, FabTime. (The same law applies to semiconductors as to passengers. WIP = work in process.)

The relationship between cycle time and WIP was first documented in 1961 by J. D. C. Little. Little’s Law states that at a given throughput level, the ratio of WIP to cycle time equals throughput, as shown in the formulas below:

Throughput = WIP / Cycle Time

In other words, for a factory with constant throughput, WIP and cycle time are proportional. Keep in mind that Little’s Law doesn’t say that WIP and cycle time are independent of start rate. Little’s Law just says if you have two of these three numbers, you should be able to solve for the remaining one. The tricky part is that cycle time and WIP are really functions of the start rate.
Oh, and that tricky part is the devil in the details. What this really says is that as you try to jam more and more stuff through the same process, as it fills up the process starts to run into conflict and congestion costs, and the actual throughput starts declining rapidly, while "work in process" (passengers waiting for a flight) climbs towards the sky.

FabTime's tutorial shows a graph of that effect.

What this shows is that not only does the "cycle time" expected for a unit in this system (a passenger) to be processed (get home) go up, it goes up rapidly, to multiples of the time it would take in an uncrowded system. For a very consistent, low-variability process (the blue line), trying to operate at about 90% of full capacity will make the process time six times what it would be at 10% full. If the process has more variability (thunderstorms), this knee can be reached much sooner -- at 65% capacity.

This is as true for service work and management work as for producing widgets or silicon wafers. Past a certain point, trying to shove more work through the system only slows down everything. So, the right thing to do is to find the sweet spot where the most work actually gets done, and resist the temptation to now try to fill every open space with more work. For wafer fabrication, this is about 85% "full". In other words, at the maximum throughput, 15% of the system will be empty, just "sitting there". This drives management crazy.
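If you can't get to FabTime's graph, its shape can be reproduced from a standard queueing formula. Here is a minimal sketch using Kingman's approximation for a single queue; the variability numbers are illustrative guesses, not calibrated to any real factory or airline:

    # Kingman's G/G/1 approximation for time spent waiting in queue:
    #   wait ~= (rho / (1 - rho)) * ((ca2 + cs2) / 2) * service_time
    # where rho is utilization and ca2, cs2 are the squared coefficients
    # of variation of arrivals and service. Cycle time = wait + service.
    # Little's Law (WIP = throughput * cycle time) then says WIP piles
    # up in exact proportion.

    def cycle_time_multiple(rho, ca2=1.0, cs2=1.0):
        """Cycle time as a multiple of raw service time at utilization rho."""
        return 1.0 + (rho / (1.0 - rho)) * ((ca2 + cs2) / 2.0)

    for rho in [0.10, 0.50, 0.65, 0.80, 0.90, 0.95]:
        steady = cycle_time_multiple(rho)                   # low variability
        stormy = cycle_time_multiple(rho, ca2=4.0, cs2=4.0) # thunderstorms
        print(f"utilization {rho:4.0%}: x{steady:5.1f} steady   x{stormy:6.1f} stormy")

    # At 90% utilization the steady process already takes 10x its raw
    # time, and the high-variability one 37x -- the "knee" arrives far
    # sooner when thunderstorms (variability) enter the picture.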

What typically happens is that people don't believe this result, even if they know it. (The delusion factor is strong, and surely 10 can be made less than 9.) None of the outside stakeholders, or visiting brass from the parent company understand this law, and a piece of idle equipment is surely a mistake and needs to be doing work! Or so it seems.

So, now, our friend, the feedback loop, comes into play. Once this knee in the curve is passed, and output starts to slow down due to congestion, the typical response of management is to go ballistic and push harder, trying to jam even more work through the system. This slows the system down more, which leads to management pushing even harder and starting even more work in process.

Then psychosocial factors come into play. Management becomes convinced that the employees must be goofing off, and become irate. "Surely that is true, because the total throughput is going down!" they think. Meanwhile, the swamped employees, seeing more in their in-boxes than ever and becoming exhausted trying to deal with all the internal delays at getting the simplest thing done, also become testy and hostile.

Meetings are held to discuss why so little is getting done, which takes more time, further slowing down the process. Labor strikes. Management retaliates, further cutting production and sales and revenue, which makes stockholders even more desperate to make up the losses with even more bookings. We end up with a positive feedback loop that rises until something breaks.
That's where it appears to be today.



If you click on that diagram, you can zoom it up to a readable size.

(That diagram is most of a Causal Loop Diagram, of the kind developed by System Dynamics folks at places like Worcester Polytechnic Institute, or by MIT Professor John Sterman, author of the tome Business Dynamics. It was drawn with Ventana Systems' Vensim software, which could take numbers and actually run the simulation to see how this unfolds. This sort of reasoning is described by Peter Senge in his book The Fifth Discipline, where he uses the example of a beer production and distribution system to show how things can fall apart, even when everyone is doing a good job as they see it, because of "system factors" and "feedback loops." People interested in that would be interested in the whole System Dynamics Society.)
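Even without Vensim, the shape of that loop shows up in a toy simulation. This is my own sketch with invented parameters, not the actual equations behind the diagram:

    # Toy "push harder" death spiral. Invented parameters; only the
    # shape of the behavior matters, not the numbers.

    capacity = 100.0   # healthy weekly throughput
    demand = 80.0      # steady new work arriving per week
    backlog = 0.0      # work in process waiting to be finished

    for week in range(1, 13):
        if week == 3:
            backlog += 50.0                    # one bad thunderstorm week
        # congestion overhead (expediting, meetings, rework) erodes capacity
        effective = capacity / (1.0 + 0.004 * backlog)
        # management reacts to the pile-up by pushing MORE work into the system
        starts = demand + 0.2 * backlog
        done = min(backlog + starts, effective)
        backlog += starts - done
        print(f"week {week:2d}: finished {done:5.1f}, backlog {backlog:6.1f}")

    # With the management-pressure term (0.2 * backlog), the one-week
    # shock never drains -- each loop around, pushing harder slows things
    # further. Delete that term and the same system quietly digests the
    # same shock.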

This occurs in a great many companies today. Unfamiliarity with Little's Law loads the gun, psychosocial factors cock the hammer, and every new thunderstorm or glitch pulls the trigger, as everyone involved -- stockholders, management, labor, and passengers -- blames everyone else for what is really a "system problem," not a "bad person" problem.

When this kind of thing happens to any health care delivery system, such as a hospital, it becomes a public health problem. When this kind of thing damages nerves and business effectiveness, which leads to more pressure, which leads to obesity and heart attacks and layoffs and no health insurance, it becomes a public health problem.

This is the kind of "systems thinking" competency I'm hoping the new ASPH Core MPH Competencies will lead to, so people can see this effect and head it off at the pass.

The biggest single controllable step here is to lower the blame factor, and realize we're all in this together. Myths and delusions and a norm that management's job is to crack a whip and push harder and harder come into play in a bad way when it is the system that is slowing down, not the employees. (Take it up with God, I guess, if you don't like Little's Law.)

Going around and around this loop is one of the major factors winding us all too tight these days, both humans and corporations. Maybe understanding what we've run up against can help defuse it and lead us back to a saner world for everyone.

At least, that's what Public Health hopes, in my view.

Wade

Wednesday, July 04, 2007

A question of baseline


One possible scientific working hypothesis about religion is that it's just remarkable how many people are delusional. Another is that there is something under all that smoke, regardless how poorly it has been resolved and how artfully it has been decorated.

Given my understanding of both humans and feedback loops and psychology, I can see the power of shared myths to persist and feed and grow, as effectively a living thing. On the other hand, given what I've learned about the computational power of massively parallel "connectionist" architectures, and neural networks and human vision models and computer vision models, I tend to think that there is indeed a very real potential for emergent power in crowds to detect signals that any individual would miss.

A species as a living thing may perceive a different world, dimly but correctly, that is not accessible to individuals in the species -- just as your brain perceives a world that would not make any sense to an individual neuron. If you and your neuron could meet for coffee, there's not much you could talk about in common, except for things like the problems with control, and how hard it is to get good help these days. Taxes, defense policy, immigration, college applications are simply not sensible on the scale of one neuron.

Under a slight extension of the general Cosmological Principle ("there is nothing special about where we are, when we are, or what scale we are") we have to assume that this principle of "insensible larger concerns" is true as well for people-level thingies (us). That is, there may be a lot going on that not only do we not know about, but that, given our size, we will never know about. In fact, if we use some sort of iterative reasoning, and apply this Cosmological Principle yet again, there must also be some things that even earth-scale species, regardless how electronically wired in the future, will never be able to comprehend. And so on, who knows how far upwards. Maybe in this universe even Galactic-scale (10 to the tenth stars) thingies will have galactic-cluster events that they will never be able to comprehend.

So far, I think that is pretty solid scientific reasoning. In short it is a more reasonable hypothesis that we humans are permanently shut out of certain knowledge, due to our finite size, than that we're almost gods. Yes, we can wire up the blogosphere and let the huge connectionist engine start cranking and discovering Things that it can respond to, but it can never really tell us fully what those Things are, any more than we can explain a 1040 tax form to a neuron. It's a simple bandwidth limit. None of us have 500 years to listen to the details, for starters, so anything that takes over 500 years to explain is out. It's a very strong assertion to say that that set is empty, and a much weaker assertion to say that there may be stuff in that set.

These days, most of us cannot and will never grasp things that take more than 5 years to explain, except in very narrow tertiary specialty areas. In business and politics, sometimes it seems that 15 minutes is the cut-off, and any concept that takes over 15 minutes to explain is simply in the "insensible" or "incomprehensible" set. I think that political advertising assumes that anything that takes over 30 seconds to explain to the public might as well not even be attempted.

This cut-off frequency to the full spectrum of knowledge in turn must result, by signal theory, in some rather major distortions in what it is that we do think we see with what limited capacity we do have. A classic result in radio astronomy, for example, was Ron Bracewell's realization, around the 1950s I think, that the best detail that could be resolved, even with infinite observations and averaging out the noise, was limited by a resolution of lambda over D, where lambda is the wavelength being monitored and D is the diameter of the radio telescope "dish" or "grid" or "lens" or "mirror" being used. Similarly for eyeballs -- if you want eyes like a hawk, you need a wide-diameter pupil, and humans just can't go there with our tiny eyeballs.
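For a feel for the numbers, here is the lambda-over-D limit worked out for a few cases (the specific diameters and wavelengths are just assumed round numbers):

# Angular resolution ~ lambda / D, in radians (smaller is sharper).
examples = [
    ("human eye, 3 mm pupil, 550 nm light", 550e-9, 3e-3),
    ("hawk-like eye, 8 mm pupil",           550e-9, 8e-3),
    ("radio dish, 21 cm line, 64 m dish",   0.21,   64.0),
]
for name, lam, D in examples:
    print(f"{name}: about {lam / D:.1e} radians")

The wider pupil wins, and even a 64-meter radio dish resolves less detail than an eyeball does at optical wavelengths, because the wavelength is so much larger.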

So, the question comes then: are the "details" important or negligible? This is worth stopping to ponder. I spent 5 years at Parke-Davis pharmaceutical R&D division generating cross-sectional images of blood vessels to assist in research on coronary disease. With microscopes as with people, we had a choice of picking a high-power lens, and seeing details down to individual cells' staining structure, or a low-power survey lens, and seeing the big picture, but we couldn't do both. It turned out that was a critical problem and gave wrong answers. We needed both the details and the big picture to grasp what was going on and how and why. So I developed techniques to take many high-resolution images and assemble them into a wide field-of-view montage, and then we finally had something the computer could analyze and get meaningful results that corresponded to biological truth.
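The tiling idea itself is simple, even though the actual 1990s pipeline had to handle alignment and shading problems I'm glossing over here. A bare-bones sketch of the assembly step (illustrative only; random arrays stand in for the captured microscope fields):

import numpy as np

tile_h, tile_w, rows, cols = 64, 64, 4, 6
montage = np.zeros((rows * tile_h, cols * tile_w))
for r in range(rows):
    for c in range(cols):
        field = np.random.rand(tile_h, tile_w)   # stand-in for one high-power image
        montage[r*tile_h:(r+1)*tile_h, c*tile_w:(c+1)*tile_w] = field
print("montage shape:", montage.shape)           # wide field of view, full detail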

The image at the top of this post is one such montage I made in 1995. (repeated here).

There are some other examples of that sort of work on my quantitative biomedical imaging website, at http://www-personal.umich.edu/~schuette.

So, at least in that one case, yes, the details mattered. Hmm. OK, then in at least some cases, the details matter and change the answer. Do we know anything at all about which cases those might be? Well, it will certainly include cases where the details add up to more than the low-resolution, wide-field-of-view facts. This could easily include feedback loops, where those tiny details (like, say, a persistent 5%/year drop in the value of the US dollar) add up or compound over the span in question, and end up dominating the computation of the final value in your dollar-denominated bank account.
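The compounding arithmetic is worth a quick check -- a 5%/year "detail" that any single quarter's report would round away:

value = 1.00
for year in range(1, 21):
    value *= 0.95                     # a persistent 5%/year slide
    if year in (1, 5, 10, 20):
        print(f"after {year:2d} years: {value:.2f} of original value")

After a decade the "negligible detail" has eaten 40% of the account.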

The distortion caused by cutting off a portion of the spectrum is also not negligible. You can't just chop off a signal and get the middle part you hoped for; instead you get large-scale distortion that is mathematically the Fourier transform of your chopping function.

Here's an example:

Skipping the math: if you try to "look at" a point source, like a star, through a hole in a piece of metal, say, even if the point source "fits" in the hole, what you see will be distorted as shown on the bottom. You'll "see" a sort of diffuse, bell-curve-shaped source in the middle, surrounded by a dark ring, then a bright ring, and another dark ring, and yet another dimmer bright ring, etc.
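That ring pattern (the Airy pattern, for a circular hole) follows from the standard result I(x) = I0 [2 J1(x) / x]^2 with x = pi D sin(theta) / lambda. Here's a small numeric check using scipy's Bessel function; the hole size and wavelength are just assumed values:

import numpy as np
from scipy.special import j1

D, lam = 0.01, 550e-9                    # 1 cm hole, green light (assumed)
for theta in (1e-9, 5e-5, 6.7e-5, 1e-4, 1.2e-4, 1.6e-4):
    x = np.pi * D * np.sin(theta) / lam
    intensity = (2 * j1(x) / x) ** 2     # relative to the central peak
    print(f"angle {theta:.1e} rad: intensity {intensity:.4f}")

Reading down the printout, you can watch the intensity fall from the bright center to a dark ring, rise into a faint bright ring, and fall again.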

You cannot get around this, known in the optical domain as "diffraction". It's a law of physics and signals. If you look up diffraction in Wikipedia, you'll see another example of what you see looking through a square hole:

OK, wow. So the size and shape of the hole or "aperture" through which you are trying to look can dramatically change what you "see" or directly perceive or detect with film or an imager or a radio signal detector.

Now, that's not a fatal problem if you know your distortion pattern, because you can "back it out of the equation" and computationally figure out what shape actual signal must have been there to generate the signal you "measured".

That works for optical and radio astronomy and optics in general. For microscopes this is the "psf" or "point-spread function." You can use the magic of Fourier Transforms to undo the distortion and get a clean image of what you'd see without it, mostly.
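Here is a one-dimensional sketch of that "backing out" step -- a Wiener-style regularized deconvolution. It's my own minimal illustration, with an invented Gaussian PSF and made-up noise and regularization levels, not anybody's production pipeline:

import numpy as np

rng = np.random.default_rng(0)
n = 128
true_signal = np.zeros(n)
true_signal[40] = 1.0                          # a sharp "point source"
psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
psf /= psf.sum()                               # normalized blur kernel

H = np.fft.fft(np.fft.ifftshift(psf))          # kernel in the frequency domain
blurred = np.real(np.fft.ifft(np.fft.fft(true_signal) * H))
blurred += rng.normal(0.0, 1e-3, n)            # a little measurement noise

# Divide by H where it is strong; damp where it is weak (the +1e-4 term).
wiener = np.conj(H) / (np.abs(H) ** 2 + 1e-4)
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))
print("true peak:", true_signal.argmax(), " recovered peak:", recovered.argmax())

The "mostly" in the paragraph above lives in that regularizing term: wherever the PSF wiped a frequency out entirely, no amount of arithmetic brings it back.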

But here's our problem as a society. To me, the same effect applies to looking at the world through a finite aperture, or gap, defined by a limited set of "scales" and time-frames we are going to observe. So as humans, we are making observations on smaller and smaller scales of time, and on magnitude scales that tend to be about the same size as us, whoever "us" is (person, corporation, nation, culture).

But by blocking out larger-scale information (that we might call "context") and smaller-scale information (that we might call "negligible details") we are then bound, by the laws of physics, to be directly perceiving a distorted signal. The problem is, we don't realize we need to undistort it before paying attention to what it says.

How distorted will it be? Well, look at the point source viewed through the square hole. Pretty distorted.

The same thing is true of one-sided "holes": if we simply block out everything to the "right" of a point source with a sheet of metal, we'll still get distortion near the edge.

Hmm. So, if we don't realize we're cutting off all signals above a certain scale (slowly varying, long-wavelength) and below a certain scale (rapidly varying "details" of very short wavelength), we won't realize that we cannot help but see a distorted picture of the real world out there - the one we'd see without such distortion.

That brings me full circle, back to pondering what sort of "receiver characteristics" a massive array of people-shaped sensors over 1000 years might have. We can say something, with no further details, about what kinds of signals and patterns and frequencies it would be able, in theory, to "pick up" or detect, what it would be blind to, and what sort of distortions it would necessarily have.

If that society-shaped antenna "sees" something it resolves into a pattern it calls "God" or some of the other aspects of "religion", we need to reflect carefully before simply dismissing that data point as "noise". It is not at all obvious that it is receiver noise, and it is the worst abuse of science to discard a data point simply because we don't like it, or because it doesn't fit our preconceived notion of how things are and what "should" be there.

Besides, we don't have much opportunity to do such long-baseline (1000 year long) observations ourselves, so we should treasure the few we do have.

We know a few things with a fair amount of certainty. We know lambda over D will be a limit on resolution of details, unless it is computationally broken using a technique of hyper-resolution.

That means, in lay terms, that the wider the baseline diameter, the more we can potentially "see". The more diverse the observing group is, and the wider it is spread out along any dimension, the better the group can triangulate in on something and resolve how far away it is from us. A very wide baseline will let us sort out foreground from background. A totally uniform set of sensors will have zero resolution and be totally blind to telling foreground from background. Diversity matters, in a purely information-capture sense.

The question for Science, with respect to Religion, then, it seems to me, is not to be obsessed with the persecution of Galileo or Iowa's decisions about evolution, but to ask what this irreplaceable observational unit in our heritage may have seen that we, here, now, looking over a few-year window, could never possibly see.

Even lousy sensors, such as the cells in our retina, can give you a good picture of the world if you process the signal correctly, which is what much of our brain is hardwired to do. There is value in those low-grade signals, if processed cleverly and assembled into a big picture.

It's not a question of "right" versus "wrong". It's a question of baseline.

There is no scientific justification for discarding one of our longest-baseline observations, regardless how "bright" or "technical" the individual sensors in that array were compared to us. We are stuck in the narrow "now", and they have the advantage of a several-thousand-year baseline. Nothing we can do with gigahertz processors and PhD's can overcome that physical law of signal-processing theory. A crowd may see things our most brilliant scientist missed.

Wade

Tuesday, July 03, 2007

Scientific Method as a whole as a non-lean process

I was somewhat harsh on the "Scientific Method" in my last post, and think that justifies some clarification.

The methods that scientists use to work can be evaluated at multiple scales, and as with many things, the answer you get varies with scale.

On a one-on-one basis, the use of solid reasoning, backed by data, and reviewed by peers has a great deal going for it over arguments based on authority, emotion, sloppy or incorrect reasoning, and anecdote.

But, viewed from afar as an industrial process, the enterprise of "Science" doesn't seem to have updated its methods or approaches for many years. When I protest that it took 30 years for Science (big S) to recognize that genes operated collaboratively, some supporters of the current version of the Scientific Method might reply "See, it worked!"

Approaching this from the Toyota Production System or "lean" viewpoint, I have to agree that it worked, but ask if it really needed to take 30 years -- couldn't we have accomplished that in, say, 3 years?

Typically a response to that suggestion would be a combination of horror at the suggestion, along with a confident assertion that "it goes as fast as it goes, and nothing can be done about it."

Well, hmm. Let's take that as an unproven hypothesis and look at it, well, scientifically.

I remember when General Motors took 6 weeks to convert its assembly line from one model to another, and was trying to figure out if it could be reduced to, say, 5.5 weeks. Skeptics assured everyone it couldn't be done. Then someone looked at Toyota to see how long they took, and found it was something like 8 hours.

Again, hmm. Are we to believe that intellectual progress can theoretically be made regarding all aspects of life with one exception -- progress at improving the "Scientific Method" ? Or is it possible that the Scientific Method itself could use improvement for a new century?

I sense emotional stirring of defensive arguments such as "Sure, and the US Constitution could use improvement but it costs so much to open that debate and has such a risk that it could be made worse, that we don't want to even go there!" Or, as someone in class said "The line between Germany and France is arbitrary, but so much blood was spilt getting it where it is that it's better to just leave it be."

All that is fine, if we're the only game in town. However, as with General Motors, what if there were actual competition that wasn't hampered by these reluctances to change? What if some other nameless country managed to crack this "lean barrier" (like the sound barrier), and, while keeping Science dynamically stable, still got the response time and agility up by 50% as they practiced it? Then would the arguments suddenly shift or look less credible?

Peer review as practiced -- extremely heavily weighted toward the old guard -- and tenure at universities worked OK when it was OK to take 30 years to "see" something. But I think everyone agrees that the rate of change of the world is accelerating, and there comes a point, sooner or later, when 30 years is "not good enough." It's not good enough because the huge inertia of the Scientific Method As Practiced (SMAP), while great at removing high-frequency noise, is now also removing significant signals as well.

This comes home to roost regarding treatment of situations involving feedback loops in general, and goal-seeking feedback loops in particular. Most of the history of Science has been based on linear, isolatable, open-loop causal pathways, and that worked fine for building most equipment up through 1950 or so. It didn't seem to have any power at all to understand areas such as interpersonal relationships, the economy, spiraling tensions leading to war, or biological systems, but those were considered squishy or "soft sciences" and their importance denigrated by "real scientists."

But, now that all those basic linear machines have been developed, thank you, times have changed. The world has gotten much smaller, much denser, and much more than ever is living in the "built environment". We are trying to fly in our own wake, and most of what we see as huge problems on the horizon are of our own making. Next year, more than half of the world's population will live in cities, not rural areas.

Everything affects everything else. Nothing can be isolated and studied separately. All the critical processes have not just one feedback loop, but many, at many levels -- they are literally dense with feedback loops. All that violates the core assumption of the General Linear Model, which means that all statistics based on the GLM (which is 95% of what's used today) are null and void and not applicable as they stand, without modification to deal with feedback effects.
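A toy example of how badly a GLM-style analysis can misread a feedback loop (my own construction, with invented coefficients): a thermostat holds room temperature near a target, and we then naively regress temperature on heater output.

import numpy as np

rng = np.random.default_rng(1)
target, temp = 20.0, 15.0
temps, heats = [], []
for _ in range(500):
    heat = max(target - temp, 0.0)     # controller fights the current gap
    temp += 0.5 * heat - 0.1 * (temp - 10.0) + rng.normal(0.0, 1.0)
    heats.append(heat)
    temps.append(temp)

r = np.corrcoef(heats, temps)[0, 1]
print(f"correlation(heating, temperature) = {r:+.2f}")

The correlation comes out negative -- the data "show" that heating goes with cold rooms -- even though heating causes warmth, because the loop makes the heater fire hardest exactly when the room is coldest. Run cause-and-effect through a feedback loop, and the GLM cheerfully reports the wrong sign.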

How powerful can those be? Well, how different is a tornado or hurricane from a calm day? That's how powerful they can be. Meanwhile, most of our national policies continue to be based on linear thinking, so we hear people arguing over whether "prices drive wages" or "wages drive prices" as if the obvious feedback loop was not imaginable.

And that's a problem.

In many ways, it's made worse by Religion (as a social force) continually pointing out that Science is failing to deal with many socially important issues on which Religion has an opinion. As a result, now Science risks losing face and admitting that Religion was right on something, which is abhorrent to many.

But meanwhile, Science, as an institution, in the USA, is slowly losing ground and losing funding to precisely the forces that deal with issues that Science ... well... er ... um... hasn't gotten around to dealing with yet -- but hold on, in another 358 years or so, at the current rate, Science will have something to say about that.

Unfortunately for that argument, Science, at the current rate, probably has less than 358 years left of social acceptance for it and funding for it -- and maybe it is less than 40 years, or even less than 4 years in an extreme but imaginable case of exactly the wrong person winning the Presidency and leading the charge against the forces of Darkness and Evil among us.

So, in addition to arguments of economic effectiveness of the Science industry, there are arguments about the survival at all of what's left of it. Most large companies in the USA seem to have abandoned long term research entirely, and even boast about it.

But if Science falls to this new cycle of the Luddite axe, it will be largely Science's own fault and own sin of omission and inaction that led to it. For way too many people, life is getting worse, not better. Too many scientific and technological "miracles" turn out to be "catastrophes" in disguise when seen in their actual social context. One instant example is the introduction of the electric water pump to India -- which in the very short run resulted in a great increase in food production and population, while exhausting the aquifer, and now the final state of man is worse than the first.

It's not OK for Science to say "We did our part - everyone else screwed up!" Maybe if inventions were created with an understanding of context, we'd have fewer devices created in isolation being applied in destructive ways. Maybe, at this point in time, it's more important for the best and brightest brains on the planet to stop messing with subatomic bozos (or bosons), and turn to coming to grips with the fact that the ship we're all on is listing somewhat to one side at an increasing rate.

I don't know what to do about this problem but it seems important. With 1 Gigahertz processors in everyone's briefcase, don't we have enough collective computing power to address some of these more important questions like better ways to collectively understand complex situations and discuss them and find solutions?

"Not my job!" say many. Yep, at least that's right. It won't be your job much longer at the rate Science is sinking in social relevance and importance while it's busy in its stateroom admiring its new outfit and feeling all proud.

It's time to take a serious look again at the Scientific Method in the large, and ask what we could do to improve its functioning by a factor of 10 or 100, using the new technology we have, particularly for addressing complex feedback scenarios which is where it's all breaking down out here in the cold.

More energy will only make it worse, moving us closer to catastrophic warming. More water will only make it worse, as the primary use of water is to wash toxins from point A into the ocean, killing the ocean. More food will only make it worse. Antibiotics are soon to make things worse. Any "pointwise" solution turns out to not be a solution at all.

Enough with the local, pointwise, isolated "solutions." We need some system-wide actual solutions! If Science can't deliver those, then get off the playing field so the rest of us have room to work.

W.

Monday, July 02, 2007

The power of delusion -- genetic causality

What was reported as a dramatic event came this week, if we are to believe it: the official recognition of the fact that human genes cooperate as complex systems, not as some sort of "one gene, one function" machine tools.

Here's the heart of the New York Times article today (7/2/07) by Denise Caruso, identified as follows: "Denise Caruso is executive director of the Hybrid Vigor Institute, which studies collaborative problem-solving. E-mail: dcaruso@nytimes.com."
A Challenge to Gene Theory, a Tougher Look at Biotech

The $73.5 billion global biotech business may soon have to grapple with a discovery that calls into question the scientific principles on which it was founded.

Last month, a consortium of scientists published findings that challenge the traditional view of how genes function. The exhaustive four-year effort was organized by the United States National Human Genome Research Institute and carried out by 35 groups from 80 organizations around the world. To their surprise, researchers found that the human genome might not be a “tidy collection of independent genes” after all, with each sequence of DNA linked to a single function, such as a predisposition to diabetes or heart disease.

Instead, genes appear to operate in a complex network, and interact and overlap with one another and with other components in ways not yet fully understood. According to the institute, these findings will challenge scientists “to rethink some long-held views about what genes are and what they do.”

[T]he report is likely to have repercussions far beyond the laboratory. The presumption that genes operate independently has been institutionalized since 1976, when the first biotech company was founded. In fact, it is the economic and regulatory foundation on which the entire biotechnology industry is built.

But when it comes to innovations in food and medicine, belief can be dangerous.

Overprescribing antibiotics for virtually every ailment has given rise to “superbugs” that are now virtually unkillable.

The principle that gave rise to the biotech industry promised benefits that were equally compelling. Known as the Central Dogma of molecular biology, it stated that each gene in living organisms, from humans to bacteria, carries the information needed to construct one protein.

The scientists who invented recombinant DNA in 1973 built their innovation on this mechanistic, “one gene, one protein” principle.

Because donor genes could be associated with specific functions, with discrete properties and clear boundaries, scientists then believed that a gene from any organism could fit neatly and predictably into a larger design — one that products and companies could be built around, and that could be protected by intellectual-property laws.

In the United States, the Patent and Trademark Office allows genes to be patented on the basis of this uniform effect or function.

In the context of the consortium’s findings, this definition now raises some fundamental questions about the defensibility of those patents.

“We’re learning that many diseases are caused not by the action of single genes, but by the interplay among multiple genes,” Mr. Caulfield said.

Even more important than patent laws are safety issues raised by the consortium’s findings. ...

“Because gene patents and the genetic engineering process itself are both defined in terms of genes acting independently,” he said, “regulators may be unaware of the potential impacts arising from these network effects.”

With no such reporting requirements, companies and regulators alike will continue to “blind themselves to network effects,” he said.


Now, the field of "System Dynamics", celebrating its 50th anniversary this week, is devoted to studying how to describe, analyze, and design complex systems made up of many components interacting in "non-linear" ways -- which is to say, interacting so that any given "function" is carried out by many different components acting in concert.

This property, which I've been calling a "scale-invariant" design principle, can be found at all levels of life, or any computer system, from cellular components to genetic "circuits" to humans in a sports team or office, to scientists themselves doing research, to the role individual corporations have in the ecology of the economy.

The big question in my mind isn't really that genes interact and cooperate in getting their chores done -- it's that our best researchers took 31 years to figure this out, working together, in the face of what is sure to be seen, in hindsight, as overwhelming evidence that it is true.

This gets me back to yesterday's post on "The Power of Yarn", and the single sentence that captured the essence of that for me in the Yarn Harlot's story: "There are some truths. Things that just are the way they are, and no amount of desperate human optimism will change them."

One of these truths is that living things operate in complex ecologies, not designed to make life easy to analyze. Another such truth is that "feedback is important" and that, again quoting the yarn harlot,
See how 10 is bigger than 9? See how there is no way that 10 can be made smaller than 9?
I've been asserting almost daily that the "scientific method" has a major weakness, as practiced, in that it focuses our attention on separable parts and on analysis based on the General Linear Model, which assumes critically that causality is not circular -- that is, that there are no feedback loops. Unfortunately for those who wish for such simplicity, Life is dense with such feedback loops, if not actually defined by such loops.

It is an astonishing fact of life, which the Times article reveals, that the desire for life to be simpler is so powerful that it can cause 10,000 "trained" scientists, with PhD's, to take 30 years to finally collectively observe what others outside their mutual-blindness-field already knew.

As I've said, textbooks such as "Feedback Control of Dynamic Systems" are in their 5th editions in Control System Engineering, but biologists, and much of public health's biomedical research community, discount that literature to the point of invisibility and effectively treat it with contempt. To them, this literature does not exist. When seen, it "comes as news to them", and is promptly forgotten, because it conflicts with the shared myth of their culture, and cultural myths always win out over boring contrary evidence.

Science, as an enterprise, as practiced by real people in the real world, is not immune or exempt from such behavior. I really must tip my hat to the late Dorothy Nelkin, who gave a graduate seminar back in the 70's or so at Cornell on "The Sociology of Science", for awakening me to this fact -- which, to someone trained as a physicist, was "news to me."

Similarly, Science, as an enterprise, and Medical Science as well, should not be astonished, but often are, that people outside their internally-blinding-fields have less regard for the collective ability to discern truth than the scientists inside the myth-field would expect. In fact, it sometimes appears from outside that the "scientific method", as practiced, produces a type of "idiot-savant" who can see with tremendous power along such a narrow trajectory that they have almost complete peripheral blindness. Their history of crashed theories and trail of mistaken certainties are painfully evident to outsiders, but almost invisible from within.

If confronted with the trail of past casualties of the "scientific method", we get the response "See, it works!" -- when, as with biology, it took 30 years before they got around to being forced to see something that makes their lives more inconvenient and part of their training irrelevant or impotent. Comfortable delusion wins out, especially if shared with everyone nearby and only challenged by distant outsiders who are clearly ignorant fools.

So, yes, it is true that some biologists have started to realize that in some cases Life involves complex systems and feedback. Perhaps in another 30-50 years this will be dealt with, and, golly, they might realize that feedback crosses the vertical hierarchy and "local" events may in fact be determined by "distal" factors or even social factors. But I won't hold my breath, because (a) I can't hold it that long, and (b) this fact would be so inconvenient, and such a problem, that it will find some way to be rejected yet again for another 30 years.

Yesterday, somehow prompted by doing the Times's Sunday crossword puzzle, I came across a history of how the US military stubbornly refused to see that airplanes could possibly damage ships at sea -- a fact that flew in the face of existing "doctrine." Just as Semmelweis was ostracized and removed for his myth-challenging assertion that it was doctors' dirty hands that were causing women to die in labor or surgery, so Billy Mitchell was court-martialed after demonstrating to the military that their official doctrine had clay feet.

It is a little puzzling that very good researchers -- who wouldn't think of peeking at the identifiers of samples in a double-blind experiment, precisely to defend against bias -- can operate in a world with such huge collective bias against certain ideas, remain oblivious to it, and resist the meta-idea that such bias exists and that they, caught up in that non-level playing field, have a huge effective bias distorting their results that they are unaware of and not properly countering.

If they knew it was there, yes, they would adjust for it. I love scientists. Part of my heritage is science. They're good researchers, but they're simply not familiar with the power of context to focus and blind and bias their very own selves to facts that are trying to leap off the page. Stephen Jay Gould documented much of the power of this effect so well in The Mismeasure of Man, but most scientists haven't read that, or think it doesn't apply to them because "they're very careful."

This is the heart of all the work in high-reliability systems as well -- how to overcome collectively formed mental models and myths and paradigms that have taken hold and are now blinding everyone to facts they should be seeing, but aren't.

Well, maybe at last, with computer modeling and the power of interactive animations, researchers may realize that bias comes in many sizes, and that the larger mental models are almost as hard to see, for those embedded within them, as gravity waves.

It's not that only scientists are prone to this, but most of the rest of us have a little more humility or experience, and realize our judgment is not 100% reliable. Scientists who have checked off all the boxes within their own tiny trajectory -- one that has now become their entire world -- seem, collectively, to lack such humility: sort of an iatrogenic side effect of the PhD process, and of hanging around a very non-diverse crowd that shares the same viewpoint.

These silos of tertiary specialization are the source of much friction, particularly if it is not recognized that the distortion of the perspective of the silo is causing the blindness.

More on this in some later post. It's too important to breeze by, and core to the frustrating battle between religion and science over large-scale social processes.

This is the challenge all organizations, all cultures, all s-loops face -- how to achieve dynamic stability: to resist type-1 errors of being too gullible and believing flashes in the pan, while still avoiding type-2 errors of being so stubbornly fixed on a particular data value, or mental model, or paradigm, or goal-set, or identity that no feedback can be accepted at all, and there is no reasonable way to get updates up to the top where they can do any good.

This is perhaps the single largest core cybernetic challenge for a survival-enhancing model.
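One way to see that trade-off in miniature (my own toy, nothing more) is a single smoothing constant: a system that accepts a big fraction of each surprise chases every flash in the pan, while one that accepts almost none never registers a real change.

def track(signal, alpha):
    est = signal[0]
    history = []
    for x in signal:
        est += alpha * (x - est)       # accept a fraction of each surprise
        history.append(est)
    return history

# World: quiet, one spurious spike, quiet again, then a real level shift to 5.
world = [0]*20 + [8] + [0]*19 + [5]*40
for alpha in (0.9, 0.3, 0.02):
    est = track(world, alpha)
    print(f"alpha={alpha:4.2f}: right after spike={est[20]:5.2f}, "
          f"20 steps into real shift={est[59]:5.2f}")

Neither extreme survives well; the cybernetic trick is tuning -- or adaptively re-tuning -- that acceptance rate.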


Wade