
Tuesday, October 28, 2008

Why we have so much trouble seeing


(Columbia shuttle launch / NASA)



"The difficulty lies, not in the new ideas, but in escaping from the old ones, which ramify, for those brought up as most of us have been, into every corner of our minds." (John Maynard Keynes)

To understand how we “see things”, we need to realize that vision is not at all some kind of biological TV camera that simply projects its image where we can view it carefully and without bias. The picture that forms has been so filtered, edited, and amended as to sometimes bear little relationship at all to what is before us. Our hopes, fears, mental models, stereotypes and prejudices intervene long before the image delivered to us has been formed – as surely as a political candidate’s own words have been replaced by many layers of handlers. And, worse, the intervention is itself as invisible to us, and as hard to see, as our eyes’ own “blind spots” – which are effectively papered over with an extrapolation of the surroundings so that we are not burdened (or informed) by what is there.

In our evolution it was valuable to be able to discard the ten thousand leaves and, based solely on a little patch showing through here and there, to connect the dots and so perceive the dangerous animal behind them, and to do so with sufficient certainty that we would take immediate defensive action, even if sometimes over-reacting to shadows. The process is built into our hardware and is automatic and invisible. The process is accelerated if everyone else around us is screaming and running – we too see the beast, real or not.

Two features of our visual system contribute greatly to disagreements between humans about what is “obviously going on”.

One feature is a type of automatic “zoom” feature, which brings whatever we are contemplating at whatever scale to just fill our mental TV screen. Whether it is tying our shoe-lace, or contemplating global thermonuclear war, the subject occupies exactly one mental screen.

A second feature, adopted from our need to survive, is the way our eyes cause anything that is constant to fade from view, literally, so that we are able to detect quickly anything that is moving or changing or different.

These two features combine to make it startlingly easy to take some small disagreement between two people and have each person “blow it all out of proportion” and lose track entirely of how much in common they have, and all the good things they share. After cooling down, each wonders how that could possibly have occurred. This is a perfect example of a problem actually caused by the “features” of our visual system.

Another problem is the astounding impact of context on how “the exact same data” is seen on our mental TV screen.

Here’s one example, in which you should simply ignore the background and note that the two vertical red bars are exactly the same height. It is extremely hard to do, even after you print out the image and measure them and confirm it.


Below is an even stronger illusion.


The dark gray square at the top was made by simply cutting out a section of the "light" gray square in the "shadow", and pasting it up in the white background area.

Your eyes "auto-correct" it for you to account for the "shadow." You can’t stop them from doing this. I have yet to find anyone who can easily “see” that the two squares marked are the same shade of gray, even when they have confirmed that they are.

I know this seems hard to believe, so do this: print out the picture, get a pair of scissors, and cut out the square in the shadow and slide it over to the edge, where it magically "changes color" and becomes dark. As you slide it out of the "shadow", the same square changes shade right in front of you.
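If you'd rather not reach for the scissors, the same check can be made digitally. Below is a minimal sketch in Python using the Pillow imaging library; the filename and pixel coordinates are made up, so substitute the ones that match your copy of the image.

    # A minimal sketch, assuming the illusion is saved as "illusion.png"
    # (hypothetical name) and that the made-up coordinates below land
    # inside the two marked squares. Requires Pillow: pip install pillow
    from PIL import Image

    img = Image.open("illusion.png").convert("RGB")

    shadow_square = img.getpixel((120, 200))     # square inside the "shadow"
    background_square = img.getpixel((120, 40))  # square on the light background

    print("Square in shadow:    ", shadow_square)
    print("Square on background:", background_square)
    # On a faithful copy the two RGB triples are identical, even though
    # your eyes insist one square is darker than the other.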

This is just one of the thousands of things your perceptual system is doing to be "helpful" to you, including altering the way you perceive people around you, so that they fit your mental model of how things "should" be.

The same effect is at work if you're deep into depression, when your mind is "helpfully" coloring everything around you "depressing" before it shows it to you.

That's what makes prejudice or bias or depression so hard to detect and treat - they seem so "obvious" and "external" that you can't figure out that your eyes changed reality before they showed it to you. This realization that your mind can lie, convincingly, to you, is the first step in Cognitive Behavioral Therapy and overcoming depression.

So, our minds and eyes can be gripped with not just an image, but an attitude or mental model that is almost alive, that filters and twists and selects and changes everything around us to fit its own view and thereby survive. It fights back against our inroads, undoing our progress. No wonder earlier humans thought they had become “possessed” by a demon.

This, sadly, is not just something that afflicted ancient man and that we, being modern, have escaped. We have the same bodies and visual systems that ancient man had, with all the pros and cons.

In modern terms, we are captive to mental models and feedback loops. The famous economist John Maynard Keynes observed the same thing (quoted at http://en.wikiquote.org/wiki/John_Maynard_Keynes):

The General Theory of Employment, Interest and Money (1936)


  • The difficulty lies, not in the new ideas, but in escaping from the old ones, which ramify, for those brought up as most of us have been, into every corner of our minds.
    • Preface
  • The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.
    • Ch. 24 "Concluding Notes"

Sadly, we have not even exhausted the features of human perception that control us invisibly, intervening before we can see what they have done.

Charles Schulz’s cartoon character Snoopy, lying atop his doghouse one night, captured it perfectly as he mused:

Did you ever notice

That if you think about something at 2 AM

And then again at noon the next day

You get two different answers?

An equivalent morsel of wisdom from the “Dennis the Menace” cartoon is this thought, as Dennis sits in the corner being punished for his latest mischief:

“How come dumb stuff seems so smart when you’re doing it?”

For better or worse, we are all caught up in an invisible current created by those around and near us, especially our peers. The resulting “group think” can often lead us all to the same wrong conclusion at once, and then sort of latch that thought in where none of us can escape “seeing it” as “obvious”.

This might not be so bad, but if we simultaneously interpret those who disagree as “enemies, out to destroy us”, we have a serious problem.

In any case, as we have all experienced, it is far easier to fall into mischief or sin or wrong ideas if the entire herd around us has already fallen into it.

This impact is remarkably strong, and well known to magicians. If only one person in an audience sees through your trick but no one else near them sees it, they will tend, strongly, to actually “un-see” what they “thought they saw” to reduce the discord.

Because all these effects take place before the images reach your mental TV screen, you can try all you want to be “unbiased” after that, with no impact. And usually, if charged with being biased or prejudiced, people react with anger and outrage, because they are trying to “be careful.” Sadly, they are carefully reasoning with distorted information.

One professor I had in Business School was involved in the design of the Pentagon’s War Room. He noted that, by the time the billions of pieces of information had been processed, filtered, summarized, tweaked, and massaged to fit into a one-page summary, the conclusion was already built in by the system. Anyone viewing that information would draw the same conclusion, right or wrong. The War Room or central headquarters concept has a fatal flaw that way. How, for example, could General Motors executives not realize that people would switch to smaller cars when their financial pain rose? From the outside, it seems incredible.

Corporations and large organizations have a worse problem, one that so far no one besides me seems to have noticed: what small facts or “dots” add up to, and how they connect, depends on what scale you are operating on, not just on where you stand.

Here’s one of the classic pictures that illustrate the problem. View this image from normal viewing range, and then stand up, walk across the room, turn and look again.

The image above is from the 31 March 2007 issue of New Scientist, from a paper entitled 'Hybrid Images'.
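For the curious, the trick behind such hybrid images is to blend the low spatial frequencies of one picture with the high spatial frequencies of another: up close the fine detail dominates, and from across the room only the blurry layer survives. Here is a rough sketch of that construction in Python with the Pillow library; the filenames and blur radius are invented for illustration, and the paper's actual method is more careful about filter design.

    # A rough sketch of assembling a hybrid image from two same-sized
    # grayscale photos, "far.png" and "near.png" (hypothetical names).
    # Requires Pillow: pip install pillow
    from PIL import Image, ImageChops, ImageFilter

    far = Image.open("far.png").convert("L")    # visible from across the room
    near = Image.open("near.png").convert("L")  # visible up close

    # Low-pass layer: just a Gaussian blur of the "far" picture.
    low_pass = far.filter(ImageFilter.GaussianBlur(radius=8))

    # High-pass layer: the "near" picture minus its own blurred copy,
    # offset to mid-gray so negative values remain visible.
    near_blur = near.filter(ImageFilter.GaussianBlur(radius=8))
    high_pass = ImageChops.subtract(near, near_blur, scale=1.0, offset=128)

    # Average the two layers to get the hybrid.
    hybrid = ImageChops.add(low_pass, high_pass, scale=2.0, offset=-64)
    hybrid.save("hybrid.png")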

http://www.yoism.org/?q=node/141 has many more such images and illusions, as well as this delightful picture:


People who liked this post may like these as well:

Things we have to believe to see

Why men don't ask for directions

Pisa/OECD - Why our education stresses the wrong way of seeing

Failure is perhaps our most taboo subject (link to John Gall Systemantics)

Active strength through emergent synthesis

US - Economy of arrogance (and blindness)

Virtue drives the bottom line - secrets of high-reliability systems

High-Reliability Organizations and asking for help

Secrets of High-Reliability Organizations (in depth, academic paper)

High-Reliability.org web site

Threat and Error Management - aviation and hospital safety

Houston - we have another problem (on complexity and limits of one person's mind)

Institute of Medicine - Crossing the Quality Chasm and microsystems (small group teamwork)

Here are a few quotations from MIT Professor John Sterman's textbook "Business Dynamics".

Many advocate the development of systems thinking - the ability to see the world as a complex system, in which we understand that "you can't just do one thing" and that "everything is connected to everything else." (p4)

Such learning is difficult and rare because a variety of structural impediments thwart the feedback processes required for learning to be successful. (p5)

Quoting Lewis Thomas (1974):
When you are confronted by any complex social system, such as an urban center or a hamster, with things about it that you're dissatisfied with and anxious to fix, you cannot just step in and set about fixing things with much hope of helping. This realization is one of the sore discouragements of our century.... You cannot meddle with one part of a complex system from the outside without the almost certain risk of setting off disastrous events that you hadn't counted on in other, remote parts. If you want to fix something you are first obligated to understand ... the whole system ... Intervening is a way of causing trouble.


In reality there are no side effects, there are just effects.

Unanticipated side effects arise because we too often act as if cause and effect were always closely linked in time and space. (p 11)

Most of us do not appreciate the ubiquity and invisibility of mental models, instead believing naively that our senses reveal the world as it is (p16).

The development of systems thinking is a double-loop learning process in which we replace a reductionist, narrow, short-run static view of the world with a holistic, broad, long-term dynamic view and then redesign our processes and institutions accordingly. (p18)

Quoting Nobel Prize winner Herbert Simon (p26) : The capacity of the human mind for formulating and solving complex problems is very small compared with the size of the problem...

These studies led me to suggest that the observed dysfunction in dynamically complex settings arises from mis-perceptions of feedback. The mental models people use to guide their decisions are dynamically deficient. As discussed above, people generally adopt an event-based, open-loop view of causality, ignore feedback processes, fail to appreciate time delays between action and response in the reporting of information, ... (p27)

Further, the experiments show the misperceptions of feedback are robust to experience, financial incentives, and the presence of market institutions... First our cognitive maps of the causal structure of systems are vastly simplified compared to the complexity of the systems themselves. Second, we are unable to infer correctly the dynamics of all but the simplest causal maps. (p27)

People tend to think in single-strand causal series and have difficulty in systems with side effects and multiple causal pathways (much less feedback loops). (p28)

A fundamental principle of system dynamics states that the structure of the system gives rise to its behavior. However, people have a strong tendency to ... "blame the person rather than the system". We ... lose sight of how the structure of the system shaped our choices ... [which] diverts our attention from ... points where redesigning the system or governing policy can have a significant, sustained, beneficial effect on performance (Forrester 1969). (p29)

People cannot simulate mentally even the simplest possible feedback system, the first order linear positive feedback loop. (p29). Using more data points or graphing the data did not help, and mathematical training did not improve performance. (p29). People suffer from overconfidence ... wishful thinking ... and the illusion of control... Memory is distorted by hindsight, the availability and salience of examples, and the desirability of outcomes.
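To see how unforgiving even that "simplest possible" loop is, here is a minimal simulation sketch in Python. The 10% growth rate and 50 periods are arbitrary illustration values, not taken from Sterman.

    # First-order linear positive feedback loop: a stock whose inflow
    # each period is proportional to the stock itself.
    stock = 100.0
    growth_rate = 0.10  # net inflow = growth_rate * stock

    for period in range(1, 51):
        stock += growth_rate * stock  # the loop: more stock, more inflow
        if period % 10 == 0:
            print(f"period {period:3d}: stock = {stock:12.1f}")

    # Asked to extrapolate this by eye, most people guess something close
    # to a straight line; the actual trajectory is exponential, ending
    # more than a hundred times the starting value by period 50.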

The research convincingly shows that scientists and professionals, not only "ordinary" people, suffer from many of these judgmental biases. (p30). Experiments show the tendency to seek confirmation is robust in the face of training in logic, mathematics, and statistics. (p31).

We avoid publicly testing our hypotheses and beliefs and avoid threatening issues. Above all, defensive behavior involves covering up the defensiveness and making these issues undiscussable, even when all parties are aware they exist. (p32).

Defensive routines often yield group-think where members of a group mutually reinforce their current beliefs, suppress dissent, and seal themselves off from those with different views or possible disconfirming evidence. Defensive routines ensure that the mental models of team members remain ill formed, ambiguous, and hidden. Thus learning by groups can suffer even beyond the impediments to individual learning. (p33).

Virtual worlds are the only practical way to experience catastrophe in advance of the real thing. In an afternoon, one can gain years of simulated experience. (p35).

The use of virtual worlds in managerial tasks, where the simulation compresses into minutes or hours dynamics extending over years or decades, is more recent and less widely adopted. Yet these are precisely the settings where ... the stakes are highest. (p35).

Without the discipline and constraint imposed by the rigorous testing imposed by simulation, it becomes all too easy for mental models to be driven by ideology or unconscious bias. (p37).

System dynamics was designed specifically to overcome these limitations. ... As Wolstenholme (1990) argues, qualitative systems tools should be made widely available so that those with limited mathematical background can benefit from them. (p38).

Most important ... simulation becomes the main, and perhaps the only way you can discover for yourself how complex systems work. (p38).


Saturday, November 17, 2007

I've been framed!


There are two ways to change the meaning of something - you can change the something, or you can change the context in which you say it. If we don't account for this, we will make terrible mistakes in communicating with each other, and even with ourselves. If we grasp this, we can overcome many of the problems that plague our world today, which are the results of unrealized context shifts. We have content-processors, but what we need now are "context-processors."

We all know that a quote, taken "by itself" out of context can be totally different than what it meant at the time. This is often visible in courtroom dramas, where the person is asked by the attorney, "Answer, yes or no, did you say this?" followed by some damning phrase or sentence that sounds totally wrong out of context. We all know this is unfair and somehow wrong, but don't have a strong way to assert that or to understand how pervasive this effect is.

It doesn't just affect communications. It affects our ability to work alone!

My favorite expression of this truth was a cartoon one day by Charles Schulz of Snoopy, the dog, lying on top of his doghouse, staring at the stars and pondering. He said:
Did you ever notice
that if you think about something at 2 AM
and then again at noon the next day
you get two different answers?
This cartoon is profound. Slow down and consider what this means. This says that a correctly functioning human being has a context-sensitive thinker-thingie that produces different answers to the same inputs depending on what larger context it is sitting in at the time.

This is, in my mind, a "feature not a bug." In fact, this seems to me to be the key to reconciling humans and resolving age old conflicts that have seemed totally impossible to tackle.

This is also a critical insight in trying to figure out how to make decisions today that don't seem totally stupid tomorrow.

That's true whether you are a person, a group, a corporation, or a nation.

We are walking around comparing "content" and failing to account for different "context" in which that content was perceived or generated. In small, local worlds where context is shared and identical among people, we used to be able to get away with that. Once we start trying to cross cultures or "silos" of expertise, and do something interdisciplinary or international, this tends to trip us up every time. We didn't learn the "general case."

Content is explicit, obvious, the kind of thing you can hit with a hammer. Context is implicit, invisible, unstated, and hard to describe even when you try. But it is vital that we learn how to do this, to get by in a diverse world - a world in which different people are operating in different contexts but trying to communicate with each other over space and time.

It is crucial when we try to take some thought or observation, about a patient, say, and "record it" in some electronic database where we will pull it up a year later and compare the two to see what changed. Are we capturing what we need to do that assessment correctly? Are we writing something down in words that will bring up the right thoughts to a different doctor next year?

Tragically, we have failed, socially, to understand the full implications of this issue. The miracle of technology allows us to store or send content across space at the speed of light, but, oopsie, forgot about the context part of the message. What gets delivered is not what was sent, in huge ways.

It does not have to be this way. In the same way that we have built computers that do content-processing correctly, we can build environments that do context-processing correctly. It is critically important that we learn how to do that.

Now, these effects are not flaws in humans that would go away if we were all "rational" or "scientists" or if we all based our judgment on "data" and "evidence." These effects are properties of the very nature of space, time, and information itself. We cannot "get around them" or ignore them. We are going to have to learn how to account for them correctly.

It doesn't have to be hard, but it does have to be done, or we'll keep fighting needless wars, between parties that actually agree with each other but don't realize it.

Take the example of "perspective" -- a distortion of space where it appears to each observer that things "far away from them" are small, and things "close to them" are large, and as you move towards a distant building or mountain it "gets larger."

At some point in life as infants we figure out that the thing we're looking at actually isn't changing size at all, it's an illusion, a distortion, caused by where we are looking from, our viewpoint. If we didn't correct for this, we could argue all day about which of two things was "bigger" and what was "fair" and not get a resolution, because A looks bigger to me than B, but looks smaller to you. Once we correct for that perspective distortion, we can resolve that question in a way that makes us both happy. This happened so early in our lives we forget we had to learn it.

There is a popular misconception that because things are "relative", there is no underlying reality, and no way to ever reconcile them. Einstein said the opposite. He said that actually, once you understand what is going on, you can completely reconcile observations made by two competent observers, relative to their own reference frames, all the time, every time. You can totally account for the changes, say, in perspective between two observers, and figure out entirely how the world I see needs to be warped and twisted to give the world you see.

Computer animators and virtual worlds have to deal with this "perspective" or "viewpoint" transformation all the time. It's a lot of bookkeeping under the covers, but straightforward if you do it carefully.
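For the curious, a minimal sketch of that bookkeeping is the classic pinhole projection, where apparent size is just true size divided by distance. The focal length and coordinates below are invented illustration values.

    # Pinhole perspective projection: apparent size shrinks with distance.
    def project(x, y, z, focal_length=1.0):
        """Project a 3D point onto a 2D screen plane at the given focal length."""
        return (focal_length * x / z, focal_length * y / z)

    # The same one-unit-tall object, seen from 2 units away and 10 units away:
    near_top = project(0.0, 1.0, 2.0)
    far_top = project(0.0, 1.0, 10.0)
    print("apparent height at z=2: ", near_top[1])   # 0.5
    print("apparent height at z=10:", far_top[1])    # 0.1

    # The object never changed size; only the viewpoint did. Reconciling
    # two observers is a matter of undoing the divide-by-z for each.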

Unfortunately, there are other shifts in context that are less familiar to us that impact our ability to reach agreement. The problem is very deep, as I said, built into the nature of space and time itself.

Once, in my astrophysics grad student days, I took a course in Cosmology and General Relativity. I didn't get it all, but I got enough to get the story, which is not that hard to tell, and does not require math. Please don't flee. There will be no quiz. Everyone here gets an "A".

The essence of Einstein's General Theory of Relativity could be distilled down to three insights, widely misunderstood. You don't need to be Einstein to understand these ideas.

The insights are that
  • content and context are completely equivalent in what can be said, though widely different in what can be expressed easily in each;
  • you cannot change one without affecting the other;
  • if you do your sums right, the changes are completely predictable.

In the world described by relativity, the meaning of a very concrete phrase or physical expression or very real measurement of velocity, say, changes as you slide it around in space and time, most famously if you attach the observation to different observers traveling at different speeds. I can only observe your speed "relative to me", so depending on how fast I'm going, I'll measure something different.

This is no big deal. People in a car on the highway appear stationary to each other, even though the car is speeding down the road.

Or, if I'm standing on the Earth, I see the sun, obviously moving across the sky, going around the Earth. If I'm standing in some space ship off to the side, I see the earth spinning and the sun remaining fixed. These observations are both right, relative to the "reference frame" in which they were made. To reconcile them you have to account for the difference in frames used by two completely competent observers.
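At everyday speeds, the reconciliation rule is the simple Galilean one: subtract the observer's own velocity. A tiny sketch, with invented numbers, shows the highway example in code.

    # Galilean frame transformation: v' = v - u, valid at everyday speeds.
    def velocity_in_frame(v_object, v_observer):
        """Velocity of an object as measured by a (possibly moving) observer."""
        return v_object - v_observer

    car = 100.0        # car's speed relative to the road, in km/h
    passenger = 100.0  # a passenger riding in that same car

    print(velocity_in_frame(car, v_observer=0.0))        # roadside observer: 100.0
    print(velocity_in_frame(car, v_observer=passenger))  # fellow passenger: 0.0

    # Both observers are right, relative to their own frames, and the
    # transformation between frames reconciles them exactly.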

Implications
==========

Well, what does this mean? For one thing, it means that where you work or spend time thinking or talking to each other affects what result you'll come up with. A decision that is "obvious", made in a bunker-like dimly-lit War-Room deep underground might not be at all the same decision that would have been made, given the "same facts", in a cheery, sunny deck in a woodsy retreat on a warm spring day.

It means that an "obvious" decision about what to do next, made standing in an urban war zone with explosions in the distance is not the same decision, given the same information, that is "obvious" viewed by people safely out of harm's way, at their leisure, later, reviewing the tapes over coffee and some nice Danish.

In fact, as a child, I observed that what seemed to make the great leaders "great" in war movies wasn't that they were brilliant, but that they simply managed to remain stable and sane when the world around them had gone to hell. They remained connected to a larger, stable world despite the fact that their bodies were located in a locally unstable one.

Maybe there is value in having content-intensive work like "science" embedded in the larger, stabler social frameworks that religions have sometimes produced in the past. I find it fascinating that, according to a recent issue of New Scientist, geneticists are discovering that far from being "junk DNA", the DNA between the 22,000 genes that code for proteins (content) may be even more important, and this "junk" codes for the larger context that decides when and whether that content should be expressed, or modified in the way it is expressed. (Junking the Junk DNA, New Scientist, July 11, 2007)

Yesterday, I did a post on the software world "Second Life" and possible roles of virtual reality in getting people to experience worlds they couldn't get to on their own. Today, I want to add to that the idea that virtual worlds are virtual contexts, which means that you can conceivably adjust not just the contents of a scene, but the context of the scene in which those contents are embedded.

This may be the tool we need to explore more how context and content interact with each other for humans, and to learn how susceptible we all are to "framing" of an issue. We can understand how advertisers or demagogues try to use propaganda techniques to shift the frames of discussions so that, even though we seem to be the same people, we end up making different decisions. Even though we don't feel manipulated, we have been - by Madison Avenue agencies that know how to send broadband messages in context-modulation that bypass all our cognitive protections against content-manipulation. That's what TV is all about, to them.

Dirty pool aside, honest and diligent CEOs and civic leaders need to understand what an idea will sound like, or be taken to mean, in hundreds of different contexts, to know how to process the input they get, or how to say anything that won't offend one group or another.

If nothing else, cars for Latin America, for example, shouldn't be named "Nova" - since "No va" in Spanish means "won't go." Underarm deodorant shouldn't be advertised in Tokyo using a happy octopus logo, since in Japan an octopus doesn't have 8 arms -- it has 8 legs. Oopsie.

(photo Walking Alone, by me, on Flickr)

Thursday, October 18, 2007

Taking a Whack Against Comcast

This week Comcast came to our apartment building and did their usual "two-phase" installation for a new customer. It's always the same - we've seen them do this maybe 8 times in the past 10 years.

First, one guy comes, gets what seems like 100 extra feet of bright orange cable, and runs it from the junction box at one end of the building across the middle of the lawn, across the sidewalk at the entrance to the building, around my mother's plants, up over a patio wall and down into a snaky spiral of an extra 40 feet or so, left on their patio before it connects to the service entrance there.

After a few days of disbelief that they really consider this "done", we call our landlords, who call Comcast, who sends a second guy to do it right. The second guy always seems very sincerely irritated with the first guy. In a way, too bad - I was thinking if my mom tripped on it, we'd be set for life once our attorneys got done with them. Then I dismiss that thought, mostly.

Anyway, it baffles us how they make money doing this, let alone good customer relations. It makes me wonder again if this is even visible at the top, or if every instance of this is discounted and written off as "grumpy customer - isolated incident." It has to be the latter.

It reminds me of when my then wife and I got "Swine flu" shots back about 1980. We'd been healthy for years, got the shot, and woke up the next day very sick. We were sick for days. When we reported this to our doctor, he said this couldn't be a result of the shot, because it didn't have that reported side-effect. We asked if he was going to report our experience. He said no, there was no point, because this was clearly an isolated coincidence. Hmm.

In that mood I was perversely delighted to see the article in this morning's Washington Post titled "Taking a Whack Against Comcast", about a 75-year-old lady who took her irritation at poor customer service to a new level, involving a hammer. I won't spoil it for you.

Still, it seems a prototype of a reality that belongs in the comic strip Dilbert. How can a company's top management be so blind, or uncaring, or out of touch with reality that they can't believe this is happening, or, believe it but don't care?

The first impulse is to say "Well, they are bad people." However, I warned against that repeatedly in recent posts, and suspect that they are actually, as bizarre as it seems, good people who are victims (on their end) of looking at reality through the wrong end of the telescope, not being able to see what all the fuss is about, and therefore writing it off as "Well, they are bad people" (that is, the customers who complain).

This problem is so widespread among companies that it becomes really important. And when the companies are hospitals, as with the issue at Kaiser earlier this week in California, the problem becomes life-threatening at the bottom, and then, probably to their surprise, life-threatening to the whole company at the top, who "never saw it coming."

No one wins from this, and no one really investigates "it" because it's so "obvious" to people on each end that the problem is due to "bad people" on the other end.

Those are the kinds of situations we study in System Dynamics, and in books such as Peter Senge's The Fifth Discipline - where "the system" is actually the problem, but no one understands how that happens, so everyone assumes the breakdowns are due to "idiots" or evil people at "the other end."

That's the challenge for the conflict-resolution crowd - to disentangle all the entrenched blame from a problem that is as subtle as the error in M. C. Escher's Waterfall, and then to find the structural "system" problem actually responsible for the resulting perceptions and behavior.

Yes, I think there are "bad people" - but way fewer than we commonly assume. Hybrid scale-dependent vision seems to me the most common culprit in setting the stage for things to go wrong, and then feedback loops close the trap and lock it in place.

Comcast could probably save enough money if they fixed this problem on their end to relieve the pressure that makes them push the first installers so hard that their only choice is to rush and run. There are two "stable states" of the system -- doing a good job and making a decent profit, and doing a terrible job, with everyone unhappy, and making less profit. The problem is that the middle state is worse than either of those, so the system gets hung up on the "less profit" local maximum (the top of the little hill) and can't find its way down and back up to the top of the larger profit hill, because "down is bad" and pressure to make profit is so high.
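To make the "two hills" picture concrete, here is a toy sketch in Python: an invented profit curve with a small peak and a larger one, and a greedy improver that only accepts steps that raise profit immediately. It is an illustration of the local-maximum trap, not a model of Comcast.

    # A made-up profit landscape: a small hump near effort=1 (cut corners)
    # and a bigger hump near effort=4 (do the job right), with a valley between.
    def profit(effort):
        return (max(0.0, 2 - (effort - 1) ** 2)
                + max(0.0, 5 - (effort - 4) ** 2))

    effort, step = 0.5, 0.1
    # Greedy hill-climbing: only move if profit rises immediately.
    while profit(effort + step) > profit(effort):
        effort += step

    print(f"stuck at effort={effort:.1f}, profit={profit(effort):.2f}")
    # The climber halts on the small hill near effort=1.0, because every
    # step toward the bigger hill first goes "down" - and the pressure
    # for short-term profit forbids going down.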

The challenge is getting people to believe long enough that the "obvious culprits" might be innocent that they can step back and look at what is really going wrong.