Showing posts with label collaboration.

Wednesday, October 21, 2009

Untouchables revisited



Columnist Tom Friedman, writing in today's New York Times as another 100 people are laid off in the Times newsroom, assesses the qualities of "the untouchables" -- those who will keep their jobs when the others are long gone.

I think he misses the point: in that mental model of how a solution might work, the sea of others is going to drown the few. I suggest a better way, one that will sound familiar to my regular readers.

Maybe, we can help each other out, and get by with a lot of help from our friends.
=============================

Ann Arbor, MI
October 21st, 2009
4:14 am

Indeed, "Man must wait on corner long time for roast duck to fly into mouth."

We are, sadly, more crippled by our educational system than we are helped by it, as gurus such as W. Edwards Deming have noted. The reason is not that we don't teach enough stuff, but that we teach "the wrong stuff."

We teach math and science and sometimes English and social studies, but all in a context of intense individual competition, as if we are waiting for very bright individual humans to save us. That "roast duck" isn't coming back. No individual is bright enough to grasp our social problems any more, and even if they could grasp their own and succeed, it has lately been at the expense of all of the rest of us, not as our leader and savior. Wealth isn't what's trickling down.

There is a solution, and it is realizing that people, like computers, are a thousand times more powerful in cooperative networks than as isolated "mainframes" or even "super-computers." All the new research on social intelligence and high-performance teams shows that we are at our best, and that best is a thousand times better than our worst, when we are in a high-functioning team.

But that is exactly what "do your own work" trains us out of. We need courses in how to make friends, how to ask for help and get it, how to be a team member, how to be a good or great leader and / or follower, how to build relationships, and WHY we need cohesion and synergy far more than bright Rambo-style individuals.

Done correctly this does not dampen or quench the brightest of us -- it lifts and empowers and supercharges the brightest of us. Teams of people working this way are shown repeatedly to have extraordinarily high performance, quality, and productivity.

That's the corner we need to turn, the new model we need to understand, and respond to. Otherwise, we will turn on each other and collapse, like a shattering glass.

---------

(PS - yes it IS possible to have a work place where, when you get a good idea, other people enthusiastically support you instead of thinking, "Damn, there goes my own raise and job." In fact, that type of socially uplifting context is the only place where really good ideas can be nurtured and evolve and where truly creative problem comprehension and solving occurs.)


Related Posts


Houston, we have another problem.

The importance of social relationships.

DOWNSIDE OF MISMANAGED PSYCHOSOCIAL FACTORS: "Negative Deviance"



UPSIDE OF CORRECT PSYCHOSOCIAL FACTORS: "Positive Deviance"



OECD and International Student Assessment



Tuesday, October 07, 2008

Now what - contemplating the market crash


You say "Americans deserve to hear much more detail about how the candidates would reform the financial system to prevent another crisis like this one."

I think we desperately need to expand the frame in which "this problem" is perceived. That discussion should precede solutions within that frame.

For one thing, this cannot be isolated to be a "financial system" problem. It involves other economic and social systems, including trust, social decision-making, governance, jobs, social safety nets, education, attitudes towards expertise, sources of blindness in how humans see the world, etc.

They are all tangled. They cannot be solved "separately", or put into a priority order. If there are 200 holes in the bottom of the boat, addressing the "three most important ones" doesn't really cut it. Causes, effects, symptoms, are in tangled feedback loops.

Most of all, I don't think that 200 bright people can "solve" this while the rest of the country and world watches TV, or that it will be solved by voting to pick the best of the two solutions they come up with in some back room somewhere.

A billion people are willing to help. How can that work? THAT's the question we should be addressing.

— Raymond, Detroit



See also:

Failure is perhaps our most taboo subject (link to John Gall Systemantics)

Active strength through emergent synthesis

Why more math and science are not the answer.

OECD PISA - Our education system should teach collaboration not competition

US - Economy of arrogance (and blindness)

Virtue drives the bottom line - secrets of high-reliability systems

High-Reliability Organizations and asking for help

Secrets of High-Reliability Organizations (in depth, academic paper)

High-Reliability.org web site

Threat and Error Management - aviation and hospital safety

Houston - we have another problem (on complexity and limits of one person's mind)

Institute of Medicine - Crossing the Quality Chasm and microsystems (small group teamwork)

Pathways to Peace - beautiful slides and reflections to music on the value of virtues

You say "No system can be smart enough to survive this level of incompetence and recklessness by the people charged to run it."

T.S. Eliot, writing in the last Great Depression, in "Choruses from 'The Rock'", said it well.

"They constantly try to escape
From the darkness outside and within
By dreaming of systems so perfect that no one will need to be good.
But the man that is shall shadow
The man that pretends to be."

===========================

MIT's John Sterman, in his book "System Dynamics - Systems Thinking and Modeling for a Complex World", describes how poor intuition is at predicting the behavior of "complex adaptive systems."

Books like Gene Franklin's control systems engineering textbook, "Feedback Control of Dynamic Systems," describe the universally applicable conditions for any system of any type to be stable, and I don't see those conditions met or even discussed.

The only thing that seems CLEAR to me is that a whole new feedback loop has been added, responding with almost certainly short-range horizons to events that used to be decoupled and that will now be coupled by that unpredictable response.

We are way past the point where well intentioned humans can follow their "insight" and improve things with that strategy.
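To make that concrete, here is a toy sketch of my own (not an example from Sterman's or Franklin's books): a discrete-time system that damps out a disturbance just fine on its own, until one more feedback loop, reacting strongly and with a delay, is layered on top of it.

```python
# A minimal, hypothetical illustration of the stability point above.
# x is some "gap" (a price error, an inventory error) that decays toward
# zero by itself.  We then add one extra corrective loop that pushes back
# on the gap it saw `delay` steps ago, with strength `extra_gain`.

def simulate(extra_gain, delay, steps=60):
    history = [1.0]                                  # start with a unit disturbance
    for t in range(steps):
        x = history[-1]
        delayed_x = history[-1 - delay] if t >= delay else 0.0
        x_next = 0.9 * x - extra_gain * delayed_x    # intrinsic damping + delayed push-back
        history.append(x_next)
    return history

for gain, delay in [(0.0, 0), (0.3, 3), (1.0, 3)]:
    tail = simulate(gain, delay)[-5:]
    print(f"extra_gain={gain}, delay={delay} -> last values:",
          [round(v, 2) for v in tail])
```

With no extra loop, the disturbance simply dies out. With a modest delayed loop it rings but still settles. With a strong delayed loop, the very same system oscillates and then grows without bound. That is the flavor of instability those textbook stability conditions are about, and it is why "one more well-intentioned corrective loop" is not automatically a safe thing to add.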

I have a number of relevant quotes from Sterman's book on my weblog post on the credit crunch that I made in August, 2007: http://newbricks.blogspot.com/2007/08/credit-crunch-reaches-larger.html and this post I made in January, 2007 on Jay Forrester's Law of Unintended Consequences: http://newbricks.blogspot.com/2007/01/law-of-unintended-consequences.html

At risk of running on, I briefly quote that paper: The classic paper in this field is Jay Forrester's congressional testimony: "The Counterintuitive Behavior of Social Systems", http://web.mit.edu/sdg/www/D-4468-2.Counterintuitive.pdf

Quoting the abstract: Society becomes frustrated as repeated attacks on deficiencies in social systems lead only to worse symptoms. Legislation is debated and passed with great hope, but many programs prove to be ineffective. Results are often far short of expectations. Because dynamic behavior of social systems is not understood, government programs often cause exactly the reverse of desired results.

==============================

Thursday, December 06, 2007

The "togetherness effect"

Yesterday I commented on the OECD's international student assessment, and I'd like to complete that thought. I believe that there is a solid path to the future from here, but it is not paved with math and science. Supercomputers today are actually super-powerful networks of hundreds of thousands of simple individual computers. That design pattern works. We simply need to use the same design pattern socially.

The power we need to manage our society and overcome our obstacles is already here in the "white space" between us, not in making each one of us some sort of all-wise genius.

So we need to learn how to manage the white space. In computer science, this would be called a distributed "operating system." The term "social capital" is related to this. "Emergence" and "synergy" and "teamwork" are related terms.

"Non-linear" is a related term. What that term is trying to convey is that the sum of two things is often greater than you would expect, and is much larger than what you think you'd get by adding up each thing separately.

One example that used to be familiar to every human, but is much rarer now, is a property of actual fires made of actual wood burning, or at least real charcoal in a grill. When it gets to that stage where the visible flame is mostly out and the coals are red-hot, it has a "non-linear" property. If you lay out these red-hot coals in a long row separated by a hand-width, they will probably die out. If you heap them into a pile, where each is near many others, they will glow brightly and keep going. You can do this experiment and see this for yourself. This is a "real fact." And it is a very important fact to keep in mind.

Or, if you're more scientific, you can look up the references to "cavity radiators", which say the same thing. If you take a piece of metal with a hollow space inside it, and drill a small hole into that cavity, and heat the metal until the outside just starts to glow, you'll see that the small hole you drilled is glowing much more brightly than the outside of the metal.

Here's a link to cavity radiation, also known as "black body radiation". Here's a relevant quote from that link:
"Blackbody radiation" or "cavity radiation" refers to an object or system which absorbs all radiation incident upon it and re-radiates energy which is characteristic of this radiating system only, not dependent upon the type of radiation which is incident upon it.
The brightness is a property of the fact that the hole allows each molecule to "see" many more molecules across it than it would "see" if it were just surrounded by the dozen nearest neighbors it has in a solid form. The brightness arises from this interaction, from this mutual encouragement and stimulation. Well, the math gets way more complicated, but that's the net effect.

It's a property of nature, and it's an important one to be sure you know, and be sure you believe in with unshakable faith. Things together burn more brightly than things separately. The "extra" brightness is a property of the "togetherness", not a property of the things.

In fact, the effect of the "togetherness factor" is so powerful that it really doesn't matter what the things are. In the limit, the nature of the "things" simply drops out of the equations, and the "togetherness" takes on a life of its own.
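If you want to see the "things drop out" claim in miniature, here is a small back-of-the-envelope sketch (my own simplification, using the standard geometric-series estimate of a cavity's apparent emissivity; real cavity calculations are more careful).

```python
# A surface with emissivity e radiates a fraction e of what a perfect
# blackbody would, and reflects the fraction (1 - e).  Inside a cavity
# whose opening is only a small fraction f of the interior surface area,
# reflected radiation mostly stays inside and gets reabsorbed or re-emitted,
# so the opening behaves almost like a perfect blackbody:
#
#     apparent_emissivity = e / (1 - (1 - e) * (1 - f))

def apparent_emissivity(e, f):
    return e / (1.0 - (1.0 - e) * (1.0 - f))

for e in (0.1, 0.5, 0.9):            # dull metal ... nearly black surface
    for f in (0.5, 0.1, 0.01):       # wide opening ... tiny drilled hole
        print(f"material e={e}, opening fraction f={f}: "
              f"the hole looks like e={apparent_emissivity(e, f):.3f}")
```

Run it and you'll see that as the hole gets small, the hole glows like a nearly perfect blackbody whether the material started at emissivity 0.1 or 0.9. The material, the "thing", drops out; the geometry of togetherness is what's left.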

You can read about this. You can try the experiment with charcoal and verify it. This actually works. You can read about today's supercomputers here. This actually works.

So, now here is the huge emotional and intellectual leap: If this "togetherness factor" effect is so real and powerful, why aren't we using it more to solve our social problems?

As I reflected in my last post, why are we still obsessed with trying to make the "things" super powerful (and experts in math and science), when, in the end, what actually matters is the togetherness effect?

In fact, as W. Edwards Deming was so vocal about, our so-called "educational system" seems designed to destroy the togetherness, instead of to encourage it. Our concept of a "school" is a place where students compete to see whose "thing" is bigger, or better, or smarter, and then we can select those with the "best" thingies and boost them up even more with top honors and praise and scholarships and prizes until they have super-hot-shot-great thingies, even better than the Chinese, you betcha. And then, oh boy, then we'll "win" for sure.

Huh? Researchers in computer science gave up that approach 25 years ago, when they discovered that networks were way easier to build than super-CPUs. IBM's "Blue Gene" architecture has 65,000 dual-chip processors all networked together.

Researchers in Artificial Intelligence gave up the idea 25 years ago when they discovered that networks of very simple "rules" produced more "intelligence", way more easily, than some huge "if-then-else" program that turned out to be impossible to write anyway. I was on a panel on Expert Systems in Anaheim at a conference once, at the table with a team that had just redesigned the aiming program for the Hubble Space Telescope. They had reduced one "supercomplex" program with 50,000 lines of FORTRAN to 210 simple rules, which was 1000 times easier to write, and to maintain, and to debug, and took a lot less room to boot. The rules were things like "If the solar panels are out, don't plan to use stars they block as guide stars."

My expert program scheduled students into rooms for the Cornell Business School, while maximizing the odds that students could get their desired courses, faculty could sleep late, no one had courses Friday afternoons, the women were evenly distributed across different sections of classes, everyone had a section of something with every other person at least once, groups of 20 students had multiple classes together so you only needed to find one friend to get notes for all the classes you missed, etc. It was still being used 8 years after I left; maybe it's been replaced by now. I would never have tried to take on those specs with a classical program written in some language like PL/1 or C or even LISP. Using many simple rules instead of one huge program let me write something 100 times more powerful than I could have otherwise.

I picked the idea of using rules because they are so easy to change and evolve over time. If a rule changes in the real world, or someone realizes a new rule or constraint, you just change that one rule in the system and hit the "run again" button. That's it. No logic diagrams or long nights wondering if every possible case has been thought of.
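To give a flavor of what that looks like (a made-up miniature of my own, not the actual Cornell or Hubble code), each "rule" can be a tiny function that accepts or vetoes a candidate, and the whole "program" is just a loop that asks every rule for its opinion:

```python
# A hypothetical, miniature rule-based scheduler in the spirit described above.

from itertools import product

rules = []

def rule(fn):
    """Register one small, independently editable constraint."""
    rules.append(fn)
    return fn

@rule
def no_friday_afternoon(candidate):
    day, hour, room = candidate["slot"]
    return not (day == "Fri" and hour >= 13)

@rule
def room_big_enough(candidate):
    _, _, room = candidate["slot"]
    return ROOMS[room] >= candidate["class_size"]

@rule
def faculty_sleep_late(candidate):
    _, hour, _ = candidate["slot"]
    return hour >= 10

ROOMS = {"A101": 40, "B201": 25}                     # room -> capacity
SLOTS = list(product(("Mon", "Wed", "Fri"), (9, 11, 14), ROOMS))

def feasible_slots(class_size):
    """Keep only the slots that every registered rule accepts."""
    for slot in SLOTS:
        candidate = {"slot": slot, "class_size": class_size}
        if all(r(candidate) for r in rules):
            yield slot

print(list(feasible_slots(class_size=30)))
```

Adding or changing a constraint is just adding or editing one little function and hitting "run again" -- which is exactly the property that made this approach so much easier to live with than one huge monolithic program.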

Not surprisingly, in this context, biological genes operate exactly the same way. No single gene does one job -- mostly, they all interact with each other, and it's the interaction that does the heavy logic and lifting. So even God / Nature / biology selected this as the design pattern that made sense to use for the long haul. Since I'm a big believer in patterns that transcend any particular level or scale, and can be reused on other levels and scales, I'd say: if it works for genes and it works for computers, then it should work for people -- or for sure we should look and see whether it does or not. It's a really easy solution to really hard problems and we'd be fools not to check it out near the start of our homework assignment.

This is the trick used to pack a human being into a mere 25,000 protein-encoding genes, which is only 5,000 more genes than a roundworm has. That's because the number 25,000 factorial is way larger than 20,000 factorial. It's not the genes that make us human - it's the interaction between the genes. The "program" is in the "white space" between the genes. When our bodies break, much of the time, as with current parallel processing software, the breakdown is "between the lines" not "on a line of code" -- that is, it's the collaboration that's broken.
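As a rough check on that arithmetic (my own back-of-the-envelope numbers, counting only pairwise interactions, which if anything understates the point):

```python
import math

# Ratio of 25,000! to 20,000!, computed via log-gamma to avoid overflow.
log10_ratio = (math.lgamma(25001) - math.lgamma(20001)) / math.log(10)
print(f"25,000! / 20,000! is roughly 10^{log10_ratio:,.0f}")

# Even the humble count of possible gene *pairs* grows faster than the gene count:
def pairs(n):
    return n * (n - 1) // 2

print(f"{pairs(20_000):,} possible pairs among 20,000 genes")
print(f"{pairs(25_000):,} possible pairs among 25,000 genes")
```

A 25% increase in the number of genes buys a 56% increase in the number of possible pairwise interactions, and the higher-order combinations explode far faster than that. The extra capacity really does live in the white space.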

Note also that our bodies have no such thing as a "master gene" or a "king gene" or a "Rambo gene" that directs all of the other genes and tells them what to do. Fascinating. The same thing is true of our visual system - we have subsystems that detect edges, or color, or texture, or motion, but there is no "king system" that runs all the subsystems -- the subsystems just kind of keep up a continual dialog with each other about "what do you make of THAT?" and the vision "emerges" from that interaction. This is a very, very, very powerful design pattern.

Ditto for large-scale heaps of legacy systems, such as a major medical center might have. All these fragments have to try to talk to each other successfully, and most of the breakdowns are not "in" a system so much as "between" systems. Again, it's the white-space on the chart of "systems" that is the "system" we need to keep an eye on.

I've been pondering this for some time. See Intelligent Agent Infrastructures For Supporting Collaborative Work (Sen, Durfee, and Schuette, 1995 - Computer Science and Engineering graduate project, EECS Department, University of Michigan)

In radio astronomy we can find other examples. The huge 1000-foot "dish" antenna at Arecibo, Puerto Rico, was great, and I have a spot in my heart for it since my then room-mate at Cornell designed the central antenna, but times move on, and arrays of many smaller dishes are more powerful and more flexible and way cheaper to build. It's all the same principle. Many things, working together, can always trump one big thing. Always. Supercomputers. Intelligence. Perception of the universe. Always.

The same thing is true of sports teams, or of any kind of team. The teamwork factor, or what I'm calling the "togetherness effect" can be more powerful than any single superstar. Great coaches look for good players that, above all, are good team-players, not egotistical superstars.

The "Rambo" model is dead, or should be. It fights back and tries to stay alive, tries to keep its grip on our consciousness, tries to keep us mesmerized in believing in it.

I could speculate on the role of male psychology and biology on this historical obsession with whose thing is larger, but the honest to God truth is that size doesn't matter. What matters is the togetherness effect. If we could advance past the 1600's into the 21st century and recognize that, we could solve our social problems instead of being perplexed and baffled as to why even our best schools aren't putting out students with thingies so good that they are solving our problems for us.

I mean, come on, guys. Get over it.

"Group work" should not be some sort of after-thought that is tacked on to our "education" if there is time at the end. Group work, and how to make groups work, should be the focus of our entire educational process. Everything else can be "tacked on" if there is time at the end.

There are physical laws involved here. There is no way to make a single computer chip that is so powerful that you can't do ten times better with a network of much smaller chips. Or 100 times. Or an unlimited factor times. The "up" part is in the interaction, not in the thingie.

We have our educational system entirely backwards. That's what Deming said. I agree.

Pouring our national treasure into trying to generate a few more students with super-fantastic math and science understanding is not the answer. "These are not the droids we're looking for." If we did succeed, say, which could take 30-40 years, and every American could score a perfect score on the SAT exam, it wouldn't solve our social problems. Look at Oxford, as I mentioned yesterday - a community of brilliant scholars and they can barely manage to keep the place operating, if that. We'd succeed in making a bigger one of those. Big deal. Whoopie.

Look, reflect, and learn. This is a curve. The road turns here. The future is not in the same direction we were going, even though it is the same road. The cheese has moved.

I just don't know how to say it more vividly. If you open your eyes it's right there. The answer is right there. We can pull our heads out of the sand of cultural depression about the size of our thingies, and get on with life, and solve our problems, and go explore the stars and find really neat stuff.

Also known as "God has a wonderful plan for your life." Look at the center of the neighboring galaxy, where the interactions are millions of times more powerful than "they should be", and feel the power that is right here, waiting for us to accept. There is hope, after all, for even us.


Photo credits:
M31 (Andromeda) Galaxy, Arecibo and Very Large Array radio telescopes are all from Wikipedia.
Cavity radiator pictures are from the textbook Physics by Halliday & Resnick, 2nd ed.

Saturday, November 10, 2007

Survival of the selfless


"The consequences of regarding evolution as a multilevel process, with higher-level selection often overriding lower-level selection, are profound." This under-statement is in the latest issue of New Scientist, in a must-read piece titled "Survival of the selfless", by sociobiologists David Sloan Wilson and Edward O. Wilson. (New Scientist , 3 Nov 2007).

Indeed. Since I've been presenting the case for multi-level co-evolution in my weblogs for the last 2 years, I am ecstatic to see some big names in the field take the same position.

This furor is about whether it is "genes" that evolve, or "individuals", or groups of individuals such as tribes or species. These views were, and still are, held by many otherwise rational scientists with religious fervor in the worst sense, and they arouse vehemence when challenged akin to that between creationists and evolution-supporters.

This matters because higher level groups may have a whole different "fitness" measure than individuals, and while individuals or genes might evolve faster by being "selfish", the whole society of individuals might evolve faster if everyone was cooperative and altruistic. This battle continues to rage today, and is a core issue in whether "competition" and "free markets" are a good idea or not. So it is tangled with social ramifications, just like all science ultimately is.

This is also a core question in whether a "Theory X" company, driven by internal competition between managers, can ultimately be out-performed by a "Theory Y" company, like Toyota, driven by massive internal cooperation. A lot of egos are at risk of being bruised. A lot of justification for public policy is at risk of being overturned. It's a big deal.

Well, which is it? Do individuals evolve by being better at beating each other, or do groups of individuals dominate by being better at collaboration?

Peeking ahead, of course, I usually argue that "or" is a bad concept, once feedback is involved, and the right solution to look at usually involves "and" and "all of the above, simultaneously, interacting." But, "all of the above, interacting with feedback" was way beyond anyone's ability to compute or analyze, and not an attractive model for most researchers or grant writers.
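In fact, the standard bookkeeping in the field already writes the two levels as a sum rather than an either/or. In my own notation (the Wilsons' piece itself stays non-mathematical), the multilevel form of the Price equation splits the change in the average of a trait z into a between-group part and a within-group part:

```latex
% Multilevel Price equation, transmission-bias term omitted:
\bar{w}\,\Delta\bar{z}
  = \mathrm{Cov}_{k}\left(W_{k},\, Z_{k}\right)                                      % selection between groups
  + \mathrm{E}_{k}\!\left[\mathrm{Cov}_{i \in k}\left(w_{ik},\, z_{ik}\right)\right]  % selection within groups
```

Altruism typically loses the within-group term (altruists do worse than their selfish neighbors) but can win the between-group term (groups with more altruists out-produce other groups). Which term dominates, and when, is exactly the empirical question being reopened.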

Well, back to this article. In the face of enormous opposition, and bucking a consensus in the field that group-level evolution is a dead concept, they really settle for the weak claim that "we cannot rule out group-level selection."

Hmm. What's this all about? The concept is fascinating, and the sociology of science is equally fascinating here. The Wilsons ask "Why was group selection rejected so decisively [in the 1960's]?" What a great question about how Science works!

Now, I should note that I'm one of the casualties of what seems a similar disastrous and mistaken turn of a field, namely Artificial Intelligence ("AI"). I got hooked by a course at Cornell in 1965, taught by Professor Frank Rosenblatt, titled "Learning and Self-Reproducing Machines". He and his lab had developed a "perceptron", a maze of switches and wires connected up to a 20x20 grid of 400 photocells, on which letters of the alphabet were projected.

The perceptron, a model of human vision and learning, was slowly learning to tell the letters apart and identify them. At the time, this was astounding, and many scientists confidently argued that this could never be done. Later, of course, Kurzweil and others carried this technology forward and made OCR text-scanners that are now about 99.5% accurate or better and can read license-plates at an angle from a speeding car. But, in 1965, telling "A" from "B" was a big deal, especially if the "A" wasn't always exactly straight up and down, or in the same place on the grid.

The perceptron's insides were a network of wires and "nodes", a model of our brain's neurons, where the total strength of the signal coming into each node was added up, multiplied by some factor, and either triggered or didn't trigger an outgoing signal to the next layer. The system learned by changing these multiplicative factors, searching for some set of them that would ultimately trigger the highest-level "A" node when an A was projected on its primitive retina, trigger the "B" node when a B was projected, etc.

Then, the field was devastated by a very authoritative and persuasive paper, ultimately retracted, by highly regarded MIT professor Marvin Minsky, claiming that this approach "could never work." Funding dried up, and researchers moved on to other projects. Labs closed.

It took over a decade until someone finally figured out that Minsky had simply proven that a two-level neural net had irrecoverable gaps in its logic, and was not "complete". What he failed to look at, or see, was that these gaps went away when you got to three levels or more.

Wikipedia has this quote:
Its proof that perceptrons can not solve even some simple problems such as XOR caused the virtual disappearance of artificial neural networks from academic research during the 1970s, until researchers could prove that more complex networks are capable of solving these and all functions.
(source: Hassoun, Mohamad H., Fundamentals of Artificial Neural Networks, The MIT Press, 1995. pp. 35-56.)
Oopsie.
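For anyone who wants to see the gap, and why one more level closes it, here is a minimal sketch (toy, hand-picked weights of my own; nothing here is from Rosenblatt's or Minsky's actual work):

```python
# XOR with threshold units: no single-layer perceptron can compute it,
# but two hidden units plus an output unit can.

import itertools

def step(x):
    return 1 if x > 0 else 0

CASES = [(0, 0), (0, 1), (1, 0), (1, 1)]
XOR = {(a, b): a ^ b for a, b in CASES}

# 1) Brute-force a grid of single-layer weights and biases: none of them work,
#    because XOR is not linearly separable.
grid = [w / 2 for w in range(-4, 5)]              # -2.0, -1.5, ..., 2.0
single_layer_ok = any(
    all(step(w1 * a + w2 * b + bias) == XOR[(a, b)] for a, b in CASES)
    for w1, w2, bias in itertools.product(grid, repeat=3)
)
print("any single-layer perceptron on this grid solves XOR?", single_layer_ok)

# 2) One hidden layer: an OR detector, an AND detector, and an output unit
#    that fires for "OR but not AND" -- which is XOR.
def two_layer(a, b):
    h_or = step(a + b - 0.5)
    h_and = step(a + b - 1.5)
    return step(h_or - h_and - 0.5)

print("two-layer net matches XOR?",
      all(two_layer(a, b) == XOR[(a, b)] for a, b in CASES))
```

The gap Minsky proved was real -- for two levels. Add one more level of interaction and it closes, which is the part that got overlooked for a decade.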

Anyway, it appears to this observer that a similar phenomenon has occurred in socio-biology. Some very persuasive people published papers "debunking" multi-level evolution, well before there was enough computer power to actually simulate it and see what happened. (In 1978, the mainframe computer I was programming had 4,000 bytes of memory to work with. Not 4 Gig, or 4 Meg, but 4 thousand. Any cell phone today has more than that.)

The social climate at the time made this debunking seem a better idea. World Communism was the mortal enemy of all that was good and holy, threatening "our way of life", justifying huge military expenditures, and anything that suggested communal good or community was more important than individuals was instantly suspect and risked being dragged before the House UnAmerican Activities Committee, where the proponent had to renounce their views or be thrown out of their jobs or locked up as being "unAmerican." Everyone was building bomb-shelters for protection against that day's terror threat.

In addition, religions had held for thousands of years that there was really nothing of importance between man and God, and that man was God's noblest creation, so the idea that something larger than humans but smaller than God mattered was suspect dogma. (These days, the evolution of the earth and global warming are in fact dominated by such a larger life-form, "corporations", which have more or less hijacked the role individuals used to play in influencing governmental decisions and policies. But that observation lives in world "A", and discussions of evolution live in world "B", and the two don't talk to each other or trade notes.)

Then, of course, some people didn't like the idea of evolution in any form, and rejected it and most of biology and science based on that view.

So, for many reasons, some good, some not so good, the idea of group evolution as a dominant or even important force was denounced, rejected with emotion, and painted as an example of wrong thinking to be avoided at all costs.

Now, by what the Wilsons say, the whole question is being raised again, this time in a climate with much more powerful computers, where cooperation and collaboration in corporations are not always dirty words, and where the old theories, frankly, didn't explain why there was just so much altruism and goodness in people.

As I say, I'm delighted.

Also, as I've posted on before, "feedback" and dynamics are finally starting to be considered in models, and multi-level causality keeps showing up, increasingly, in how individuals behave, to the point where the National Institutes of Health and the Institute of Medicine talk about the necessity of using multi-level models to understand social interactions and how the things we see around us, like poverty, are held in place by many subtle but very powerful forces at different levels.

Fascinatingly, this gets us back to what Charles Darwin himself said in The Descent of Man, published in 1871, and the lead sentence in the Wilsons' article:

Although a high standard of morality gives but a slight or no advantage to each individual man and his children over the other men of the same tribe ... an advancement in the standard of morality will certainly give an immense advantage to one tribe over another.

The Wilsons say that group evolution, versus individual evolution, doesn't yet explain the observed rise in altruism, although they can compute a visible impact.

Immodestly, I'll suggest that they need, like Minsky, to look at three levels, not two, to see the effect start to dominate.

In fact, all around us, we see corporations trying to survive and be more fit than the competition, by pouring resources into internal cooperation and collaboration. So while individuals may continue to follow "greedy algorithms" and seek their own advancement, the corporation is making the playing field non-level and rewarding collaboration as the method of getting ahead personally. In that sense, corporate policy is serving one role of religion - seeing the larger picture, thinking globally, and then trying to shift the local context so that relatively less visionary individuals, acting locally, will do the right thing if they just follow the rules.

This "think globally, act locally" function is the key role that "unity" has to handle, and it works best if people stop fighting the behaviors and yield and embrace the behaviors instead.

People have to let go of their own ego, "die to themselves" as it were, only to be "reborn" where their ego now includes the other people in the larger village or family or corporation they now have committed to belong to. In some sense, they are now just selfish at a whole larger level, as those villages or corporations or religions or nations start competing, and the whole cycle begins again at a higher level, as they too have to learn that collaboration beats competition hands down in the long run, even if it doesn't seem obvious locally.

This phase transition is one we should be looking for and supporting. It's built so deeply into the fabric of space and time and control loops that it is inevitable and always working in the background, at ever higher levels, simultaneously. At least, that's how I see it.

Of course, that would imply that it won't be long before earth discovers we're just one inhabited planet of millions such planets, and we have to deal with the whole unity/diversity and competition / collaboration thing all over again at an even larger scale and scope.

Which is a model that some people don't like, so this can get emotional again. Still, I think we need to get used to the idea that we are not at the top of God's creation, just below God ourselves, but maybe quite a few levels lower than that. Earth is not in the center of our galaxy, nor in the center of the universe.

It's a very scary concept to some people, if the world is seen as a place where competition and dog-eat-dog dominates. That belief leads to an imperative to dominate the world, before someone else dominates you. On the other hand, if the world is a place where cooperation and collaboration dominate, then it is a far less scary place, and we should "get with the program."

Already our corporations, internally, are undergoing this transformation. Kicking and screaming, often, but they cannot deny that the Toyota model outshines the GM model.

We need to speed the transition on the level of nations and religions as well, and find that sweet spot where cooperation and collaboration work so much better than competition for dominance and attempts at mutual destruction.

All of those struggles are tied up in this question of the way nature, life, and/or God operate here and what the design principles are that we can rely on to work. The cells in our bodies don't triumph in "beating" each other, but in collaborating with each other. It's a good model, and it's been field tested, and it works.

We should stop fighting it and use it.

Wade


The New Scientist article says of itself:
This is an edited, abridged version of a review in the December issue of The Quarterly Review of Biology. Further reading: D. S. Wilson's book Evolution for Everyone: How Darwin's theory can change the way we think about our lives describes multilevel selection theory for a broad audience. E. O. Wilson and B. Hölldobler's forthcoming book The Superorganism analyzes how insect colonies can be seen as products of colony-level selection.

Friday, November 09, 2007

The problem with collaboration

One of the problems with collaboration is that it often only works if many people do it. There is no good way to get to it one person at a time. For example, everyone now thinks the telephone is a great idea, but picture yourself 100 years ago, trying to sell someone the very first telephone. Why would anyone want it? You couldn't call anyone else, and no one could call you.

It's relatively easy to learn how to use a hammer or spreadsheet by yourself, because it only takes one person (you!) to make it work. It's harder to learn how to use e-mail effectively as a business tool, because there are two learning curves - each person has to learn what keys make something happen, and then everyone has to learn simultaneously what the new powers and pitfalls are of this approach. After a period of getting no e-mail and failing to check it regularly, you get going and run into "flame wars" and "spam" and come back from vacation to 250 emails and people angry that you didn't respond in 4 hours.

Then there's damage caused by the terrible "lost message" problem, where someone responds to you, but the message goes astray somehow, so they think they sent it and you think they didn't. They get silently upset that you aren't thankful, and you get silently upset that they refuse to respond to you, and it can chill or destroy relationships without anyone realizing what happened. This is actually a fairly common problem caused by the use of e-mail.

E-mail is an example of a "socio-technical system". It appears on the surface to be just a new technology, like a spreadsheet or word processor. That is, it appears to be in "technology" space. In reality, however, it operates primarily in social space, changing the way people interact. And, when it breaks down these days, it can break in either the technical space or the social space. The technical bugs and breakdowns are obvious. The social bugs and breakdowns are often not at all obvious.

So, it is actually quite hard to develop and test socio-technical systems. A good design will deal with all aspects of use, not just the technical ones. Many times, software people who grew up with single-user systems mistakenly think that all they need to worry about is the software and the "human interface", and forget the "social interface" part. When the system breaks down socially, they treat this as a big surprise, or something outside their control or responsibility. It should be part of the design. Not including it is like designing a bridge that only works if no cars actually drive over it.

If it's hard to get many people's expectations adjusted and synchronized for e-mail, imagine the issues involved if the software requires everyone, not just some people, to use it, and use it well for it to work out. Again, like the telephone, it can be a great idea, but is hard to start.

This is where hospitals are right now with computerized physician order-entry systems (CPOE). It doesn't do any good to send an order via this new technology if the person you are sending it to can't receive it, or doesn't check, or gets upset that you sent it. The stakes are quite high for "lost messages" when the message might be "Urgent! Don't give that drug to this person!" So, the expected place such systems will break down is in social space, which tends to be overlooked by IT people, or treated as some sort of single-user training issue that can be solved by teaching which keystrokes do what. As with e-mail, the keystrokes are just the beginning of the battle. "Implementation" then is far more than scheduling which people or units will "go live" on what dates, and being sure "the software works." That's just the leading edge of the actual social implementation and adjustments required that will make or break the system.

It's not too surprising that many such implementations fail, leaving the IT people just baffled since they had tested "the software" and it worked just fine. Odds are, no one had budgeted, staffed, and done an equal amount of testing and validation on the "social-ware" part of the system, mistaking single-user human-interface for all that needed to work, and blaming any larger social system failures on "bad users" versus "bad design." ( The bridge works fine so long as no one drives on it -- the design is perfect, and perfectly useless. )

Well, those are problems where at least some of the system is visible to the IT people. Then there are even harder problems, where most of the system is invisible and in social space. These are the areas where "systems thinking" is required, but again much of this work fails unless everyone understands it.

So, tomorrow I'll discuss how life-like simulators, like cockpit simulators, are used to train pilots both in which buttons to push, and in what happens when two or three of them try to push buttons at the same time. These are two entirely different training problems, as the airlines took decades to learn, but have finally grasped.

Still, studies show that three-quarters of commercial airline accidents occur on the very first day a new set of people, all fully trained in which button does what, have to try to cooperate as a crew and fly the same plane at the same time. The social part, collaboration, is fully as important as the technical part, button pushing. Both are required of today's pilots.

This kind of training is something I'd like to explore for the executive suite -- as a way to take managers, trained in Theory X organizations, and teach them how to fly a Theory Y organization, in the privacy of their own office, anonymously, so no one has to see their clumsy and awkward period of figuring out which way is up, and discovering that all their old instincts and reactions and intuitions are now wrong, or worse than wrong.

Friday, November 02, 2007

Decentralized sense-making in a cluttered world



If central planning isn't helpful for sense-making in a complex and cluttered world, what is?

The world of "image-processing" in computing has come up with some techniques that seem interesting models for action. I want to describe one that I've used in the past. You don't need any math for this. I tried to make it easy to follow.

The problem we had involved finding the edges of a brain tumor on a 3-dimensional Magnetic Resonance Imaging image. This is actually a set of "slices", stacked like a deck of cards, across a section of the brain.

Each slice looks something like this picture, which is a cross-section image I pulled off the web from the NIH Image database of public sample images. That's a vertical "slice" through someone's head, facing to the left. (Note - the person isn't actually sliced or injured - the computer just makes it look that way!)

Maybe if you think of baking an orange into a loaf of bread, and then running it through a bread slicer -- you get the image of a stack of slices, starting with all bread and no orange, then really small circles of orange, then slices with larger circles, then smaller again, and finally bread slices with no orange at all. Our job is to find the orange in the pictures of the slices of bread and reconstruct what it looks like in 3-D.

If there is or might be a tumor, it's important to find the edges as accurately as possible, based on these kinds of images. That's not as easy as you might think, because when you zoom to the high magnification, the images are actually pretty blurry and "noisy" and hard to read as to where, exactly, an "edge" is.

Here's some structure in a brain, probably not a tumor, for illustration. If you click on the image, you can zoom it up and see some sort of black dot with a white border fairly easily in the upper right, "Slice #19". But if you look at the previous slice, the next "card in the deck of cards", "Slice #18", the edges of this are less distinct and this slice of the orange is smaller.


Similarly around slice 20, maybe we can still be fairly sure we "see" the edges of the white structure, but by slice 21 it's not clear what is structure and what is just normal tissue.

And, we're using the magic of human eyes. We want some way the computer can do a better job than people at finding the edges of a structure, once a trained radiologist points it out. (This was all done over a decade ago and I suspect they have way better tools today, by the way.)

Anyway, let me describe how the edges can be found. Look at it on one slice first. Imagine surrounding the tumor with a line of people attached by stretchable elastic cords or "slinkies" or springs. In this picture I just drew a red dot instead of a person, but you get the idea. Pretend that's the view from above of many people with red hats connected to each other with adjustable bungee cords.

Then, you ask each person, when he gets to a place where he looks down and sees dark changing to light rapidly -- which might be the edge of the tumor -- to dig in his heels and try to stay there. But, at the same time, you start making the springs stronger, pulling people towards each other.

As you do that, initially with the springs fairly stretchy and loose, the circle starts being pulled smaller and smaller, like a drawstring tightening on a purse. When each person gets to what seems like it might be an edge, they try to drag their heels and stop moving, independently.

After this has gone on a while, you may end up with something like this:


You can see that most of the people have found the edge of the tumor and dug in their heels there. But people #1 and #2 found a bright edge that is probably just "noise". And the people numbered "3" have found something that is hard to classify: tumor or noise?

How should they decide?

In this technique, if you just start tightening the springs, at some point the collective pulling force of the majority will break #1 and #2 loose from the feature they are snagged on, and they'll snap into place around the tumor.

Based on just this slice, the people labelled 3 may not move, because maybe that's actually an edge. (Tumors don't have to be round - they can be irregular.)

Well, how do the people at #3 decide? Here's the trick. While all this is going on on this slice, the same thing is going on on all the other slices, and springs connect the dots / people across the slices. In other words, we actually have a sphere of dots / people, connected by springs, kind of like an over-inflated balloon, and we let it slowly deflate in three dimensions at once, around the feature in all the slices.

In other words, there is not enough information in the vicinity of any one person to be able to sort out image from noise with certainty. But most of them are nearly right. We just don't know which ones those are. So, within each slice (or region) the people consult with each other, while at the same time they are consulting across regions as well, with a mix of believing their own eyes, and enough humility to know, at some point, to let go and go with the crowd.

This seems like a very simple plan, and no computer is required. It turns out to be a very powerful technique ("algorithm") that does a remarkably good job at sorting out "noise" from "signal" in 3-dimensions, with only trivial programming required.

Each person / dot simply has to pay attention to what it sees ("independent investigation"), but balance that with consulting with neighboring people and, at some point, yielding to peer pressure and moving into line. If the balance of these two competing forces is right, the overall network turns out to be a very powerful analog computer that can solve a problem we have trouble even defining well.

No single person ever needs to "see" everything or see "the big picture" - he just needs to see his neighbors, compare notes, argue for his position, and, if it seems warranted, yield to the majority. If enough different dots do this, coming in from enough different directions at once ("diversity"), and remain independent and yet consulting ("unity in diversity") the algorithm works. The powerful solution "emerges" from each person's behavior.

In image processing this is called an "active contour" (or "snake") technique. It is part of the larger class of techniques called "swarm computing" that is becoming increasingly popular as "the power of crowds" is increasingly being appreciated.
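For the curious, here is a bare-bones two-dimensional sketch of the shrinking-elastic-ring idea (a synthetic toy image and hand-tuned step sizes of my own; real medical implementations are far more careful, and work in 3-D as described above):

```python
# Toy "shrinking elastic ring" contour finder, in the spirit described above.
# A bright disk stands in for the tumor; the red-hatted people become points on a ring.

import numpy as np

# Synthetic 100x100 image: a bright disk of radius 18 centered at (50, 50).
yy, xx = np.mgrid[0:100, 0:100]
image = ((xx - 50) ** 2 + (yy - 50) ** 2 < 18 ** 2).astype(float)

# "Edge-ness" at every pixel = gradient magnitude; each point will try to
# climb uphill on this map (dig in its heels at an edge).
gy, gx = np.gradient(image)
edges = np.hypot(gx, gy)
ey, ex = np.gradient(edges)          # direction of steepest increase in edge-ness

# Start with 40 people/points on a big circle that surrounds everything.
angles = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = np.stack([50 + 40 * np.cos(angles), 50 + 40 * np.sin(angles)], axis=1)

for _ in range(600):
    # Spring force: each point is pulled toward the midpoint of its two
    # neighbors, which smooths the ring and slowly tightens the drawstring.
    spring = 0.5 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0)) - pts
    # Edge force: sample the uphill direction of edge-ness at each point.
    ij = np.clip(pts.round().astype(int), 0, 99)     # columns are (x, y)
    edge_pull = np.stack([ex[ij[:, 1], ij[:, 0]], ey[ij[:, 1], ij[:, 0]]], axis=1)
    pts = pts + 0.5 * spring + 2.0 * edge_pull

radii = np.hypot(pts[:, 0] - 50, pts[:, 1] - 50)
print("final ring radius: %.1f +/- %.1f  (the disk's true radius is 18)"
      % (radii.mean(), radii.std()))
```

No point ever sees the whole picture; each one only feels its two neighbors and the few pixels under its feet, and the ring still settles onto the boundary.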

One area where this could be used is any sort of boundary measurement, or aligning fragments of images to make a coherent overall picture. Examples of these, and my US Patent 5613013 on image alignment using what is effectively a swarm technique, are described on my web site here.

Wade

Saturday, October 27, 2007

How do we get anywhere?


Isn't there some tool that we could use so that our meetings always get us a little closer to where we want to go?

If we're going to get anywhere, we need to learn how to talk to each other. That's a conclusion I keep coming back to. With all these people, why can't we solve our own problems?

So, I come back to the side-point Professor Gary Olson made in class one day. He said that white-boards were proven to be very useful for making meetings get somewhere, but he hardly ever saw faculty or administrators use one when they met.

This is on top of arriving without a clear agenda, and working without minutes being taken of what was said.

Well, that's curious. Why is that? I mean, white-boards are really useful in removing ambiguity and bringing issues to the front where they can be seen by everyone. They help you leave the meeting with a common understanding of what was agreed to and a clear picture of what steps who is taking next.

So, I have to suppose that faculty and administrators prefer to keep issues hidden, prefer to avoid revealing conflict, and are happy to let everyone go off with their own misconceptions of what was decided and who is doing what next. And, I guess, it's OK in their minds that people who weren't at the meeting are now missing part of the picture and walking around misinformed.

So, let's start the "Five Whys" process and see if we can figure out what it would take to overcome this apparent social dysfunction.

So far we have:
  • We have major social problems that aren't being dealt with, locally, at a department or corporate level, regionally, statewide, and nationally, or being dealt with way too late to be effective
  • which is partly because: people don't get anywhere when they meet
  • which is partly because: they don't use obvious tools like agendas, white-boards, and minutes
  • which is partly because: those tools remove ambiguity which clarifies areas of conflict which is disruptive and unpleasant
  • which is partly because: people aren't good at dealing with conflict so they avoid it.
  • which is partly because: what?
I think part of the reason people have trouble dealing with conflict is that they don't think of it as "apparent conflict" and assume that it is "real conflict."

Because they think it's "real" conflict, they also think that the only way to survive is to "win", which means that everyone else must "lose", so they hardly want to be open and honest about their motivations. Most people also assume that everyone else is just like them, so they assume the same reasoning and motivations are behind what everyone else is doing as well.

We get some guidance from the superb book "Getting To Yes", which is about techniques that let the Soviet Union and the US negotiate during the cold war, when they hated and mistrusted each other.

The authors use the example of an orange that two kids are fighting over. Each wants the orange and "needs it" and "must have it."

On investigation of what they would do with it if they got it, one wanted to squeeze it and get the juice to drink, and the other needed the outside rind for some class project.

So, it turns out the "it" they were fighting over wasn't ever made clear enough to reveal that there were two "its" and one orange could satisfy both needs.

So, one problem the authors found is that people tend to jump to conclusions about what they think is the "only way" to do something that could "possibly" work. The conclusions are wrong, but are based on unstated or even unrealized assumptions or different life experience.

When the people can back off of their "positions" ( "I must have that orange!") and go back upstream a step to their "interests" ("I need orange juice!") new solutions suddenly appear to what was an "unsolvable problem."

If you keep on tracking back upwards one step after another, you end up coming back to basic needs, that people need to survive, to eat, to have clothing and shelter, etc. I think any negotiation has to start with the assumption that the goal in life is not the annihilation of the other party (which would be a position), but to figure out how to proceed so that the other party doesn't pose an on-going threat of annihilating me (an understandable and predictable interest.)

Many international conflicts are generated by the belief that the only way the other party will stop being a threat is if it is annihilated entirely, and that there are no other possible solutions to reducing the threat.

I'll look at other properties of rational ways to deal with conflict in other posts. Where I wanted to get in this one was to follow the chain of causality upstream far enough to see hope along one axis. So far, we've found that many of the reasons to avoid discussing conflict and actually resolving it are based in misunderstanding, unspoken assumptions, leaps of judgment about the "only way" something can happen, and perhaps hidden assumptions that the "only way" to reduce a threat to myself is to sabotage or eliminate someone else.

The humility required is being willing to accept that it is possible that somewhere you have leaped to some conclusion and leaped right past another solution you didn't see.

The belief required is being willing to accept that your own survival does not automatically require the elimination of someone else.

In other circles, maybe the realization is that your own wealth, empowerment, and happiness do not automatically demand dis-empowering everyone else. There may be other ways to survive and thrive and be legitimately happy and wealthy.

In fact, as I'll discuss in other posts, both health and wealth are bio-social constructs and if everyone else died, the wealth would be worth nothing and physiological and mental health would be impossible. And, just like our body doesn't have a "super-cell" that all the other cells bow down to and "obey", the planet doesn't need a "super-man" that all other men bow down to and obey. The whole concept of domination is fatally flawed, and has no biological or natural analog. Ecosystems can't make themselves subservient to one component, and the emergent power is so much larger than any component's individual power that subservience would make no sense. There is no "best" part of a mutually-dependent ecosystem.



(Credits: "Dances with Penguins" photo by Fotomom on Flickr; "dinner time" penguin photo uploaded by c-basser on Flickr.)