Tuesday, November 08, 2011

On restructuring government and power on earth

I think you (Yangbo) touched a key issue, and a systems-thinking one, in the comment that "Of course, no system of governance can be perfect but can nonetheless be amended through continuous improvement."

The twin questions of what a "more perfect" (!) system of governance would look like, at all, let alone how to get from here to there, are central. In general I'm biased towards trajectories that involve evolution, not revolution, given the long and inglorious track-record of revolutions being extremely expensive in lives and wealth, becoming co-opted, going bad, and ending up being as bad or worse than what they replaced.

More precisely, ARE there systems of governance which can, in fact, be amended through continuous improvement, and which do not ultimately become corrupted, self-serving, and prone to viewing efforts at improvement as some type of "enemy action"?

If we don't understand the mechanisms by which systems "go bad", it's hard to imagine we can make one that won't. This is indeed a systems thinking and modeling question, I think, and this group is a great one to reflect on that.

Much more specifically, all personalities, morality, and hidden motives and agendas of individuals aside, what are the STRUCTURAL feedback loops which comprise "governance"? Which structural loops are crucial to an on-going process of incremental improvement? And in what ways have these loops failed to remain load-bearing as the scale ramps up?

I'll assert without proof that all large organizations face this same question.

I think a wonderfully profound and delightful place to start that discussion is John Gall's book "Systemantics: How Systems Really Work and How They Fail".

A quote from the introduction of that book captures the thought: "Reformers blame everything on 'the system' and propose new systems that would - they assert - guarantee a brave new world of justice, peace, and abundance. Everyone, it seems, has his own idea of what the problem is and how it can be corrected. But all agree on one point - that their own System would work very well if only it were universally adopted. The point of view espoused in this essay is more radical and at the same time more pessimistic. Stated as succinctly as possible: the fundamental problem does not lie in any particular System, but rather in Systems As Such."

Of course, Gall goes on to point out that we are surrounded with previous efforts to build what TS Eliot would call "systems so perfect that no one will need to be good", and that our landscape is now dominated with the still-living artifacts of those "solutions" which have now, in fact, become "the problem". Rather than repeating that mistake yet one more time, hoping for different results, Gall suggests maybe we should understand better what exactly it is about systems we've misunderstood.

Wade Schuette: One thing about systems that was made clear by Jay Forrester and illustrated by Peter Senge's "beer game" is that it is not SUFFICIENT for human beings to be well-intentioned if they have a limited range of perception of the distant ("system") effects of their actions and are caught in a feedback loop.
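The beer-game dynamic can be sketched in a few lines of code. This is a minimal, illustrative model (not the actual game, and all parameters are invented): a single manager uses a locally sensible "order what was demanded plus the inventory gap" rule, but ignores orders already in transit - exactly the misperception of feedback Sterman documents - and a two-period shipping delay turns one step-up in demand into a persistent oscillation.

```python
def simulate(periods=30, target_stock=20, delay=2):
    """Toy one-echelon supply chain with a shipping delay.

    The ordering rule is well-intentioned: cover this period's demand
    plus the gap between target and current inventory. Crucially, it
    IGNORES orders already in the pipeline -- the classic misperception.
    """
    inventory = target_stock
    pipeline = [4] * delay                      # orders already in transit
    demand = [4] * 5 + [8] * (periods - 5)      # one permanent step-up
    orders_placed = []
    for d in demand:
        arriving = pipeline.pop(0)              # delayed delivery arrives
        inventory += arriving - d               # ship out this period's demand
        order = max(0, d + (target_stock - inventory))   # anchor-and-adjust
        pipeline.append(order)
        orders_placed.append(order)
    return demand, orders_placed

demand, orders = simulate()
print("peak demand:", max(demand))   # 8
print("peak order :", max(orders))   # 16 -- double the actual step in demand
```

The demand signal rises once, from 4 to 8, and never changes again; yet the orders swing between 0 and 16 forever. No one in the loop is stupid or malicious - the structure itself amplifies.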

And John Sterman in Business Dynamics notes that essentially all humans, even MIT grads (!), have very poor perception of what parts of their environment are due to feedback from their own prior actions.

I'll toss in my own two cents here, and add that the perceptual problems in LARGE organizations will always occur because of a fact left out of most models, namely:

*** Reality is scale-dependent ***

My background is in physics, and I side with Einstein, who is almost always misunderstood and misquoted on this point: he affirmed that there WAS INDEED an underlying reality, but that even perfect unbiased observers would perceive it differently due to space-time curvature, AND SO THEY WOULD NEED TO CORRECT FOR THAT IN ORDER TO COMPARE NOTES AND SEE THAT THEY AGREED ON THE UNDERLYING SINGLE REALITY.

So, assume that all humans have a finite capacity for information, a finite-radius of perception, and even within that must vastly oversimplify the flood of data to come up with one or more mental models they will attempt to "snap observations to" in order to make sense of what is going on and respond to it. (Basic cybernetic behavior.)
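That "snap observations to a mental model" move can be made concrete with a toy nearest-prototype classifier (the model names and coordinates here are invented purely for illustration): whatever nuance the observation carried is discarded the moment it is snapped to the closest stored model.

```python
# Each observer carries a SMALL repertoire of mental models (prototypes).
# An incoming observation is compressed to whichever prototype is nearest;
# everything in between the prototypes is lost.

PROTOTYPES = {
    "routine": (0.0, 0.0),   # invented coordinates for illustration
    "crisis":  (1.0, 1.0),
}

def snap(observation):
    """Return the name of the nearest mental model (squared distance)."""
    def dist(proto):
        return sum((a - b) ** 2 for a, b in zip(observation, proto))
    return min(PROTOTYPES, key=lambda name: dist(PROTOTYPES[name]))

print(snap((0.2, 0.1)))   # routine
print(snap((0.9, 0.7)))   # crisis
```

Two observers with different prototype sets will snap the SAME observation to different models - which is the cybernetic seed of the "out of touch with reality" complaint discussed below.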

One example of scale dependence would be the question of whether a molecule of H2O ("water") is "free" to move or "captive". On the scale of molecules, the molecule is free to move about. On the scale of plumbing, the "water" in the "pipe" is captive to go from the water-tank on the hill OUT the faucet. Both are true, but they are different. It is wrong-headed to ask "WHICH is true?" It is right-headed to ask "How can both of these be true, and what do we need to correct for before we attempt to compare and align them?"

As organizations become larger, EVEN WITH PERFECT NON-DISTORTING communication by levels of "management", the nature of reality at the top must diverge from the nature of reality at the bottom, because they operate on different scales of space and time.

Since this effect isn't recognized, it leads to increasing friction between "management" and "front line staff" in which each group views the other as being increasingly "out of touch with reality". (Which is true, in a sense.)

It has nothing to do with "intelligence". Here's a more specific example - a "hybrid image" which has the property that if you view it from normal screen distance it is "clearly" Albert Einstein, yet if you view it from across the room it is "clearly" Marilyn Monroe.
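The mechanism behind hybrid images is spatial frequency: fine detail (high frequency) dominates up close, and only coarse structure (low frequency) survives distance. A 1-D sketch of the same effect, with signals standing in for the two faces (everything here is illustrative):

```python
import math

# A slow "Marilyn" signal plus a fast "Einstein" signal, superimposed.
N = 64
slow = [math.sin(2 * math.pi * t / N) for t in range(N)]              # low frequency
fast = [0.5 * math.sin(2 * math.pi * 16 * t / N) for t in range(N)]   # high frequency
hybrid = [s + f for s, f in zip(slow, fast)]

def blur(signal, window=8):
    """Moving average -- a stand-in for viewing from across the room."""
    return [sum(signal[i:i + window]) / window
            for i in range(len(signal) - window)]

# After blurring, the hybrid is indistinguishable from the slow component:
# the fast component has averaged itself away.
err = max(abs(h - s) for h, s in zip(blur(hybrid), blur(slow)))
print("residual high-frequency content:", round(err, 6))
```

Both signals are fully present in the hybrid the whole time; which one you "see" depends entirely on the scale at which you sample it. That is the organizational point in miniature.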


If we have any hope of building very tall structures in social space (i.e., corporations, governments, etc.) then we should recognize that this effect will come into play RAPIDLY, and figure out how to make load-bearing feedback loops that take it into account.

Right now, what happens is that as the top and the bottom of the organization diverge in scope, animosity, blame, and finally fracture occur. Pretty much every time, and pretty much everything we've ever attempted to build to large scale has collapsed along this axis.

Designing social architectures to take this fourth dimension into account will require computer modeling, because we as humans are not very good at 3-D geometry, let alone 4-D dynamic feedback geometry.

The only clean solution I can see to this type of fractally complex problem is precisely encapsulated in how fractal shapes emerge from very simple recursive rules. Namely, reverse the process. Seek eigenvectors, as it were. Seek simple recursively-stable governance structures such that at each level the problems remain "the same shape." Then we break the height constraint.
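The "same shape at every level" idea can be sketched as a single recursive rule (the structure and names are invented for illustration): a unit is either a person or a group of units, and every level summarizes itself with the SAME rule, so a level above never has to understand anything of a different shape - no matter how tall the structure grows.

```python
# One recursive rule, applied identically at every scale. Adding a level
# of hierarchy changes the numbers but never the SHAPE of the summary,
# which is the self-similar ("fractal") property being suggested.

def summarize(unit):
    """A person is a string; a group is a list of units of the same shape."""
    if isinstance(unit, str):                    # a front-line person
        return {"members": 1, "depth": 0}
    reports = [summarize(u) for u in unit]       # each sub-unit, same rule
    return {
        "members": sum(r["members"] for r in reports),
        "depth": 1 + max(r["depth"] for r in reports),
    }

team = ["ann", "bo", "cy"]
division = [team, ["dee", "ed"]]
company = [division, [["fay"], ["gus", "hal"]]]
print(summarize(company))   # same dict shape regardless of height
```

Because every level emits the same small summary, the "radius of perception" problem stops compounding with height - which is the sense in which a recursively-stable structure breaks the height constraint.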


Related links:

More on Hybrid Images (like "Marilyn Einstein")

 "Hypnotized in High Places"

"Why we have so much trouble seeing"

Blindness at the Top  (bandwidth issues in central planning for crisis management)

Unity and adaptation  (design of control systems)

Unity and adaptation (part 2)

Why are so many flights delayed?  (system factors leading to blame and conflict)

Baha'i Faith principles for developing a world government that would work as intended

Why Blind and Stubborn Management is not a winning hand

Surprising case of Authority with listening - the US Army Leadership Manual
