If we assume that what we're building is, essentially, a massively parallel connectionist computing engine (consciousness) out of people and technology, we get the suggestion that the key roles are:
transparent communication at successively larger scales
coherence-building at successively larger scales, and
transparent interactions ("phase-locked loops") across the components of the system.
Yes, computers will still be required for tracking the trillions of details needed to run a large company today, but that is, in Peter Senge's words, "detail complexity." There's a huge amount of it, but it is relatively simple in nature; the challenge is the sheer volume. Enterprise computing knows how to handle that, at least in theory.
What we are looking for in the next-gen company is the thing that ties it all together, that supports the feedback loops that maintain coherence and build integrity, the same way the circulating thoughts in the brain slowly assemble an "image" out of billions of "nerve impulses" from the retina.
This is "technology-mediated collaboration" and more, so I'll call it "technology-mediated coherence." It is what allows "aperture synthesis" in large radio telescope arrays to act as if they were a single huge instrument and the gaps "don't exist."
This is pretty much what the Institute of Medicine was recommending when it urged a focus on "microsystems" recently (see prior posts on "microsystems"). The point is that a small team (5-25 people) is capable of being "self-managing" if it is simply given the power to do so by having access to information about its own outcomes. This information does not need to be packaged and interpreted at successively higher levels of management and then repackaged and distributed back a month later as "feedback." In fact, that doesn't help much. What really helps is speed. What helps is if the team can see, today at 2 PM, how it has been doing collectively, up through, say, noon. Its members can learn to make sense of the details, and don't need "management" to try to do that for them.
In fact, given the fractal density of reality, and the successive over-simplifications required to get data into a "management report", it is a certainty that we have something far worse than the game "telephone". What will come back down the line from upper management will bear little resemblance to what went up, breeding distrust and anger on both sides.
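A tiny sketch of that "telephone" effect, with made-up numbers: three hypothetical workstations log defects per hour, one station spikes for one hour, and the plant-wide rollup that comes back down the chain barely registers it. All the station names and counts here are illustrative assumptions, not data from the post.

```python
# Hypothetical hourly defect counts for one shift at three stations.
hourly_defects = {
    "station_A": [1, 1, 9, 1, 1, 1, 1, 1],   # hour-2 spike: the clue
    "station_B": [1, 1, 1, 1, 1, 1, 1, 1],
    "station_C": [1, 1, 1, 1, 1, 1, 1, 1],
}

# What the team can see at 2 PM: the worst hour, at which station.
local_view = max(
    (count, station, hour)
    for station, counts in hourly_defects.items()
    for hour, count in enumerate(counts)
)
print(local_view)        # (9, 'station_A', 2)

# What comes back down a month later: one plant-wide average.
total = sum(sum(counts) for counts in hourly_defects.values())
hours = sum(len(counts) for counts in hourly_defects.values())
rollup = total / hours
print(round(rollup, 2))  # 1.33 -- the spike has been averaged away
```

The point of the sketch: each level of summarization is lossy, so by the time the number has gone up and come back down, the detail that pointed at a cause is gone.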
So the role of next-gen IT is to grab hold of "Web 2.0" technology, which allows websites to be both read and written by people: weblogs, wikis, and "social software" that encourages interaction and cooperation, including, gasp, "gossip."
This is the stuff that, in the right climate and context, can be converted into "social capital" and converging understanding by each employee as to what everyone else is doing and why.
Where there can be dashboards, they should sit as close as possible, in both space and time, to the decision-making actors. Lag times are incredibly dangerous, and are a classic source of instability in feedback systems. (Imagine trying to drive a car with a high-resolution TV screen instead of a windshield, showing a fantastically clear picture of what was outside the car 15 minutes ago.)
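The instability claim can be demonstrated in a few lines. This is a minimal sketch, with arbitrary assumed numbers (starting error 10, gain 0.8): the same corrective rule settles quickly when it sees the current state, and oscillates with growing swings when it can only see a stale reading.

```python
def run(delay_steps, gain=0.8, steps=40):
    """Steer an error toward zero using an observation that is
    `delay_steps` ticks old; return the worst error seen recently."""
    history = [10.0]                 # start 10 units off target
    for _ in range(steps):
        # the controller only sees a stale reading of the state
        stale_index = max(0, len(history) - 1 - delay_steps)
        correction = -gain * history[stale_index]
        history.append(history[-1] + correction)
    return max(abs(x) for x in history[-15:])

fresh = run(delay_steps=0)   # sees the current state: settles fast
stale = run(delay_steps=4)   # sees a 4-tick-old state: swings grow
print(fresh < 0.001, stale > 10.0)   # True True
```

The driver-with-a-delayed-windshield is exactly this: the correction lands on a state that has already moved on, so each correction overshoots the last.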
A relevant quote from Liker's "The Toyota Way" (page 94), where he is talking about the problems with large batches and the delays that go with them:
"...there are probably weeks of work in process between operations and it can take weeks or even months from the time a defect is caused until the time it is discovered. By then the trail of cause and effect is cold, making it nearly impossible to track down and identify why the defect occurred."

The hugely complex computation of making sense of such data is what human brains and visual systems are built for, and tuned for, and that machines costing a billion dollars cannot yet replace. Just give people a VIEW into what is happening as a result of what they are doing, and they will, by the miracle of connectionist distributed neural networks, figure out what's affecting what faster than a room full of analysts with supercomputers - in most cases.
That's the role computation needs to take on: close-to-real-time feedback, in a highly visual form, to the workers about the outcomes of the work currently being done. (This is a step up from Lean manufacturing's visual signaling systems, which alert management that something is amiss.)
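As a sketch of the "see how you've done up through noon" idea: a tiny in-memory tally the team itself can query at any moment, with no monthly report in the loop. The class name, outcome labels, and fields are all hypothetical illustrations, not a real system.

```python
from collections import Counter

class LiveOutcomes:
    """Running tally of work outcomes, queryable at any time."""

    def __init__(self):
        self.counts = Counter()

    def record(self, outcome):
        # called as each unit of work finishes
        self.counts[outcome] += 1

    def view(self):
        # the 2 PM glance at the team's own dashboard
        done = sum(self.counts.values())
        rate = self.counts["defect"] / done if done else 0.0
        return {"done": done, "defect_rate": rate}

board = LiveOutcomes()
for outcome in ["ok", "ok", "defect", "ok"]:
    board.record(outcome)
print(board.view())   # {'done': 4, 'defect_rate': 0.25}
```

Nothing here is clever; that is the point. The value is in the latency, not the analytics.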
The "swarm" is capable, like any good sports team, of making sense of "the play" long before the pundits have had a chance to replay the video eight times and "analyze" it. Yes, there is a role for a longer-term, more distant view that adds value.
But what there is NOT is a way to replace real-time feedback and visibility with ANY kind of delayed information summary. All the bases must be covered, and long-term and global impacts will not be instantly visible to local workers -- but workers have to be able to see what their own hands are doing or they'll be operating blind. "Dashboards" with one-month delays cannot cover that gap; too much of the information is stale by the time it arrives. Both are needed: local feedback for local news, and successively more digested, more global feedback for successively larger and more slowly varying views.