Thursday, May 31, 2007

Two more arguments for unity

I discussed in an earlier post some arguments for why it may be a bad idea to put off efforts to deal with large-scope problems "until we have all the smaller-scope ones completed."

This, again, flies in the face of exactly the opposite trend among many business leaders, who have jettisoned concern about long-range planning, or else reduced it to a horizon of 3 months and call that "long-range". And it flies in the face of the advice of many PhD advisors, who train their students to focus on smaller, shorter-term, more "realistic" problems.

The implicit sense is that the total energy and effort required to complete a task gets larger as the scale of the activity gets broader. In mathematical terms, it is assumed that effort to do a credible and useful job is a "monotonically increasing function of scope."

I completely disagree with that, and feel that by the same logic, no one should study astronomy, or even make a map of the stars, until we understand atoms perfectly. Or, perhaps, no one should study sociology and government until we understand everything there is to know about individual people.

It seems obvious, on reflection, that we can learn a lot about people by observing precisely the emergent phenomena around us. We can learn a lot about air by observing the weather, clouds, thunderstorms, and tornadoes, things we would have a hard time "seeing" in a beaker of air, regardless of how well and how long we studied it.

Also, we would have to explain why it is far easier to describe the equations governing water in the pipes of our plumbing than it is to describe molecular quantum mechanics. That, alone, seems to be a counter-example that disproves the hypothesis that larger things must be harder to get a useful handle on.

This is particularly true when large-scale phenomena are actually causal, by all our standard definitions of that word, while the small-scale phenomena of which the large are composed are not. Pretty much any electronic device is an example of that: we rely on the statistical behavior of "current" without actually caring whether particular electrons move or kick back and chill, so long as most of them do what we expect every time.
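To make that concrete, here is a toy sketch (invented numbers, not real device physics): each simulated "electron" drifts only with some probability on a given tick, yet the aggregate "current" comes out stable and predictable every time.

```python
import random

# Toy illustration, not real device physics: each "electron" moves only
# with probability P_DRIFT on a given tick, yet the aggregate current
# is steady -- the large-scale quantity is the dependable, causal one.
random.seed(42)

N_ELECTRONS = 100_000
P_DRIFT = 0.3  # chance a given electron drifts on this tick

for tick in range(5):
    moved = sum(1 for _ in range(N_ELECTRONS) if random.random() < P_DRIFT)
    print(f"tick {tick}: current ~ {moved / N_ELECTRONS:.4f} (expected {P_DRIFT})")
```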

Newtonian and Laplacian bases of description.

These are big words but relatively simple concepts. Newton described things in terms of points, so we might describe a ball's motion by listing where it was at time "t" for t=1, t=2, etc., to whatever precision we desire. To describe anything "large" in size therefore takes a "large" number of such measurements, and is correspondingly expensive and difficult.

Note importantly that the Newtonian method is always, literally, "full of holes" at the times we did not specify. It is an incomplete description, but works for many purposes, especially if we "interpolate" and "extrapolate" to "fill in the missing pieces" and "connect the dots."

That's not the only way we can describe the position of a ball. An alternative method, equally capable of being "complete" to whatever accuracy we desire, is to start by stating where the ball's average position will be over the time period of interest. That's the first data point.

The second data point could be the "standard deviation" or other measure of the variability of the ball's position over the time period of interest. (Or, we could pick as second and third data points the maximum and minimum positions of the ball over the time period of interest.)

Now here's an interesting thing. The way Newton described things, with three data points of the temperature today we'd have the temperature at midnight, at 12:01 AM, and at 12:02 AM, and not know very much that people care about, regardless of how precisely those three measurements were taken. On the other hand, if I tell you the average temperature today will be 83 F, with a low of 55 and a high of 92, I've also told you exactly 3 things, but they contain global information, not local information, and are far more helpful to you in selecting clothes to wear, etc.

This is another counter-example, where a global measure is actually far easier and more useful than an arbitrarily precise local measure.
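Here is a minimal sketch of that contrast, using a day of hypothetical hourly temperature readings (the numbers are invented for illustration). Three local samples taken around midnight tell you almost nothing you care about; three global numbers tell you what to wear.

```python
from statistics import mean

# Hypothetical hourly temperatures (deg F) for one day, midnight..11 PM.
temps = [55, 54, 53, 52, 53, 56, 61, 67, 73, 79, 84, 88,
         91, 92, 91, 89, 85, 80, 75, 70, 66, 62, 59, 57]

# "Newtonian" description: three arbitrarily precise local samples.
local = temps[:3]                     # midnight, 1 AM, 2 AM
print("first three samples:", local)  # tells you almost nothing useful

# "Global" description: three summary numbers for the whole day.
print("mean:", round(mean(temps), 1), "low:", min(temps), "high:", max(temps))
```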

The specification of something, starting with the "low frequency, large scale" average color, then adding successively higher-frequency variations around that base color, is basically how JPEG images are encoded. Again, if a progressive JPEG is downloaded, you may be able to see in the first few seconds, as it emerges from the mist, that it's not what you want, and move on to something else. Meanwhile, viewers of TIFF images are waiting for the top row of pixels to arrive, then the entire second row, then the entire third row, etc. You might need to download most of the picture to see what it is and whether you want it.

JPEGs can be arbitrarily precise, as precise as TIFF images, but that is seldom necessary for what humans do with images.
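A minimal sketch of the coarse-to-fine idea, on a toy 1-D "image" (real JPEG uses DCT coefficients on 2-D blocks; this just uses block averages to show the progression). Each pass doubles the detail, and the very first pass is already recognizable in broad outline.

```python
# Coarse-to-fine refinement in the spirit of progressive JPEG:
# start with one global average, then successively finer block averages.
signal = [3, 5, 4, 8, 9, 7, 6, 2]

def approximation(signal, blocks):
    """Approximate the signal by `blocks` equal blocks of their averages."""
    size = len(signal) // blocks
    out = []
    for b in range(blocks):
        chunk = signal[b * size:(b + 1) * size]
        out.extend([sum(chunk) / len(chunk)] * len(chunk))
    return out

for blocks in (1, 2, 4, 8):  # each pass adds higher-frequency detail
    print(blocks, approximation(signal, blocks))
```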

All the above is one set of arguments for why "large scale" properties are no harder than "small scale" ones, and are often easier, and should not be neglected just because they are "large."

And, for phenomena that are "context dependent", as so much is, it may be far more valuable to us to get the first few "moments" of a distribution (the average, the standard deviation, etc.) than to get the first few data points of the time-series. So, it can be far faster for many real decisions we need to make.

And, in physics, there are conserved properties such as "total energy" and "total momentum" that don't care at all how the underlying pieces rearrange themselves at the local, internal level, so long as the overall total remains constant as seen from outside.
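A toy example, using the standard textbook formulas for a 1-D elastic collision: the two individual velocities rearrange themselves completely, but the totals seen "from outside" don't budge.

```python
# Two bodies collide elastically in 1-D. Their individual velocities
# change, but total momentum and total kinetic energy are conserved.
m1, v1 = 2.0, 3.0    # mass (kg), velocity (m/s)
m2, v2 = 1.0, -1.0

p_before = m1 * v1 + m2 * v2
e_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2

# Standard post-collision velocities for a 1-D elastic collision.
v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)

print("velocities:", (v1, v2), "->", (round(v1p, 3), round(v2p, 3)))
print("momentum:  ", p_before, "->", m1 * v1p + m2 * v2p)
print("energy:    ", e_before, "->", 0.5 * m1 * v1p**2 + 0.5 * m2 * v2p**2)
```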

A completely different case for working from the top down, instead of the bottom up, is called precisely that: "top-down design" and "top-down programming". A few hundred thousand person-years of programming experience have led experts to believe that it is much more effective to describe a problem starting at the top, in the very broadest terms with the least depth, and work our way "down" into successively more detail, than it is to go the other way.

The other way, bottom up, is in fact viewed as the major source of time-consuming "bugs" and conceptual errors that are very hard to resolve and hard to locate. In fact, if an organization ever gets an "Escher waterfall" shape in place and realizes it is flawed, it might simply choose to live with the pain of the flaw, because of the amount that has been invested in "getting the pieces right" so far, the pieces that everyone has adapted to and is willing to accept as "the devil we know" rather than "starting over." At that point, as Zorba the Greek might say, we have "the full catastrophe": a flawed design that no one wants to let go of, even though it is demonstrably broken.

With top-down design, the details rest on the larger and larger contexts, not vice versa. This is great, because the things most likely to change are the details, not the largest contexts. If we rested everything on the details, every time a detail changed we'd have to redo the entire program. If we rest on a top-down hierarchy of contexts, usually all but the very last few, the most detailed, remain constant over the life of the program, and the amount of change required to the code is minimized. Most of the code, the upper levels, remains stable and validated and doesn't need to be touched. In the "bottom up" world, if you change one detail, you probably need to rewrite everything.
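A minimal sketch of that discipline, with a hypothetical report-building task invented for illustration: the top level states the whole job in the broadest terms, and only the leaf functions ever need to change.

```python
# Top-down decomposition: the top level rarely changes; details live
# in replaceable leaf functions. (Hypothetical task, for illustration.)

def build_report():
    data = gather_data()
    summary = summarize(data)
    return render(summary)

# Only leaves like these change when details change:
def gather_data():
    return [55, 61, 73, 84, 92]  # stand-in for a real data source

def summarize(data):
    return {"mean": sum(data) / len(data), "low": min(data), "high": max(data)}

def render(summary):
    return f"low {summary['low']}, mean {summary['mean']:.1f}, high {summary['high']}"

print(build_report())
```

Swapping in a real data source means rewriting one leaf; the validated structure above it stays untouched.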

So, what I'm arguing is that these principles seem to apply as well to descriptions and measurements of our social organizations. Top down metrics may be much easier to do and reveal everything we need to know much faster than trying to get a huge number of detailed data points at the bottom levels.

In my mind, then, the mathematics and science argue strongly for working top down, and getting the large conceptual pieces resolved before worrying about the details, not the other way around. This progression seems, also, to match up with what Fisher and Ury argue in "Getting to Yes": that our problems only seem intractable because we're trying to resolve them at the most detailed level of "positions", when we could and should move up a few levels to "interests", where it is far more likely that we can find common ground.

For these reasons, and from the earlier discussion of fraying and gaps in human responsibility for synthesized tasks, among many others, I urge exploration of the "larger issues" at least in parallel with the "smaller" ones.

There are two final reasons I'll add to the mix.

First, although scientists tend to forget it, the entire enterprise of science is a social entity, and, as scientists seem always shocked to rediscover, the enterprise rests on a political and social matrix in which it is embedded.

Put most simply, if the social interests and the scientific interests clash too much, it is the scientists who will be out of jobs, not the society. If the society collapses politically, or has a global thermonuclear war or global biological war, the rest of science becomes moot. It doesn't matter how precise you are when you're dead.

There is, in other words, some timetable, some urgency, to getting sufficient data together to make some very large, very important decisions that will need to be made soon and that will dramatically affect us all. One is how to respond to the rising global epidemic of drug-resistant tuberculosis, and what trade-off civil liberties should have against the right of "the public" to be protected from people who are carrying infectious diseases. Ditto for AIDS. What we should do about the "middle east problem" is another.

We don't have time for Science to analyze molecules sufficiently well to tell us who to vote for in 2008, and that won't happen regardless of how long we wait. The data and factors and variables of interest to us don't even exist at the molecular level. The universe is not deterministic upwards, as physics has finally shown us.

So, if we have some hard, global decisions coming up, we cannot wait for a bottom-up assembly of concepts and fragments of knowledge to succeed, because even if it were possible to happen, which it seems not to be, it would take "longer than we have."

We have, maybe, a decade or two to decide our fate, in some rather permanent and irreversible ways.

Given all the above, this argues that at least some effort should be given to looking for a "top down" approach to understanding how things work and what our options are. In that conclusion, I find myself in complete agreement with the teachings of the Baha'i Faith, which I will close with, as quotes from http://www.bahai.org.

But First, Unity

Is unity a distant ideal to be achieved only after the other great problems of our time have been resolved?

Bahá’u’lláh says the opposite is the case. The disease of our time is disunity. Only after humanity has overcome it will our social, economic, political, and other problems find solution.

Today, several million people around the world are discovering what He means. We invite you to explore His message with us.

I didn't set out to "prove" that the Baha'is are "right," and that is not why I raise the issue now. I raise it because the group has been focused for 150 years on precisely this core issue of "unity in diversity," the one the rest of academia is finally recognizing, and has studied firsthand, by direct experience, what it takes to make that work in various parts of this actual planet we live on. That experience is hard-won, and we don't have time to replicate it.

In that regard, it seems "due diligence" to at least read what they have to say.

W.


