Wednesday, July 04, 2007

A question of baseline


One possible scientific working hypothesis about religion is that it is simply remarkable how many people are delusional. Another is that there is something under all that smoke, regardless of how poorly it has been resolved and how artfully it has been decorated.

Given my understanding of humans, feedback loops, and psychology, I can see the power of shared myths to persist and feed and grow, as effectively a living thing. On the other hand, given what I've learned about the computational power of massively parallel "connectionist" architectures, neural networks, and human and computer vision models, I tend to think that there is indeed a very real potential for emergent power in crowds to detect signals that any individual would miss.

A species, as a living thing, may perceive a different world - dimly, but correctly - that is not accessible to individuals in the species, just as your brain perceives a world that would not make any sense to an individual neuron. If you and your neuron could meet for coffee, there's not much you could talk about in common, except for things like the problems with control, and how hard it is to get good help these days. Taxes, defense policy, immigration, and college applications are simply not sensible on the scale of one neuron.

Under a slight extension of the general Cosmological Principle ("there is nothing special about where we are, when we are, or what scale we are"), we have to assume that this principle of "insensible larger concerns" is true as well for people-level thingies (us). That is, there may be a lot going on that not only do we not know about but that, given our size, we will never know about. In fact, if we use some sort of iterative reasoning and apply this Cosmological Principle yet again, there must also be some things that even earth-scale species, regardless of how electronically wired in the future, will never be able to comprehend. And so on, who knows how far upwards. Maybe in this universe even galactic-scale (10^10 stars) thingies will have galactic-cluster events that they will never be able to comprehend.

So far, I think that is pretty solid scientific reasoning. In short, it is a more reasonable hypothesis that we humans are permanently shut out of certain knowledge, due to our finite size, than that we're almost gods. Yes, we can wire up the blogosphere and let the huge connectionist engine start cranking and discovering Things that it can respond to, but it can never really tell us fully what those Things are, any more than we can explain a 1040 tax form to a neuron. It's a simple bandwidth limit. None of us has 500 years to listen to the details, for starters, so anything that takes over 500 years to explain is out. It's a very strong assertion to say that that set is empty, and a much weaker one to say that there may be stuff in that set.

These days, most of us cannot and will never grasp things that take more than 5 years to explain, except in very narrow tertiary specialty areas. In business and politics, sometimes it seems that 15 minutes is the cut-off, and any concept that takes over 15 minutes to explain is simply in the "insensible" or "incomprehensible" set. I think that political advertising assumes that anything that takes over 30 seconds to explain to the public might as well not even be attempted.

This cut-off frequency on the full spectrum of knowledge must in turn result, by signal theory, in some rather major distortions in what we think we see with the limited capacity we do have. A classic result in radio astronomy, for example, was Ron Bracewell's realization (in the 1950s, I think) that the finest details that could be resolved, even with infinite observations and averaging out the noise, were limited by a resolution of λ/D, where λ is the wavelength being monitored and D is the diameter of the radio telescope "dish" or "grid" or "lens" or "mirror" being used. Similarly for eyeballs - if you want eyes like a hawk, you need a wide-diameter pupil, and humans just can't go there with our tiny eyeballs.
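
To make the limit concrete, here is a minimal sketch in Python (the instrument numbers are illustrative, not measurements of any particular telescope or eye):

```python
import math

def angular_resolution_rad(wavelength_m, aperture_m):
    """Rayleigh diffraction limit for a circular aperture: theta ~= 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m

# A 64 m radio dish observing the 21 cm hydrogen line (illustrative numbers):
theta_dish = angular_resolution_rad(0.21, 64.0)

# A human eye: ~550 nm green light through a ~5 mm pupil:
theta_eye = angular_resolution_rad(550e-9, 5e-3)

print(f"radio dish: {math.degrees(theta_dish):.3f} degrees")
print(f"human eye:  {math.degrees(theta_eye) * 3600:.0f} arcseconds")
```

The only ways to shrink λ/D are to shorten the wavelength or widen the aperture - hence the hawk's larger eye, and hence radio arrays spread across continents.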

So, the question comes then: are the "details" important or negligible? This is worth stopping to ponder. I spent 5 years at the Parke-Davis pharmaceutical R&D division generating cross-sectional images of blood vessels to assist in research on coronary disease. With microscopes, as with people, we had a choice of picking a high-power lens and seeing details down to the staining structure of individual cells, or a low-power survey lens and seeing the big picture, but we couldn't do both. That turned out to be a critical problem, and it gave wrong answers. We needed both the details and the big picture to grasp what was going on, and how, and why. So I developed techniques to take many high-resolution images and assemble them into a wide field-of-view montage, and then we finally had something the computer could analyze to get meaningful results that corresponded to biological truth.
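
As a flavor of the technique, here is a toy sketch of the montage step, assuming equally sized, pre-aligned tiles (the hard parts of the real work - aligning overlaps, matching illumination, blending seams - are omitted):

```python
import numpy as np

def assemble_montage(tiles, grid_rows, grid_cols):
    """Place equally sized tiles (a row-major list of 2-D arrays) into one image.

    A toy sketch: real montage work must also align overlapping tiles,
    correct illumination differences, and blend the seams.
    """
    th, tw = tiles[0].shape
    montage = np.zeros((grid_rows * th, grid_cols * tw), dtype=tiles[0].dtype)
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, grid_cols)
        montage[r * th:(r + 1) * th, c * tw:(c + 1) * tw] = tile
    return montage

# Usage: six fake 256x256 "high-power fields" arranged 2 rows x 3 columns.
fields = [np.random.rand(256, 256) for _ in range(6)]
big_picture = assemble_montage(fields, grid_rows=2, grid_cols=3)
print(big_picture.shape)  # (512, 768): full detail AND a wide field of view
```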

The image at the top of this post (repeated here) is one such montage, which I made in 1995.

There are some other examples of that sort of work on my quantitative biomedical imaging website, at http://www-personal.umich.edu/~schuette.

So, at least in that one case, yes, the details mattered. Hmm. OK, then in at least some cases, the details matter and change the answer. Do we know anything at all about which cases those might be? Well, they will certainly include cases where the details add up to more than the low-resolution, wide field-of-view facts. That could easily include feedback loops, where tiny details (like, say, a persistent 5%/year drop in the value of the US dollar) add up or compound over the span in question, and end up dominating the computation of the final value in your dollar-denominated bank account.
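
A tiny worked example of how a "negligible" detail compounds (illustrative numbers only):

```python
# A 5%/year decline looks like a detail in any single year, but compounded
# over 30 years it dominates the outcome.
value = 1.00
for year in range(30):
    value *= 0.95
print(f"after 30 years: {value:.2f}")  # ~0.21 -- the "detail" erased ~79% of the value
```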

The distortion caused by cutting off a portion of the spectrum is also not negligible. You can't just chop off a signal and get the middle part you hoped for; instead you get large-scale distortion that is, mathematically, the Fourier transform of your chopping function.

Here's an example:

Skipping the math: if you try to "look at" a point source, like a star, through a hole in a piece of metal, then even if the point source "fits" in the hole, what you see will be distorted as shown on the bottom. You'll "see" a sort of diffuse, bell-shaped source in the middle, surrounded by a dark ring, then a bright ring, then another dark ring, then yet another, dimmer bright ring, and so on.
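
If you want to watch those rings emerge from the math, here is a minimal numerical sketch: in the far field, the diffraction pattern is the squared magnitude of the Fourier transform of the aperture.

```python
import numpy as np

n = 512
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (x**2 + y**2 <= 20**2).astype(float)  # circular hole, 20-pixel radius

# Far-field (Fraunhofer) pattern: |FFT of the aperture|^2, shifted so the
# central lobe sits at the center of the array.
intensity = np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2
intensity /= intensity.max()

# A cut outward from the center shows the bright central lobe falling to a
# dark ring, then a dimmer bright ring, and so on -- the pattern described above.
print(np.round(intensity[n // 2, n // 2:n // 2 + 48:6], 4))
```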

You cannot get around this, known in the optical domain as "diffraction". It's a law of physics and signals. If you look up diffraction in Wikipedia, you'll see another example of what you see looking through a square hole:

OK, wow. So the size and shape of the hole or "aperture" through which you are trying to look can dramatically change what you "see" or directly perceive or detect with film or an imager or a radio signal detector.

Now, that's not a fatal problem if you know your distortion pattern, because you can "back it out of the equation" and computationally figure out what shape actual signal must have been there to generate the signal you "measured".

That works for optical and radio astronomy, and for optics in general. For microscopes, the distortion pattern is the "PSF," or point-spread function. You can use the magic of Fourier transforms to undo the distortion and get a clean image of what you'd see without it - mostly.
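
A minimal sketch of that un-distortion step, in the Wiener-filter style (the noise_floor constant here is my own illustrative assumption, added to keep the division stable; real deconvolution tunes it to the measured noise):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_floor=1e-3):
    """Undo a known blur in the Fourier domain (a Wiener-style sketch).

    Dividing by the PSF's transform inverts the blur; noise_floor keeps the
    division from exploding at frequencies the PSF barely transmitted --
    which is why the recovery is only ever "mostly."
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_est = G * np.conj(H) / (np.abs(H) ** 2 + noise_floor)
    return np.real(np.fft.ifft2(F_est))

# Usage: blur a toy image with a known 3x3 box PSF, then recover it.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
psf = np.zeros((64, 64))
psf[:3, :3] = 1.0 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))
print(np.abs(wiener_deconvolve(blurred, psf) - image).mean())  # small residual
```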

But here's our problem as a society. The same effect, it seems to me, applies to looking at the world through a finite aperture, or gap, defined by the limited set of "scales" and time-frames we are willing to observe. As humans, we make observations on smaller and smaller scales of time, and on magnitude scales that tend to be about the same size as us, whoever "us" is (person, corporation, nation, culture).

But by blocking out larger-scale information (what we might call "context") and smaller-scale information (what we might call "negligible details"), we are bound, by the laws of physics, to directly perceive a distorted signal. The problem is, we don't realize we need to undistort it before paying attention to what it says.

How distorted will it be? Well, look at the point source viewed through the square hole. Pretty distorted.

The same thing is true of one-sided "holes": if we simply block out everything to the "right" of a point source with a sheet of metal, we'll still get distortion near the edge.

Hmm. So, if we don't realize we're cutting off all signals above a certain scale (slowly varying, long-wavelength) and below a certain scale (rapidly varying "details" of very short wavelength), we won't realize that we cannot help but see a distorted picture of the real world out there - the one we'd see without such distortion.

That brings me full circle, back to pondering what sort of "receiver characteristics" a massive array of people-shaped sensors, observing over 1,000 years, might have. Even with no further details, we can say something about what kinds of signals and patterns and frequencies it would be able, in theory, to "pick up" or detect, what it would be blind to, and what sorts of distortions it would necessarily have.

If that social-shaped antenna "sees" something it resolves into a pattern it calls "God," or some of the other aspects of "religion," we need to reflect carefully before simply dismissing that data point as "noise." It is not at all obvious that it is receiver noise, and it is the worst abuse of science to discard a data point simply because we don't like it, or because it doesn't fit our preconceived notion of how things are and what "should" be there.

Besides, we don't have much opportunity to do such long-baseline (1000 year long) observations ourselves, so we should treasure the few we do have.

We know a few things with a fair amount of certainty. We know λ/D will be a limit on the resolution of details, unless it is computationally broken using "super-resolution" techniques. That means, in lay terms, that the wider the baseline diameter, the more we can potentially "see." The more diverse the observing group is - the more widely it is spread out along any dimension - the better the group can triangulate in on something and resolve how far away it is from us. A very wide baseline will let us sort out foreground from background. A totally uniform set of sensors will have zero resolution and be totally blind to telling foreground from background. Diversity matters, in a purely information-capture sense.
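
Here is a small sketch of why baseline width buys depth perception, using the small-angle parallax approximation (the distances and sensor spacings are illustrative):

```python
def parallax_rad(baseline_m, distance_m):
    """Apparent angular shift of a source seen from the two ends of a
    baseline (small-angle approximation: angle ~= baseline / distance)."""
    return baseline_m / distance_m

near, far = 1.0e3, 1.0e4  # a foreground source and one ten times farther away

for baseline in (0.1, 100.0):  # tightly clustered vs. widely spread sensors
    split = parallax_rad(baseline, near) - parallax_rad(baseline, far)
    print(f"baseline {baseline:>6.1f} m: foreground/background split {split:.1e} rad")
# The wide baseline separates foreground from background a thousandfold
# better; identical co-located sensors see the two as one blur.
```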

The question for Science, with respect to Religion, then, it seems to me, is not to be obsessed with the persecution of Galileo or Iowa's decisions about evolution, but to ask what this irreplaceable observational unit in our heritage may have seen that we, here, now, looking over a few-year window, could never possibly see.

Even lousy sensors, such as the cells in our retina, can give you a good picture of the world if you process the signal correctly, which is what much of our brain is hardwired to do. There is value in those low-grade signals, if processed cleverly and assembled into a big picture.

It's not a question of "right" versus "wrong". It's a question of baseline.

There is no scientific justification for discarding one of our longest-baseline observations, regardless of how "bright" or "technical" the individual sensors in that array were compared to us. We are stuck in the narrow "now," and they have the advantage of a several-thousand-year baseline. Nothing we can do with gigahertz processors and PhDs can overcome that physical law of signal processing. A crowd may see things our most brilliant scientists missed.

Wade
