Showing posts with label cybernetics.

Monday, July 02, 2007

The power of delusion -- genetic causality

A dramatic event was reported this week, if we are to believe it: official recognition of the fact that human genes co-operate as complex systems, not as some sort of "one gene, one function" machine tools.

Here's the heart of the New York Times article today (7/2/07) by Denise Caruso, identified
as follows: "Denise Caruso is executive director of the Hybrid Vigor Institute, which studies collaborative problem-solving. E-mail: dcaruso@nytimes.com."
A Challenge to Gene Theory, a Tougher Look at Biotech

The $73.5 billion global biotech business may soon have to grapple with a discovery that calls into question the scientific principles on which it was founded.

Last month, a consortium of scientists published findings that challenge the traditional view of how genes function. The exhaustive four-year effort was organized by the United States National Human Genome Research Institute and carried out by 35 groups from 80 organizations around the world. To their surprise, researchers found that the human genome might not be a “tidy collection of independent genes” after all, with each sequence of DNA linked to a single function, such as a predisposition to diabetes or heart disease.

Instead, genes appear to operate in a complex network, and interact and overlap with one another and with other components in ways not yet fully understood. According to the institute, these findings will challenge scientists “to rethink some long-held views about what genes are and what they do.”

[T]he report is likely to have repercussions far beyond the laboratory. The presumption that genes operate independently has been institutionalized since 1976, when the first biotech company was founded. In fact, it is the economic and regulatory foundation on which the entire biotechnology industry is built.

But when it comes to innovations in food and medicine, belief can be dangerous.

Overprescribing antibiotics for virtually every ailment has given rise to “superbugs” that are now virtually unkillable.

The principle that gave rise to the biotech industry promised benefits that were equally compelling. Known as the Central Dogma of molecular biology, it stated that each gene in living organisms, from humans to bacteria, carries the information needed to construct one protein.

The scientists who invented recombinant DNA in 1973 built their innovation on this mechanistic, “one gene, one protein” principle.

Because donor genes could be associated with specific functions, with discrete properties and clear boundaries, scientists then believed that a gene from any organism could fit neatly and predictably into a larger design — one that products and companies could be built around, and that could be protected by intellectual-property laws.

In the United States, the Patent and Trademark Office allows genes to be patented on the basis of this uniform effect or function.

In the context of the consortium’s findings, this definition now raises some fundamental questions about the defensibility of those patents.

“We’re learning that many diseases are caused not by the action of single genes, but by the interplay among multiple genes,” Ms. Caulfield said.

Even more important than patent laws are safety issues raised by the consortium’s findings. ...

“Because gene patents and the genetic engineering process itself are both defined in terms of genes acting independently,” he said, “regulators may be unaware of the potential impacts arising from these network effects.”

With no such reporting requirements, companies and regulators alike will continue to “blind themselves to network effects,” he said.


Now, the field of "System Dynamics", celebrating its 50th anniversary this week, is devoted to studying how to describe, analyze, and design complex systems made up of many components interacting in "non-linear" ways -- which is to say, interacting so that any given "function" is carried out by many different components acting in concert.

This property, which I've been calling a "scale-invariant" design principle, can be found at every level of life, and in any computer system: from cellular components to genetic "circuits", to humans on a sports team or in an office, to scientists themselves doing research, to the role individual corporations play in the ecology of the economy.

The big question in my mind isn't really that genes interact and cooperate in getting their chores done -- it's that our best researchers took 31 years to figure this out, working together, in the face of what is sure to be seen, in hindsight, as overwhelming evidence that it is true.

This gets me back to yesterday's post on "The Power of Yarn", and the single sentence in the Yarn Harlot's story that captured its essence for me: "There are some truths. Things that just are the way they are, and no amount of desperate human optimism will change them."

One of these truths is that living things operate in complex ecologies, not designed to make life easy to analyze. Another such truth is that "feedback is important" and that, again quoting the Yarn Harlot,
"See how 10 is bigger than 9? See how there is no way that 10 can be made smaller than 9?"
I've been asserting almost daily that the "scientific method," as practiced, has a major weakness: it focuses our attention on separable parts, and on analysis based on the General Linear Model, which critically assumes that causality is not circular - that is, that there are no feedback loops. Unfortunately for those who wish for such simplicity, Life is dense with such feedback loops, if not actually defined by them.
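To make the contrast concrete, here is a minimal sketch in equations - my own notation, not anything from the article or the consortium. The General Linear Model puts all the causes on one side and the effect on the other, with no path back; a feedback system lets each variable appear on both sides:

    y = X\beta + \varepsilon
        (GLM: causes on the right, effect on the left, no loop back)

    \dot{x}_1 = f_1(x_1, x_2), \qquad \dot{x}_2 = f_2(x_1, x_2)
        (feedback: each variable helps determine the other's rate of change)

Fit the first kind of model to data generated by the second kind of system, and the "effect of each part, acting alone" you estimate can be an artifact of the loop.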

It is an astonishing fact of life, which the Times article reveals, that the desire for life to be simpler is so powerful that it can cause 10,000 "trained" scientists, with PhDs, to take 30 years to observe, finally and collectively, what others outside their mutual-blindness-field already knew.

As I've said, textbooks such as "Feedback Control of Dynamic Systems" are in their 5th editions in Control System Engineering, but biologists, and much of public health's biomedical research community, discount that literature to the point of invisibility and effectively treat it with contempt. To them, this literature does not exist. When seen, it "comes as news to them", and is promptly forgotten, because it conflicts with the shared myth of their culture, and cultural myths always win out over boring contrary evidence.

Science, as an enterprise, as practiced by real people in the real world, is not immune or exempt from such behavior. I really must tip my hat to the late Dorothy Nelkin, who gave a graduate seminar at Cornell, back in the 70's or so, on "The Sociology of Science", for awakening me to this fact - a fact which, to someone trained as a physicist, was "news to me."

Similarly, Science, as an enterprise, and Medical Science as well, should not be astonished, but often are, that people outside their internally-blinding-fields have less regard for the collective ability to discern truth than the scientists inside the myth-field would expect. In fact, it sometimes appears from outside that the "scientific method", as practiced, produces a type of "idiot-savant" who can see with tremendous power along such a narrow trajectory that they have almost complete peripheral blindness. Their history of crashed theories and trail of mistaken certainties are painfully evident to outsiders, but almost invisible from within.

If confronted with the trail of past casualties of the "scientific method," we get the response "see, it works!" - when, as with biology, it takes 30 years before they get around to being forced to see something that makes their lives more inconvenient and part of their training irrelevant or impotent. Comfortable delusion wins out, especially if it is shared with everyone nearby and only challenged by distant outsiders who are clearly ignorant fools.

So, yes, it is true that some biologists have started to realize that, in some cases, Life involves complex systems and feedback. Perhaps in another 30-50 years this will be dealt with, and, golly, they might realize that feedback crosses the vertical hierarchy, and that "local" events may in fact be determined by "distal" factors or even social factors. But I won't hold my breath, because (a) I can't hold it that long, and (b) this fact would be so inconvenient, and such a problem, that it will find some way to be rejected yet again for another 30 years.

Yesterday, somehow prompted by doing the Times Sunday crossword puzzle, I came across a history of how the US military stubbornly refused to see that airplanes could possibly damage ships at sea - a fact that flew in the face of existing "doctrine." Just as Semmelweis was ostracized and removed for his myth-challenging assertion that it was doctors' dirty hands that were causing women to die in labor and surgery, so Billy Mitchell was court-martialed for showing the military that their official doctrine had feet of clay.

It is a little puzzling that very good researchers - who wouldn't think of peeking at the identifiers of samples in a double-blind experiment, precisely to defend against bias - can operate in a world with such huge, collective bias against certain ideas and be oblivious to it. They resist even the meta-idea that such bias exists, and that they, caught up in that non-level playing field, have a huge effective bias affecting their results that they are unaware of and are not properly countering.

If they knew it was there, yes, they would adjust for it. I love scientists. Part of my heritage is science. They're good researchers, but they're simply not familiar with the power of context to focus and blind and bias their very own selves to facts that are trying to leap off the page. Stephen Jay Gould documented much of the power of this effect so well in The Mismeasure of Man, but most scientists haven't read that, or think it doesn't apply to them because "they're very careful."

This is the heart of all the work in high-reliability systems as well-- how to overcome collectively formed mental models and myths and paradigms that have taken hold and are now blinding everyone to facts they should be seeing, but aren't.

Well, maybe, with computer modeling and the power of interactive animations, researchers may realize at last that bias comes in many sizes, and that the larger models are almost as hard to see, for those embedded within them, as gravitational waves.

It's not just scientists who are prone to this, but many of the rest of us have a little more humility, or experience, and realize our judgement is not 100% reliable. Scientists, once they have checked off the boxes within their own tiny trajectory that has now become their entire world, seem, collectively, to lack such humility - sort of an iatrogenic side-effect of the PhD process and of hanging around a very non-diverse crowd that shares the same viewpoint.

These silos of tertiary specialization are the source of much friction, particularly if it is not recognized that the distortion of the perspective of the silo is causing the blindness.

More on this in some later post. It's too important to breeze by, and core to the frustrating battle between religion and science over large-scale social processes.

This is the challenge all organizations, all cultures, all s-loops face -- how to achieve dynamic stability: to be resistant to type-1 errors of being too gullible and believing flashes in the pan, while still being capable of avoiding type-2 errors - of being so stubbornly fixed on a particular data value, or mental model, or paradigm, or goal-set, or identity that no feedback can be accepted at all, and there is no reasonable way to get updates up to the top where they would do any good.
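Here is a toy way to see that tradeoff in a few lines of Python - a hypothetical "belief update" with a single gain knob, purely for illustration, not anybody's actual model:

    # Toy model: how strongly a system updates its "belief" from each new piece of feedback.
    # gain near 1.0 -> gullible (type-1 error): chases every flash in the pan.
    # gain near 0.0 -> stubborn (type-2 error): never really accepts feedback at all.
    def update_belief(belief, observation, gain):
        return belief + gain * (observation - belief)

    belief = 10.0
    for observation in [10, 9, 11, 10, 50, 10, 9]:   # the 50 is a flash in the pan
        belief = update_belief(belief, observation, gain=0.3)
        print(round(belief, 1))

Every value of the gain is a compromise; the trick, for an organization as much as for a thermostat, is picking one that lets real news in without being jerked around by noise.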

This is perhaps the single largest core cybernetic challenge for a survival-enhancing model.


Wade

Saturday, June 16, 2007

Being a robot - 101: The cybernetic loop

I realized that I was just assuming that everyone knew how robots think.
Or for that matter, how babies think when they have to grab something.

We usually think of actions as big chunks, such as "Catch the ball."

Robots have to operate on a much more detailed, step by step level, with everything spelled out for them. Nothing is certain, so everything is just a process of getting a little closer and seeing if anything broke yet. And repeat.

They do this by following a very simple loop, over and over again. Spot where the ball is. Push your hand towards it a little bit. Remember that your hand doesn't always end up where you were trying to push it. Figure out which way the ball is NOW from your hand. Push your hand that way one notch. Figure out again which way the ball is now. Push your hand. Etc.

In a diagram, it would be a circle: look, plan, act, and around again.
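Here is the same loop as a minimal Python sketch. The world, the sensor routines, and the "rusty arm" are all made up for illustration - this is the shape of the loop, not anybody's actual robot code:

    # One cycle: look, plan one tiny step, act, and repeat -- never assume the push worked perfectly.
    def reach_for_ball(see_ball, see_hand, push_hand, step=0.1, close_enough=0.05):
        while True:
            ball = see_ball()                  # look: where is the ball NOW?
            hand = see_hand()                  # look: where did my hand actually end up?
            if abs(ball - hand) < close_enough:
                return                         # done: the hand is on the ball
            direction = 1 if ball > hand else -1
            push_hand(direction * step)        # act: nudge the hand one notch that way

    # A tiny pretend world to run it against:
    world = {"ball": 1.0, "hand": 0.0}
    reach_for_ball(
        see_ball=lambda: world["ball"],
        see_hand=lambda: world["hand"],
        # rusty arm: only 80% of each push actually lands
        push_hand=lambda amount: world.update(hand=world["hand"] + 0.8 * amount),
    )
    print(world)   # the hand ends up next to the ball anyway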
Congratulations! If you understand that loop, you are much closer to understanding how anything works. Actually, I think you're one huge step closer to understanding how almost everything works.

There is a cycle of action, looking, planning, action, looking, planning, etc. Over and over.

The "planning" tends to be very short-range, uncomplicated planning - but what it lacks in complexity, it makes up for with speed and persistence and never getting bored.

So here's a very powerful fact about life. Not only does "a journey of 1000 leagues start with one step", but sometimes the ONLY way to plan that journey is one step at a time.

In fact, a series of small steps is a thousand times more capable than one big step, regardless of how clever you are, and regardless of how well "planned" that one step is. It took computer scientists almost 50 years to figure out that many small computers are actually much better than one large computer for getting work done. It took "artificial intelligence" workers about 30 years to figure out that many small, dumb rules add up to a better way to work than one huge, complicated rule - and they are easier to write and easier to fix too.

Why is this? Imagine that you are on one side of a small woods and you want to get to the other side.
It is very likely that there is no direction you can pick to walk in a straight line that won't bump into a tree.
But, if each step can be a slightly different direction, there are thousands of paths you can use to walk through the same forest without running into a tree.

What's the moral? It seems so "obvious" now, but it baffled scientists for 50 years -- a "curved" path is more flexible than a "straight" one. You can get places with a stupid little loop as guidance that no amount of clever planning can get you if you have to move in one step in one straight line.
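A toy sketch of that moral, with made-up coordinates: a walker that checks its surroundings before every small step can thread between trees that would block any single straight-line plan.

    # Trees sit between us (x = 0) and the far side of the woods (x = 10).
    trees = {(3, 0), (6, 0), (6, 1)}

    def one_step(x, y):
        # Prefer going straight ahead; if a tree is in the way, sidestep a little.
        for dx, dy in [(1, 0), (1, 1), (1, -1), (0, 1), (0, -1)]:
            if (x + dx, y + dy) not in trees:
                return x + dx, y + dy
        return x, y   # boxed in: stay put this cycle

    pos = (0, 0)
    path = [pos]
    while pos[0] < 10:
        pos = one_step(*pos)
        path.append(pos)
    print(path)   # a curved path around the trees, found one step at a time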

This kind of cycle with many tiny steps and a very short pause to think between each step is called a "cybernetic loop". It looks deceptively simple, while it is amazingly powerful.

It can keep on working if the wind is blowing, without having to be reprogrammed. It can keep on working if the ball is rolling on a bumpy hillside. It can keep on working if your robot arm is rusty and doesn't always move as far as it used to when you push it, and sometimes it sticks entirely. This deceptive little loop is all the computer programming required, essentially.

Now, it will work a little better if the robot has some learning capacity and has done this kind of reaching thing before. The robot may learn that it should reach for where the ball will be, not where it is now.

You learned this so long ago you have forgotten that you learned it. Imagine a baseball game where the batter hits a high, fast ball and the fielder runs towards home plate, instead of towards where the ball looks like it will come down, because home plate is "where the ball is now."
So, yes, taking the speed of the ball into account does help. But that's a minor change to the program. The same loop works, except the "planning" step is a little bit longer.
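In code, the "minor change" might look like this - the loop stays exactly the same, and only the little planning step changes (the names and numbers are mine, for illustration):

    # Plan: aim for where the ball WILL be, using how far it moved since the last look.
    def plan_target(ball_now, ball_before, lookahead_steps=3):
        velocity = ball_now - ball_before
        return ball_now + velocity * lookahead_steps

    # A ball seen at 4.0 last cycle and 4.5 now is heading for roughly 6.0
    print(plan_target(4.5, 4.0))   # -> 6.0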

So, this is profound wisdom I'm giving you here. It took all of mankind 50 years to figure this out, and some haven't got the news yet. You get it for free, right here, right now.

So, let me run it by you one more time. Here's the same moral, or same story, in slightly different words:

A plan of action that involves a repeated cycle of very small steps, with some looking and thinking between steps, is much more flexible, and much more "powerful", than trying to "solve" any problem in one huge step.

Furthermore, if the world is complicated, and tends to have hills and bumps and wind gusts and rusty arms, you can be guaranteed that no "single-step" plan will ever succeed. In that case, ONLY a multi-step approach will get you where you want to go. If your job involves "going through the woods" and around trees that you don't even know about yet, it is much easier to plan to go around trees than to try to "collect data" on the location of every tree, put it into some huge list or database, print out a map, and find "a straight path" through the forest.

This doesn't say "don't bother planning." It does say, "don't waste your time trying to find a linear solution to a curved path." There are millions of curved paths that can work just fine, in cases, like the woods, where there is no straight path possible.

And, one more time through it, from the Institute of Medicine's perspective on dealing with small teams (called "microsystems"). If you are dealing with a "complex, adaptive system" (like a hospital), it is way more powerful to just rig up the team with eyes and a feedback loop than it is to try to have hospital management "plan" how to improve things. Ditto for "The Toyota Way", or the power of "continuous improvement", or what Deming taught, or a "plan-do-check-act (PDCA) cycle".

Empowering your front-line employees by giving them "eyes" and a little room to maneuver on their own to get around "trees" is a very powerful strategy that works in practice.

It is based on the most powerful "algorithm" we know of today - the "cybernetic loop."

Oh, yes, one more tiny thing. Since this is such a powerful "algorithm" or "paradigm" or way of doing things, much of Nature - and your own body - already knew about it and uses it.

Public Health is sort of vaguely discovering that the "action" step always needs to be followed with a "reflection" or "assessment" step, but hasn't yet caught on to the fact that it is reinventing the wheel, or more precisely the cybernetic loop, yet one more time. It hasn't figured out that many smaller steps add up to a more powerful path-generator than one large step.

And, sigh, enterprise budget processes don't reflect this wisdom. For years I fought with the fact that Universities tend to have "annual budget cycles", and enterprise computing is seen as coming in only two flavors: "maintenance" and "huge projects". Maintenance money can only be spent keeping things the same. Huge Project money ("capital budgets") can only be used to take, well, huge steps in a big straight line, and the big straight line, or "project plan" has to be computed up front and committed to before starting.

Well, duh, no wonder that doesn't work. That CANNOT BE MADE TO WORK. There are too many unknowns and unknowables, too many rusty arms, too many trees.

But every time it fails, the "solution" is to plan even LARGER steps next time, with a much BIGGER database that lists every single tree and bush and pothole. THEN, oh boy, you betcha we'll succeed.

Nope. That's a bad algorithm, a bad paradigm. The cybernetic loop model tells us the answer is way back at the other end: continuous, incremental, small improvement steps. Steps driven by local "feedback" that doesn't even involve upper management.

You can get to places you need to go with a million simultaneous tiny, sensible steps that people can understand that you cannot get to with one huge project, regardless how many billions you spend on "planning" it. Our whole accounting system, meant to help us spend money wisely, is causing us to spend it foolishly.

As the IOM report realizes: "We don't need a billion-dollar project -- we need a billion one-dollar projects" (paraphrased from "Crossing the Quality Chasm"). This isn't "sour grapes" or "some dumb idea" -- this is the most profound wisdom humanity has come up with yet.

It's kind of the Chinese approach. If every person picks up one piece of trash a day, it's way more successful than if every person sends $1000 per year into a central location where we build the Institute of Trash Pickup and study the trash-pickup problem and produce endless reports and finally some huge trash collection system that doesn't really work but is really expensive to maintain when they're not on strike (thank you, John Gall, for that insight.)

Ditto for installation of some kind of automated physician order entry system, or any other massive cultural change in the way things are done. It may seem "hard" to figure out what huge new system, in one step, will get us from point A to point B. Hmmm. Maybe that's because there aren't any "one-step" solutions to getting through the forest, and we need to reconsider our approach. Maybe a million tiny adjustments will solve two problems at once: the "What do we do?" problem, and the ever-popular "How do we implement it?" problem.

Ten thousand tiny search engines (people), each looking for one tiny step that is possible, totally understood, and would help "a little bit", actually constitute a "massively parallel supercomputer" that can outstrip almost any other way of "solving" BOTH of those problems simultaneously. That's really cool, because it turns out not to matter how great a solution is on paper or at some other site, if there's no way to get it implemented here without spilling the coffee and crashing the bus. That's the lesson Toyota learned. Forget central planning, which the Soviet Union demonstrated doesn't work. Empower the troops to use their eyes and brains and good judgement and make a million adjustments of 0.001 percent size.

It's an incredibly powerful algorithm. It doesn't require brilliant central planning officers. But it does require believing that the ground troops have enough brains to carry their coffee across the office without spilling it, even if they just waxed the floor. Turns out, according to Toyota, that's probably true.

Oh, yes, I almost forgot. It would seem to make sense that, if this cybernetic doodad is so powerful, it is already in operation in billions of places around us, in society and in biology. That would argue that it might be worthwhile to have cybernetic doodad detectors, and cybernetic doodad statistical tools, available to spot and describe and tweak such thingies.

Most of the postings to this weblog over the last 6 months have tried to make that argument, in more complex ways, and maybe that's my problem.

The American Indians knew this - that the Great Spirit worked in circles, not lines. Taoism knows about circles and cycles. "Systems thinking" involves accepting that there are important places where feedback loops just might possibly be involved.

We're so close now. Bring it home, baby!

(Posted in memory of Don Herbert, "Mr. Wizard", who died last week, and taught millions of kids, including me, basic science-made-easy on his TV show.)