
Monday, December 17, 2007

New life forms from Synthetic DNA - Washington Post


The Washington Post today deals with "Synthetic DNA on the Brink of Creating New Life Forms." Talk about children playing with matches... Rick Weiss begins "It has been 50 years since scientists first created DNA in a test tube..." I'd add that it has also been about 50 years since Jay Forrester's classic work on "unintended consequences."

Here was my reply:

wade2 wrote:
Bio-error indeed. Maybe error-gance is the bigger threat, and very real. Our social approach to low-odds, very-high-consequence accidents is, as Carl Sagan pointed out regarding the return of samples from Mars, completely overwhelmed by our normal intuition. At Los Alamos, the first atomic bomb was tested even though a minority of the scientists on the project (something like 6 of 14) thought it might detonate the earth's crust and destroy the entire planet. No one was sure, so they tested it. Hmm.

Good books like "Lethal Arrogance" by Dumas and "Normal Accidents" by Perrow detail hundreds of examples of our tendency to run it till it breaks, and then, only then, stop to think.
The tools to even begin to think about the way coupled feedback loops get their job done, such as System Dynamics, have languished for 50 years. MIT's John Sterman, in "Business Dynamics: Systems Thinking and Modeling for a Complex World", details the lack of correct intuition, even in the MIT community, brighter than most. PhDs don't generally help, and most of us have less to work with.
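
For the curious, here is a minimal sketch, in Python, of the kind of coupled feedback loop System Dynamics studies. Everything in it is invented for illustration (it is not Sterman's model): a stock is managed through a delayed perception of its level, and the delay alone turns a sensible corrective policy into overshoot and ringing.

```python
def simulate(steps=60, target=100.0, gain=0.3, delay=4):
    """Steer a stock toward a target, but perceive its level with a lag."""
    stock = 50.0
    history = [stock] * delay                 # what we "see" lags what is
    levels = []
    for _ in range(steps):
        perceived = history[-delay]           # act on old information
        stock += gain * (target - perceived)  # corrective inflow
        history.append(stock)
        levels.append(stock)
    return levels

levels = simulate()
print("peak %.1f (target 100.0); final %.1f" % (max(levels), levels[-1]))
# The stock overshoots the target and oscillates before settling; raise
# the gain or the delay and it never settles at all. Pointwise intuition
# rarely predicts this.
```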

So, at best we can model and simulate, which is what the Santa Fe Institute has done for the last few decades with "artificial life": virtual life and virtual DNA, genetic algorithms breeding and evolving, to see what happens. http://www.santafe.edu/ describes the work of many Nobel Prize winners.
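
Lest "genetic algorithms breeding and evolving" sound mystical, here is a toy version in Python (my own sketch, not the Santa Fe Institute's code): a population of bitstrings evolves toward a fitness goal with no central plan, just selection, crossover, and mutation.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 50, 60

def fitness(genome):                  # toy goal: maximize bits that are "on"
    return sum(genome)

def crossover(a, b):                  # single-point crossover of two parents
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.01):        # rare random bit flips
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP_SIZE // 2]     # the fitter half breeds
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

print("best fitness:", fitness(max(pop, key=fitness)), "out of", GENOME_LEN)
```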

In short (1) the little buggers are far smarter than we are and (2) parasitism evolves almost instantly in every case. The lesson of the movie Jurassic Park is a mild taste of the tenet "Life will find a way."

If the rest of our human affairs were measured and mature and stable, this would still be a risky business. With unstable tyrants convinced they must "master" this technology and use it to attack others, or to defend against attack (the exact same research), we get instead the Russian model of stockpiling hundreds of tons of anthrax or worse, under the delusion that bio-warfare would be controllable or could be "won".

There are good odds the viruses and fungi and insects will win, not so good for humans.

Life is built from interactions, with emergent properties on multiple levels, and we tend to think of "machines" at one level with only one function. But genes don't work like machines; they work like cooperative swarms.

Bio-warfare research has a "life of its own" that should already put us on alert: it is far easier than we think to create things that "might as well be alive." Since we cannot stop it, we are committed to trying to get ahead of it and get the reins back, which means we should pour billions into understanding the world the Santa Fe Institute has pioneered -- massive interactions, how they go good, and how they go bad.

It becomes clear very quickly that, with complex systems, by the time you realize you "shouldn't have done that" it's too late. Experience is something that comes just after we need it. For very high-stakes mistakes, that's too late. If we keep gambling with the whole planet on the table, sooner or later we'll lose one turn.

One is all it takes.

12/17/2007 6:07:22 AM
=========

Actually, all the research on high-reliability systems like nuclear power plant control rooms shows that the maturity of the social system is what makes or breaks the technology-based system. Psychologically safe environments are needed for people to raise their hand, without fear of reprisal, and question what the heck is going on.

What we have instead is a whole culture accustomed to using fear as the workplace and political context for "getting things done", as described by Harvard professor Amy Edmondson.

The Shuttle Challenger (picture at left) exploded because of an "o-ring" problem that all the project engineers knew about; they had in fact gone in that day to tell the boss to tell the White House that it was too cold to launch safely. They all lost their nerve under workplace pressure to "deliver" so the President could talk to an orbiting teacher during the State of the Union address. She did, in fact, leave a message for us (picture at left) about what happens when we don't listen -- but, I guess, we're still not learning that lesson.

Further reading

The classic paper in this field is Jay Forrester's congressional testimony:
"The Counterintutive Behavior of Social Systems",
https://mail.jhsph.edu/exchweb/bin/redir.asp?URL=http://web.mit.edu/sdg/www/D-4468-2.Counterintuitive.pdf

Quoting the abstract:

Society becomes frustrated as repeated attacks on deficiencies in social systems lead only to worse symptoms. Legislation is debated and passed with great hope, but many programs prove to be ineffective. Results are often far short of expectations. Because dynamic behavior of social systems is not understood, government programs often cause exactly the reverse of desired results.

Another quote from the Washington Post article is this:

"We're heading into an era where people will be writing DNA programs like the early days of computer programming, but who will own these programs?" asked Drew Endy, a scientist at the Massachusetts Institute of Technology.

How true that is. I've been programming computers for over 40 years, and I agree: the programs they write will be exactly like the "single-threaded" programs that mess up our airline reservations and everything else. In fact, a look inside some place like a hospital reveals the workings of multiple legacy computer systems cobbled together in the absence of any fundamental theory at all of how many interacting things should be structured in order to be reliable.

Thirty years of research in computer science on "distributed operating systems" and how to build reliability in has had close to zero impact on the quick-and-dirty, cut-corners-now-and-we'll-debug-it-later model that vendors find locally profitable, but that always breaks down, producing, ta da!, more profitable rework. As a business model it's very popular; as a way of getting reliability, we all have seen the results. This is the culture we expect to "program" our genes? I'm not rushing to sign up.

The article quotes someone on the "unprecedented degree of control of creation" that the DNA technology gives us. Right. This is about the degree of "control" that a Labrador Retriever on your lap in rush-hour traffic has -- yes, it can turn the steering wheel, but I wouldn't use the term "control" for what happens next. If you think our economy and business development and health care system are "under control", then maybe you would think genes could be "controlled" the same way -- and they can, with about the same results.

Sadly, control requires maturity and depth of understanding, instead of simply strong muscles and a short attention span. I wish it were our strong suit as a nation, but see little evidence that it is, or even that it is valued or desired as a long-term goal.

We have instead young children playing with the cool gun they found in daddy's nightstand.

Oops.

======= Some after-thoughts:

Unlike the video games and computers this generation grew up with, life does not always have an "undo" button.

The core task of a civilization is to capture the wisdom we finally learn too late, and get it into a form that modifies the behavior of the next generation so those same lessons don't have to be learned all over again.

The hardest part of that task is that the next generation typically doesn't want to take advice from old people about situations the village elders seem way too concerned about - like, not going into debt over your head, you know, crazy stuff like that.

George Santayana said "Those who cannot learn from history are doomed to repeat it." I'd modify that slightly and add "Those who cannot learn from near-misses will someday not miss."

Each time we don't learn this, as a society, the costs go up. The biggest unknown in "the Drake Equation" about odds of there being other intelligent life in the galaxy that we could detect with radio is how long a civilization survives after it has gotten to the point where it has that much technology. The complete absence of any detectable signals from 100 trillion worlds "out there" suggests this is a pretty small number of years -- maybe under 200 years.
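
To see how completely that question is dominated by the lifetime term, here is the standard form of the Drake equation in Python, with illustrative placeholder values (mine, not measured numbers) for every factor except L:

```python
def drake(L, R_star=1.0, f_p=0.5, n_e=2.0, f_l=0.5, f_i=0.1, f_c=0.1):
    """N = R* * fp * ne * fl * fi * fc * L, the number of detectable
    civilizations; every factor value here is a placeholder assumption."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

for L in (200, 10_000, 1_000_000):
    print(f"L = {L:>9} years  ->  N = {drake(L):,.0f}")
# With these placeholders, L = 200 years gives N of about 1: an
# essentially silent galaxy, which is the point above.
```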

At the rate we're going, we're heading towards adding one more point to that data set. Learning how to learn from our mistakes and our own past seems as important a problem as global warming, but actually more urgent, because time is running out a little faster on the 400,000 other ways, besides global warming, that we can end human life on the planet.

Humans are remarkably inventive, and if every weapon and sharp object on the planet vanished, they'd find ways to attack each other with stones. Instead of tackling each symptom like global warming or genocide or terrorism, it would seem wiser to track further upstream and find the root-cause problem for why people are driven to fight, and fix that.

======================================

More further reading:

On High Reliability organizations, which are sobering: they try really, really hard not to have accidents, and still don't succeed from time to time:

http://www.highreliability.org/

I'm sure the US military tries very hard to keep nuclear weapons under control. Even that intense level of attention isn't enough to do the job 100% of the time, illustrating John Gall's law that "complex systems simply find complex ways of failing."

"Honey, I lost the nuclear weapons"

The US Institute of Medicine on how much the social relations of the front-line teams matter when your job is to get reliability in hospital care:

Crossing the Quality Chasm and other links

=========================
Photo credits: Oops (car) by estherase; US Space Shuttle by Andrew Coulter Enright.

Thursday, May 31, 2007

Two more arguments for unity

I discussed in an earlier post some arguments for why it may be a bad idea to put off efforts to deal with large-scope problems "until we have all the smaller-scope ones completed."

This, again, flies in the face of exactly the opposite trend among many business leaders, who have jettisoned concern about long-range planning, or else reduced it to a horizon of 3 months and call that "long-range". And it flies in the face of advice from many PhD advisors, who try to train their students to focus on smaller, shorter-term, more "realistic" problems.

The implicit sense is that the total energy and effort required to complete a task gets larger as the scale of the activity gets broader. In mathematical terms, it is assumed that effort to do a credible and useful job is a "monotonically increasing function of scope."

I completely disagree with that, and feel that by the same logic, no one should study astronomy, or even make a map of the stars, until we understand atoms perfectly. Or, perhaps, no one should study sociology and government until we understand everything there is to know about individual people.

It seems obvious, on reflection, that we can learn a lot about people by observing the emergent phenomena around us. We can learn a lot about air by observing the weather, clouds, thunderstorms, and tornadoes -- things we would have a hard time "seeing" in a beaker of air, no matter how well and how long we studied it.

Also, we would have to explain why it is that it is far easier to describe the equations governing water in pipes in our plumbing than it is to describe molecular quantum mechanics. That, alone, seems to be a counter-example that disproves the hypothesis that larger things must be harder to get a useful handle on.

This is particularly true when large scale phenomena are actually causal, by all our standard definitions of that word, and the small scale phenomena of which the large is composed are not. Pretty much any electronic device is an example of that, where we rely on the statistical behavior of "current", without actually caring whether particular electrons move or kick back and chill, so long as most of them do what we expected every time.

Newtonian and Laplacian bases of description.

These are big words but relatively simple concepts. Newton described things in terms of points, so we might describe a ball's motion by listing where it was at time "t" for t=1, t=2, etc., to whatever precision we desire. To describe anything "large" in size therefore takes a "large" number of such measurements, and is correspondingly expensive and difficult.

Note importantly that the Newtonian method is always, literally, "full of holes" at the times we did not specify. It is an incomplete description, but works for many purposes, especially if we "interpolate" and "extrapolate" to "fill in the missing pieces" and "connect the dots."

That's not the only way we can describe the position of a ball. An alternative method, equally capable of being "complete" to whatever accuracy we desire, is to start by stating where the ball's average position will be over the time period of interest. That's the first data point.

The second data point could be the "standard deviation" or other measure of the variability of the ball's position over the time period of interest. (Or, we could pick as second and third data points the maximum and minimum positions of the ball over the time period of interest.)

Now here's an interesting thing. The way Newton described things, with three data points of the temperature today, we'd have the temperature at midnight, at 12:01 AM, and at 12:02 AM, and not know very much that people care about, regardless how precisely those three measurements were taken. On the other hand, if I tell you the average temperature today will be 83, with a low of 55 and a high of 92 F, I've also told you exactly 3 things, but they contain global information, not local information, and are way more helpful to you in selecting clothes to wear, etc.

This is another counter-example, where a global measure is actually far easier and more useful than an arbitrarily precise local measure.
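
Here is that temperature example as a Python sketch (synthetic data, purely for illustration): the same day described two ways, with exactly three numbers each.

```python
import math, random

random.seed(1)
# A plausible daily temperature curve in degrees F, one reading per hour.
temps = [74 + 18 * math.sin(math.pi * (h - 6) / 12) + random.uniform(-2, 2)
         for h in range(24)]

# "Newtonian" description: three pointwise samples one minute apart,
# which at this spacing are all essentially the midnight reading.
local = [temps[0]] * 3

# Global description: three summary statistics over the whole day.
mean, lo, hi = sum(temps) / 24, min(temps), max(temps)

print("three local numbers :", [round(t) for t in local])
print("three global numbers: mean %.0f, low %.0f, high %.0f" % (mean, lo, hi))
```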

The specification of something, starting with the "low frequency, large scale" average color, then adding successively higher frequency variations around that base color, is basically how JPEG images are encoded. Again, if a progressive JPEG is downloaded, you may be able to see in the first few seconds that it's not what you want, as it emerges from the mist, and move on to something else. Meanwhile, viewers of TIFF images are waiting for the top row of pixels to arrive, then the second entire row, then the third entire row, etc. You could need to download most of the picture to see what it is and whether you want it.

JPEGs can be arbitrarily precise, as precise as TIFF images, but that precision is seldom necessary for what humans do with images.
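
The coarse-to-fine idea can be shown with a toy transmission scheme in Python. It echoes progressive JPEG only in spirit; the real format refines DCT coefficients, not block averages.

```python
def block_average(signal, block):
    """Downsample by averaging fixed-size blocks: a 'low frequency' pass."""
    return [sum(signal[i:i + block]) / block for i in range(0, len(signal), block)]

signal = [3, 3, 4, 4, 9, 9, 8, 8, 1, 1, 2, 2, 7, 7, 6, 6]  # stand-in for one image row

for block in (16, 4, 1):          # coarsest pass first, full detail last
    print("block %2d:" % block, block_average(signal, block))
# The block-16 pass (a single number) already says "medium gray"; each
# later pass refines it. A strictly row-by-row format makes you wait
# for all the detail before you know what you are looking at.
```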

All the above is one set of arguments for why "large scale" properties are no harder than "small scale" ones, and are often easier, and should not be neglected just because they are "large."

And, for phenomena that are "context dependent", as so much is, it may be far more valuable to us to get the first few "moments" of a distribution (the average, the standard deviation, etc.) than to get the first few data points of the time-series. So, it can be far faster for many real decisions we need to make.

And, in physics, there are conserved properties such as "total energy" and "total momentum" that don't care at all how these rearrange themselves at the local internal level, so long as the overall total remains constant as seen from outside.
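
A toy Python illustration of that point: let the parts of a system exchange momentum at random internally, and the total, seen from outside, never changes.

```python
import random

random.seed(42)
momenta = [random.uniform(-5, 5) for _ in range(100)]   # 100 "particles"
total_before = sum(momenta)

for _ in range(10_000):                  # many random internal collisions
    i, j = random.randrange(100), random.randrange(100)
    transfer = random.uniform(-1, 1)     # whatever i loses, j gains
    momenta[i] -= transfer
    momenta[j] += transfer

print("total before: %.6f   total after: %.6f" % (total_before, sum(momenta)))
# Every particle has moved, but the conserved total is unchanged (up to
# floating-point rounding): the outside view is stable while the inside churns.
```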

A completely different case for working from the top-down, instead of the bottom up, is called precisely that - "top down design" and "top down computer programming". A few hundred thousand person years of experience programming have led experts to believe that it is much more effective to describe a problem starting at the top, in very broadest terms with the least depth, and work our way "down" into successively more detail -- than it is to go the other way.

The other way, bottom up, is in fact viewed as the major source of time-consuming "bugs" and conceptual errors that are very hard to resolve and hard to locate. In fact, if an organization ever gets an "Escher waterfall" shape in place, and realizes it is flawed, it might simply choose to live with the pain of the flaw, because of the amount that has been invested in "getting the pieces right" so far -- the pieces that everyone has adapted to and is willing to accept as "the devil we know" rather than "starting over." At that point, as Zorba the Greek might say, we have "the full catastrophe": a flawed design that no one wants to let go of, even though it is demonstrably broken.

With top-down design, the details rest on the larger and larger contexts, not vice versa. This is great, because the things most likely to change are the details, not the largest contexts. If we rested everything on the details, every time a detail changed we'd have to redo the entire program. If we rest on a top-down hierarchy of contexts, usually all but the very last few, the most detailed, remain constant over the life of the program, and the amount of change to code required is minimized. Most of the code, the upper levels, remains stable and validated and doesn't need to be touched. In the "bottom up" world, if you change one detail, you probably need to rewrite everything.
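
A small Python sketch of what that looks like in practice (the function names are invented for illustration): the top level is written first, against broad stubs, and later changes touch only the leaves.

```python
def generate_report(raw):    # top level: stable over the life of the program
    data = load(raw)
    summary = summarize(data)
    return render(summary)

def load(raw):               # detail: could later swap CSV text for a database
    return [float(x) for x in raw.split(",")]

def summarize(data):         # detail: could later swap mean for median
    return {"count": len(data), "mean": sum(data) / len(data)}

def render(summary):         # detail: could later swap plain text for HTML
    return "n=%(count)d mean=%(mean).2f" % summary

print(generate_report("1,2,3,4"))   # -> n=4 mean=2.50
# Changing how load() works never touches generate_report(): the upper
# contexts stay stable, which is the payoff of working top-down.
```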

So, what I'm arguing is that these principles seem to apply as well to descriptions and measurements of our social organizations. Top down metrics may be much easier to do and reveal everything we need to know much faster than trying to get a huge number of detailed data points at the bottom levels.

In my mind, then, the mathematics and science argue strongly for working top down, and getting the large conceptual pieces resolved before worrying about the details, not the other way around. This progression seems, also, to match up with what Fisher and Ury argue in "Getting to Yes": that our problems only seem intractable because we're trying to resolve them at the most detailed level of "positions", when we could and should move up a few levels to "interests", where it is far more likely that we can find common ground.

For these reasons, from the earlier discussion of fraying and gaps in human responsibility for synthesized tasks, and many others, I urge exploration of the "larger issues" at least in parallel with the "smaller" ones.

There are two final reasons I'll add to the mix.

First, although scientists tend to forget it, the entire enterprise of science is a social entity, and, as scientists always seem shocked to rediscover, the enterprise rests on the political and social matrix in which it is embedded.

Put most simply, if the social interests and the scientific interests clash too much, it is the scientists who will be out of jobs, not the society. If the society collapses politically, or has a global thermonuclear war or global biological war, the rest of science becomes moot. It doesn't matter how precise you are when you're dead.

There is, in other words, some timetable, some urgency, to getting sufficient data together to make some very large, very important decisions that will need to be made soon, decisions that will dramatically affect us all. A response to the globally rising epidemic of drug-resistant tuberculosis is one, along with the question of what trade-off should be struck between civil liberties and the right of "the public" to be protected from people who are carrying infectious diseases. Ditto for AIDS. What we should do about the "middle east problem" is another.

We don't have time for Science to analyze molecules sufficiently well to tell us who to vote for in 2008, and that won't happen regardless how long we wait. The data and factors and variables of interest to us don't even exist at the molecular level. The universe is not deterministic upwards, as physics has finally shown us.

So, if we have some hard, global decisions coming up, we cannot wait for a bottom-up assembly of concepts and fragments of knowledge to succeed, because even if it were possible to happen, which it seems not to be, it would take "longer than we have."

We have, maybe, a decade or two to decide our fate, in some rather permanent and irreversible ways.

Given all the above, this argues that at least some effort should be given to looking for a "top down" approach to understanding how things work and what our options are. In that conclusion, I find myself in complete agreement with the teachings of the Baha'i Faith, with which I will close, quoting from http://www.bahai.org.

But First, Unity

Is unity a distant ideal to be achieved only after the other great problems of our time have been resolved?

Bahá’u’lláh says the opposite is the case. The disease of our time is disunity. Only after humanity has overcome it will our social, economic, political, and other problems find solution.

Today, several million people around the world are discovering what He means. We invite you to explore His message with us.

I didn't set out to "prove" that the Baha'is are "right", and that is not why I raise the issue now. I raise it because the group has been focused for 150 years on precisely this core issue of "unity in diversity", the one the rest of academia is finally recognizing, and it has studied firsthand, by direct experience, what it takes to make that work in various parts of this actual planet we live on. That experience is hard-won, and we don't have time to replicate it.

In that regard, it seems "due diligence" to at least read what they have to say.

W.