Monday, December 17, 2007

New life forms from Synthetic DNA - Washington Post


The Washington Post today deals with "Synthetic DNA on the brink of Creating New Life Forms." Talk about children playing with matches... Rick Weiss begins "It has been 50 years since scientists first created DNA in a test tube..." I'd add - it has also been 50 years since Jay Forrester's classic piece on "unintended consequences."

Here was my reply:

wade2 wrote:
Bio-error indeed. Maybe error-gance is the bigger threat, and very real. Our social approach to low-odds of very-high-risk accidents, as Carl Sagan pointed out re return of samples from Mars, is completely overwhelmed by our normal intuition. At Los Alamos, the first atomic bomb was tested when only a minority of the scientists on the project (something like 6 of 14) thought it would detonate the earth's crust and explode the entire planet. No one was sure, so they tested it. Hmm.

Good books like "Lethal Arrogance" by Dumas and "Normal Accidents" by Perrow detail hundreds of examples of our tendency to run it till it breaks, and then, only then, stop to think.
The tools to even begin to think about the way coupled feedback loops get their job done, such as System Dynamics, have languished for 50 years. MIT's John Sterman, in "Business Dynamics: Systems Thinking and Modeling for a Complex World," details the lack of correct intuition, even in the MIT community, brighter than most. PhDs don't generally help, and most of us have less to work with.
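Sterman's point about coupled feedback loops defeating intuition can be seen in a few lines of simulation. Below is a minimal sketch (my own toy model with made-up parameters, not from the book): a manager orders stock to close a gap, but deliveries arrive only after a delay, and that delay alone makes the stock overshoot its target and oscillate, which is exactly the kind of behavior our intuition misses.

```python
# Toy stock-and-flow model: ordering with a delivery delay.
# All parameter values are illustrative, not calibrated.

def simulate(target=100.0, delay=4.0, adjust_time=2.0, steps=60, dt=1.0):
    stock, pipeline = 50.0, 0.0
    history = []
    for _ in range(steps):
        # Order enough to close the gap over adjust_time (never negative).
        orders = max(0.0, (target - stock) / adjust_time)
        # Goods in the pipeline arrive spread over the delivery delay.
        deliveries = pipeline / delay
        pipeline += (orders - deliveries) * dt
        # Stock rises with deliveries and drains at 5% usage per step.
        stock += deliveries * dt - (0.05 * stock) * dt
        history.append(stock)
    return history

h = simulate()
print(round(max(h), 1))  # peaks well above the target of 100 before settling
```

Shorten the delay to 1 and the overshoot nearly disappears; the structure of the loop, not anyone's bad intentions, produces the instability.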

So, at best we can model and simulate, as the Santa Fe Institute has done for the last few decades with "artificial life" - virtual life and virtual DNA, genetic algorithms breeding and evolving, to see what happens. http://www.santafe.edu/ describes the work of many Nobel Prize winners.
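To make "genetic algorithms breeding and evolving" concrete, here is a toy sketch (my own illustration, not the Santa Fe Institute's code): bitstring "genomes" are selected for fitness, crossed over, and mutated, and fitness climbs over the generations with no designer steering it.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def fitness(genome):
    return sum(genome)  # "OneMax": count the 1-bits (max is the length)

def evolve(pop_size=40, length=32, generations=60, mutation=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            # Occasionally flip a bit - the mutation step.
            child = [g ^ 1 if random.random() < mutation else g
                     for g in child]
            children.append(child)
        pop = children
    return max(fitness(g) for g in pop)

print(evolve())  # climbs well above the random-start average of ~16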

In short: (1) the little buggers are far smarter than we are, and (2) parasitism evolves almost instantly in every case. The lesson of the movie Jurassic Park is a mild taste of the tenet "Life will find a way."

Even if the rest of our human affairs were measured and mature and stable, this would still be a risky business. Having unstable tyrants convinced they must "master" this technology to attack others, or to defend against attack (the exact same research), leads to the Russian model of stockpiling hundreds of tons of anthrax or worse, under the delusion that bio-warfare would be controllable or could be "won".

There are good odds the viruses and fungi and insects will win, not so good for humans.

Life is built from interactions, with emergent properties at multiple levels, and we tend to think of "machines" at one level with only one function. But genes don't work like machines; they work like cooperative swarms.

Bio-warfare research has a "life of its own" that should already alert us that creating things that "might as well be alive" is far easier than we think. Since we cannot stop it, we are committed to trying to get ahead of it and get the reins back, which means we should pour billions into understanding the world the Santa Fe Institute has pioneered - massive interactions, how they go right, and how they go wrong.

It becomes clear very quickly that, with complex systems, by the time you realize you "shouldn't have done that," it's too late. Experience is something that arrives just after we need it.
For very high-stakes mistakes, that's no help at all. If we keep gambling with the whole planet on the table, sooner or later we'll lose a turn.

One is all it takes.
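The gambling arithmetic above is worth spelling out. With some small probability p of catastrophe per gamble, the chance of surviving n gambles is (1 - p)^n, which sinks toward zero no matter how small p is. The numbers below are illustrative assumptions, not estimates:

```python
# Survival odds under repeated small risks: (1 - p) ** n.

def survival(p, n):
    """Probability of surviving n independent gambles, each with
    probability p of catastrophe."""
    return (1.0 - p) ** n

# A 1-in-1000 planetary risk, taken once a year (made-up numbers):
for years in (100, 1000, 5000):
    print(years, round(survival(0.001, years), 3))
# 100 years  -> ~0.905
# 1000 years -> ~0.368
# 5000 years -> ~0.007
```

The per-turn odds barely matter; what matters is that the game never stops.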

12/17/2007 6:07:22 AM
=========

Actually, all the research on high-reliability systems, such as nuclear power plant control rooms, shows that the maturity of the social system is what makes or breaks the technology-based system. Psychologically safe environments are needed for people to raise their hand, without fear of reprisal, and question what the heck is going on.

What we have instead is a whole culture accustomed to using fear, in the workplace and in politics, to "get things done," as described by Harvard professor Amy Edmondson.

The Shuttle Challenger (picture at left) exploded because of an "O-ring" problem that the project engineers knew about; they had in fact argued, the night before, that it was too cold to launch safely. They were overruled under workplace pressure to "deliver" so the President could talk about an orbiting teacher in the State of the Union address. She did, in fact, leave a message for us (picture at left) about what happens when we don't listen -- but, I guess we're still not learning that lesson.

Further reading

The classic paper in this field is Jay Forrester's congressional testimony:
"The Counterintuitive Behavior of Social Systems",
http://web.mit.edu/sdg/www/D-4468-2.Counterintuitive.pdf

Quoting the abstract:

Society becomes frustrated as repeated attacks on deficiencies in social systems lead only to worse symptoms. Legislation is debated and passed with great hope, but many programs prove to be ineffective. Results are often far short of expectations. Because dynamic behavior of social systems is not understood, government programs often cause exactly the reverse of desired results.

Another quote from the Washington Post article is this:

"We're heading into an era where people will be writing DNA programs like the early days of computer programming, but who will own these programs?" asked Drew Endy, a scientist at the Massachusetts Institute of Technology.

How true that is. I've been programming computers for over 40 years, and I agree: the DNA programs people write will be exactly like the "single-threaded" programs that mess up our airline reservations and everything else. In fact, a look inside some place like a hospital reveals multiple legacy computer systems cobbled together in the absence of any fundamental theory of how many interacting things should be structured in order to be reliable. Thirty years of computer-science research on "distributed operating systems" and building in reliability has had close to zero impact on the quick-and-dirty, cut-corners-now-and-debug-it-later model that vendors find locally profitable, but that always breaks down, producing, ta da!, more profitable rework. As a business model it's very popular; as a way of getting reliability, we have all seen the results. This is the culture we expect to "program" our genes? I'm not rushing to sign up.

The article quotes someone on the "unprecedented degree of control of creation" that the DNA technology gives us. Right. This is about the degree of "control" that a Labrador Retriever on your lap has in a car at rush hour -- yes, it can turn the steering wheel, but I wouldn't use the term "control" for what happens next. If you think our economy, business development, and health care system are "under control," then maybe you would think genes could be "controlled" the same way - and they can, with about the same results.

Sadly, control requires maturity and depth of understanding, instead of simply strong muscles and a short attention span. I wish it were our strong suit as a nation, but see little evidence that it is, or even that it is valued or desired as a long-term goal.

We have instead young children playing with the cool gun they found in daddy's nightstand.

Oops.

======= Some after-thoughts:

Unlike the video games and computers this generation grew up with, life does not always have an "undo" button.

The core task of a civilization is to capture the wisdom we finally learn too late, and get it into a form that modifies the behavior of the next generation so those same lessons don't have to be learned all over again.

The hardest part of that task is that the next generation typically doesn't want to take advice from old people about situations the village elders seem way too concerned about - like, not going into debt over your head, you know, crazy stuff like that.

George Santayana said "Those who cannot remember the past are condemned to repeat it." I'd modify that slightly and add "Those who cannot learn from near-misses will someday not miss."

Each time we don't learn this as a society, the costs go up. The biggest unknown in the Drake Equation, about the odds of there being other intelligent life in the galaxy that we could detect by radio, is how long a civilization survives after it has gotten to the point where it has that much technology. The absence so far of any detected signals from the hundreds of billions of worlds "out there" suggests this is a pretty small number of years -- maybe under 200.
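For reference, the Drake Equation is just a chain of multiplied factors, with the civilization lifetime L as the swing factor described above. All the parameter values below are illustrative placeholders, not measurements:

```python
# Drake Equation: N = R* . fp . ne . fl . fi . fc . L
# R* = star formation rate, fp = fraction with planets, ne = habitable
# planets per system, fl/fi/fc = fractions developing life, intelligence,
# and detectable technology, L = years a civilization stays detectable.
# Every default below is a made-up placeholder for illustration.

def drake(R=10, fp=0.5, ne=2, fl=0.33, fi=0.01, fc=0.01, L=200):
    return R * fp * ne * fl * fi * fc * L

print(drake(L=200))        # short-lived civilizations: N well under 1
print(drake(L=1_000_000))  # long-lived ones: N in the hundreds
```

With these placeholder factors, everything hinges on L: at L = 200 years we would expect essentially no detectable neighbors, which is consistent with the silence the post describes.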

At the rate we're going, we're heading towards adding one more point to that data set.
Learning how to learn from our mistakes and our own past seems to be as important a problem as global warming, but actually more urgent, because time is running out a little faster on the 400,000 ways, besides global warming, that we can end human life on the planet.

Humans are remarkably inventive, and if every weapon and sharp object on the planet vanished, they'd find ways to attack each other with stones. Instead of tackling each symptom like global warming or genocide or terrorism, it would seem wiser to trace further upstream, find the root-cause problem for why people are driven to fight, and fix that.

======================================

More further reading:

On High Reliability Organizations, which are sobering: they try really, really hard not to have accidents, and still fail from time to time:

http://www.highreliability.org/

I'm sure the US military tries very hard to keep nuclear weapons under control. Even that intense level of attention isn't enough to do the job 100% of the time, illustrating John Gall's law that "complex systems simply find complex ways of failing."

"Honey, I lost the nuclear weapons"

The US Institute of Medicine on how much the social relations of front-line teams matter when your job is to get reliability in hospital care:

Crossing the Quality Chasm and other links

=========================
Photo credits :
Oops (car) by
estherase
US Space Shuttle by
Andrew Coulter Enright
