Wednesday, November 17, 2010

Brief thought on internet security and federal snooping

There is an effort underway, according to the papers, to expand the US government's ability to eavesdrop on personal conversations that take place in social media,  especially things like Skype.

People don't trust this snooping to be well-intentioned or well-managed, and don't like it. Businesses don't trust this snooping to be well-intentioned or well-managed, and they don't like it.

The argument presented by the government is that this snooping can catch bad guys before they do serious damage, and so it is justified, because it protects us from harm.

As has been asked of full-body scans at airports, at what point does the harm done BY the effort to be safe exceed the harm it is intended to prevent?  How many of our values is it justifiable to destroy in order to save the rest of them?

I should point out that there is an extraordinary mathematical equation being used behind the scenes here.  To explain it, let me give an example.   I found out once that a database of names I was looking at had tens of thousands of people in it with a middle name of "Von".    We know that essentially no one in the real world has a middle name of "Von".  We know that name-parsers on data systems are typically baffled by "surnames" which have more than one part separated by spaces, or by punctuation or capitalization, such as "von Schmidt" or "de la Iglesia" or "McMuffin" or "O'Reilly".  There are extremely good odds that any given name in this database that shows a middle name of "von" would, if investigated, prove to be a parsing error of a last name with "von" and a space as part of it.

BUT, when I suggested writing a short program to do a single mass update and move the "von" to being part of the surname, the idea was rejected.  It wasn't rejected because of disagreement that in 99.5% of the cases, this would take a wrong entry and make it correct.  It was rejected because in a handful of cases, it would take a CORRECT entry and make it wrong.  There may in fact be a few people in the world who have a middle name of "Von".  (It would be easy enough for anyone with the US Post Office name and address file to compute the actual percentage of people mid-named "Von" - if someone does that, let me know, and I'll add the comment here.)
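For the curious, here is roughly what that rejected one-time cleanup might have looked like -- a minimal sketch only, assuming a simple list of parsed-name records with hypothetical "first", "middle", and "last" fields, and keeping an audit trail so the handful of genuine "Von" middle names could still be reviewed and reverted by hand:

# Minimal sketch of the proposed one-time cleanup (hypothetical field names).
# Moves a middle name of "Von"/"von" back onto the surname and records every
# change so that rare genuine "Von" middle names can be reviewed afterwards.
def repair_von_records(records):
    """Return (repaired, audit), where audit lists every record changed."""
    repaired, audit = [], []
    for rec in records:
        middle = (rec.get("middle") or "").strip()
        if middle.lower() == "von":
            fixed = dict(rec)
            fixed["middle"] = ""
            fixed["last"] = "von " + rec["last"]
            audit.append((rec, fixed))
            repaired.append(fixed)
        else:
            repaired.append(rec)
    return repaired, audit

# Example: "Karl von Schmidt" mis-parsed as middle="Von", last="Schmidt".
people = [
    {"first": "Karl", "middle": "Von", "last": "Schmidt"},
    {"first": "Mary", "middle": "Ann", "last": "O'Reilly"},
]
fixed, changes = repair_von_records(people)
print(fixed[0])            # {'first': 'Karl', 'middle': '', 'last': 'von Schmidt'}
print(len(changes), "record(s) changed")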

Nevertheless,   the RISK to a mid-level bureaucrat of making something WRONG,  in 0.01% of the cases,   is sufficient to completely outweigh the BENEFITS to the entire system of making the other 99.99% of the cases change from "WRONG" to "CORRECT."    There is a massive asymmetry in the equation.
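To make the asymmetry concrete, here is a back-of-the-envelope version of the two points of view, with made-up weights purely for illustration:

# Illustrative numbers only. Assume the fix corrects 99.99% of the flagged
# records and breaks 0.01% of them.
p_fix, p_break = 0.9999, 0.0001

# The system's arithmetic: a corrected record and a broken record are worth
# roughly the same (1 unit each), so the cleanup is an overwhelming net win.
system_value = p_fix * 1 - p_break * 1           # about +0.9998 per record

# The bureaucrat's private arithmetic: a record he visibly broke can be
# blamed on him (a huge personal cost), while a record quietly corrected
# earns him essentially nothing.
personal_value = p_fix * 0 - p_break * 100_000   # -10.0: "better do nothing"

print(system_value, personal_value)

Under the bureaucrat's private arithmetic, the rational move is always to leave the 99.99% wrong.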

I see the same kind of thinking at work in the decisions to go ahead and try to eavesdrop on every possible personal communication between people and businesses.  Yes, in a handful of cases, some bad people will be detected.  The question is, in all the other cases, how much suppression of free conversation is caused by this eavesdropping, and how many people will be falsely harassed or imprisoned, or have their equipment or books or company impounded, due to something they said being taken entirely out of context.  In order to avoid saying things that could be "taken wrong" by a federal agent with no sense of context or humor, everyone would have to continuously remember that there is no privacy, that someone who is an idiot is listening, and that this someone, if offended, can spring up and take action without a reason being given or even available after the fact, due to "security concerns."

Free conversations between people and "unconsidered thinking and words" that will vanish from our minds tomorrow will suddenly be saved for all time out of context.

My last post was on the tremendous impact on our health of having and keeping friends.  A substantial part of such friendships is a high volume of daily communication carried out with one's guard down, because one is in the company of a good friend who won't take offense, who knows our humor, who can simply ask if they are not sure whether we are serious, or who doesn't need to ask.

If all privacy on the internet social media is removed, and blatantly removed, with a few examples made very public to make the point,  there will be a real, negative,  damaging effect on the role of the internet in sustaining the individual and collective health of the USA.

I submit that this negative impact on health, which will occur in EVERY case, is much LARGER than the benefit gained in the 0.01% of cases in which bad people are detected and deterred.

ONLY by the skewed math whereby a bureaucrat sees a 0.01% risk of damage-type-1 as being more important than a 99.99% risk of damage-type-2 is the idea of internet snooping and removal of the last channels for open discussion and dissent warranted.  (Damage-type-1 is the kind that the bureaucrat might be BLAMED for, and damage-type-2 is the kind that the bureaucrat gets zero credit for if it is avoided.)

By the same logic by which preventive medicine gets no respect, or by which my proposed fix for the "von" problem was rejected, I would expect that internet snooping by the government will in fact become a reality, causing substantial damage to the country that the effort is supposed to be protecting.

Only people who could say "We had to destroy the village to save it." would find comfort in that.

I am not opposed to there being "security measures".  I truly believe there are people out there who mean us harm, or who simply enjoy causing harm.  I acknowledge that if security is not tight, some of those people who might have been caught and deterred will not be caught and deterred, with subsequent damage.

The problem is a larger one of letting the "bureaucratic risk avoidance" equation dominate the conversation, instead of a rational risk avoidance calculation.

I have the feeling that the same kind of computation is involved in extending the wars the country is involved in far beyond any rational break-even point, well into the domain of not only diminishing returns, but actually causing the problems the effort is intended to prevent.  But no one can stop a war, once started, because of the bureaucratic equation -- to let it continue is a "risk free" process because "someone ELSE" made the decision.  To stop it entails a risk that something bad might get through the lowered net, which one could be tarred and feathered for.  So, despite the fact that EVERYONE thinks the war is bad, NO bureaucratic politician will vote to end it.  In the human-relations literature on the cognitive errors groups make, this is the "Abilene paradox" -- the impression of a consensus about something that actually has no consensus.

And the cost of it in lives and treasure on all sides is huge.

The key element of "social capital" is TRUST.  If trust is low, the "transaction cost" for any action rises.  When the activity affected runs into the trillions of dollars, even a small decrease in trust imposes an enormous cost.  We can easily imagine security around the US becoming tighter and tighter, while the impact is that individuals, scholars, visitors, conferences, and entire businesses decide to relocate somewhere else.  We will finally achieve a huge wall of protection around the dead residual core of a once great and thriving nation.  But the bureaucrats will feel good, because no one can "blame them" for it.

This is the intervention point that is such a challenge.  Yes, indeed, we MUST start blaming bureaucrats for the "opportunity costs" of their interventions and constraints on our lives.

To paraphrase John Gall's law of Systemantics:  "The component that will cause your system to fail is the component you just added to make your system fail-safe."

This was most wonderfully illustrated recently on campus, when computing on the Engineering quad was shut down for several days because the batteries in the "uninterruptible power supply" caught fire, taking the computing center down with them.

It was illustrated well in the crash of Comair Flight 5191, which I wrote about previously, brought about largely because the copilot was so busy going through the extensive safety checklist that he never had time to look out the window at where the plane was going.


Quoting Lewis Thomas (1974):
When you are confronted by any complex social system, such as an urban center or a hamster, with things about it that you're dissatisfied with and anxious to fix, you cannot just step in and set about fixing things with much hope of helping. This realization is one of the sore discouragements of our century.... You cannot meddle with one part of a complex system from the outside without the almost certain risk of setting off disastrous events that you hadn't counted on in other, remote parts. If you want to fix something you are first obligated to understand ... the whole system ... Intervening is a way of causing trouble.
In reality there are no side effects; there are just effects.  We need to do our sums correctly when thinking about security, and be sure we have included the costs of our interventions, not just the benefits, when we want to compare them.

Otherwise, at the end of the day, feeling proud that we have defended "our country and its values," we will be shocked and baffled that the country and values inside the box we've been defending are damaged beyond recognition "despite" our "best efforts" to protect them.

Oh yes, one more thing.  The POWER of Skype and Napster is in the fact that they BREAK the constraint that all communications have to go through a central point, and allow point-to-point communication.  To a computer science guy, this is the difference between a communication setup that keeps working at any scale and one that quickly bogs down just as some emergency occurs and everyone wants to talk to each other about what to do about it.

The peer-to-peer architecture is VASTLY more functional than the STAR (central point) architecture and the difference is "WORKABLE" versus "UNWORKABLE",  or scalable versus not-scalable.
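A toy calculation makes the difference vivid.  Assume N users and one message between every pair of them: in a star, every one of those messages must cross the hub, while peer-to-peer each node carries only its own conversations.

# Toy comparison of star vs. peer-to-peer load, assuming one message
# between every pair of N users.
def loads(n_users):
    pairs = n_users * (n_users - 1) // 2   # total pairwise conversations
    hub_load_star = pairs                  # the central point sees all of them
    per_node_p2p = n_users - 1             # each node talks only to its peers
    return hub_load_star, per_node_p2p

for n in (10, 1_000, 1_000_000):
    hub, per_node = loads(n)
    print(f"{n:>9} users: star hub carries {hub:,} messages; "
          f"a p2p node carries {per_node:,}")

The hub's load grows with the square of the number of users; a peer-to-peer node's load grows only linearly.  That is the whole difference between scalable and not-scalable.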

What the Feds are asking Skype and others to do is to UNDO all of the progress of the last 20 years in moving away from central bottlenecks to point-to-point systems, and FORCE all such systems to return to the model of sending everything through a central point, so that it is easier for the Feds to monitor it.  Again, lack of trust induces huge transaction costs.  The communication system, now mostly immune to crashing during extreme social events, will suddenly have a new property of becoming unusably SLOW precisely when businesses and individuals NEED to talk to each other.

Point-to-point traffic is the model the human brain uses for intelligence.  Bottleneck traffic is the model Washington uses for intelligence.  The latter is WAY more vulnerable to failure, abuse, and misuse, and WAY more expensive to maintain at any given level of reliability.

I do not think that making it HARDER for people to work together and talk to each other is the way out of the massive combination of social crises we find ourselves in today,  of which "terrorism" represents less than 1 percent.   We're doing a fine job of killing ourselves off without any action by terrorists at all.  It's THOSE threats that need to be on the table when we design a rational security policy for the nation.

The set of F-22 fighters that we proudly sent for the first time to Japan (Feb 24, 2007) had almost complete system failure as they crossed the International Date Line.  The systems themselves were OK, except for one, which had trouble arriving at a point "the day before it left" the previous point.  However, the broken system generated errors at the rate of 50 per second, and quickly filled the "error log".  In order for the F-22 to be "safe" on fly-by-wire, all components had to wait for the operational log (eavesdropping) to accept their latest step before proceeding.  Suddenly, nothing could proceed, because the eavesdropping log was full.  The F-22s survived only because the refueling tankers were nearby, and by hand signals the flight got the tankers to lead them back to Hawaii for repairs.  This is a perfect example -- the entire fighting force was defeated by the sincere efforts to be "totally extra safe" from "error".
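To be clear about the pattern -- and this is only a toy model of the failure mode described above, not the actual F-22 software -- imagine a fixed-size safety log that every subsystem must write to before it may take its next step, and one faulty subsystem that floods it:

# Toy model only: a fixed-size "safety log" that everything must write to
# before proceeding, flooded by a single misbehaving subsystem.
LOG_CAPACITY = 1_000
log = []

def log_and_proceed(entry):
    """Append to the safety log; refuse to proceed if the log is full."""
    if len(log) >= LOG_CAPACITY:
        return False               # blocked: nothing may proceed "unsafely"
    log.append(entry)
    return True

# The faulty subsystem spews errors far faster than anything else runs...
for i in range(1_500):
    log_and_proceed("NAV ERROR " + str(i))

# ...and now even a healthy subsystem cannot take its next step.
print(log_and_proceed("flight-control step"))   # False: everything is stuck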
Finally, of course, there is the security risk that some internal group of the government, or some very bright, well-funded group of bad people (read "organized crime"), may be absolutely delighted that all traffic now has to go through one place, because THEY only need to compromise a single component of the US infrastructure in order to eavesdrop on everything.  AND, of course, if they want to bring down the entire communication channel in an act of sabotage at the start of a true massive offensive against the US, they're equally delighted that we've taken a robust, scalable, nuclear-blast-resistant internet and wired the entire thing to go through a single point of failure, where they can attack.

These are very real costs.  They should be on the table.

What we need is the cojones to say that "Even if some bad thing occurs due to the lack of such a 'security measure' that we could have put in place, NOT doing it was, and CONTINUES TO BE, the correct response."

This is a really hard pill to swallow. These words cannot be uttered by a politician, who confuses "best strategy" with "error-free outcomes".

This is as hard to accept as rescinding a law requiring motorcyclists to wear helmets, and then being willing to let them die after an accident with a head injury instead of spending social resources to keep them alive.  Most of us are not used to decisions which are basically triage, where, REGARDLESS of what we do, someone will die -- the only question is which someone it will be.  These are questions that have no "great answer" at the back of the book.  In the USA, if a patient has a bad OUTCOME, they tend to sue the doctor, regardless of the fact that the doctor used the best known PROCESS to decide what to do.  The effect in the short run may be to win some cash, but in the long run it is to discourage good people from wanting to take up a career as a doctor.

We seem to thrive on "blame" in this society in this day and age, for some reason -- perhaps because it is so simplifying to take complex systems (and "hamsters", à la Lewis Thomas, above) and reduce them to a comic-book level of linear cause and effect, under the delusion that we have now "taken control" of this threat.
