Can We Be Too Careful?
March/April 2012 (Vol. 10, No. 2) pp. 3-5
1540-7993/12/$31.00 © 2012 IEEE

Published by the IEEE Computer Society
Jeremy Epstein, Associate Editor in Chief
Everyone—even security specialists—falls somewhere on a paranoia continuum, from "always cautious about everything" to "never cautious about anything." The most sensible place to be is somewhere in the middle, and one of life's great challenges is finding the right place—and learning to be comfortable with it.
Computer security specialists and reliability experts tend to walk around under a dark cloud. We see all the things that can go wrong and fear that they will, bringing civilization as we know it to a halt.
The universe of all possible threats is massive, and many of us security types jump to conclusions whenever we see something that could be a security breach, assuming a technology failure or cyberattack. For instance, in a recent presentation, a computer science professor from a major university discussed a reported cyberattack on an Illinois water system. 1 The media linked the attack to Stuxnet, cyberwar, and indications of a "cyber Pearl Harbor." Except it wasn't a cyberattack—it was merely a water pump burnout combined with a water authority consultant accessing the logging system while on vacation in Russia a few months earlier. 2 The professor hadn't done his homework and was unaware that the early reports were incorrect. In addition to spreading inaccurate information, he also damaged his own professional credibility.
Are security specialists becoming the boy who cried wolf? The Risks forum (www.risks.org) publishes regular reports of system failures attributed to computers, whether the underlying causes are security or reliability problems. Although the risks are real, technology is sometimes wrongly blamed. And even where technology does introduce risks, it might carry benefits that justify them.
Risks vs. Benefits
Sometimes it's unclear whether a technology's risk or benefit is greater. For example, the jury is still out on whether backscatter radiation systems used in airports cause health risks. Do these systems—which are highly computerized—have information security risks that would allow an attacker to modify the radiation levels, thus increasing the harm to innocent passengers? Can an attacker modify the software to reduce the chance of detecting a real threat? On the privacy front, could an attacker release sensitive passenger images, as happened with some non-airport detection systems? 3 It's a safe bet that networked Windows or Linux systems constitute the underlying technology. Is there a process in place to ensure that those systems are patched regularly and that the configuration preventing images from being saved isn't modified? Are there processes to ensure that the systems aren't exposed to unpatched vulnerabilities and insider tampering? How do we balance those risks against the systems' goal, namely, to detect and stop terrorist attacks? As we were going to press, the US Department of Homeland Security released a report concluding that backscatter machines are safe (www.oig.dhs.gov/assets/Mgmt/2012/OIG_12-38_Feb12.pdf); unfortunately, the report is silent on information security issues. Like most frequent travelers, I'm no fan of airport security as currently implemented, but I have a hard time determining whether the scanners are a net positive or negative when considering both risks and benefits.
Voting is central to democracy, and elections have involved risk tradeoffs for thousands of years—trying to ensure that eligible voters can cast votes, but no more than one each, while weighing the risk of disenfranchising legitimate voters. In the US, most states require residents to register before the election and to update their home address when it changes. To reduce costs and increase convenience (and, hopefully, participation), many states are moving toward online registration and address changes. Will this increase risk? At first, it seems obvious that allowing online registration opens the door to fraud. However, existing paper-based mechanisms aren't significantly different: in either case, the voter's ID is checked against the registration at the polling place the first time he or she comes to vote. As long as fraudulent online registrations don't overwhelm the election office's ability to process them, the risk is probably no worse than that of paper registrations.
Similarly, letting voters change their registration address online would seem to be a greater security risk than paper address changes; again, electronic fraud scales more easily than paper fraud. However, relatively simple out-of-band verification can limit the risk, such as sending a postcard to both the old and new addresses notifying the voter of the change and providing a contact to call if the change wasn't requested. Supplementing such a system with manual verification can help detect large-scale fraud attempts. Overall, online address changes that incorporate appropriate safety checks appear to be a cost-effective way to improve voter convenience, despite some concerns about security. In this case, the benefits outweigh the risks.
The question gets fuzzier when it comes to Internet voting. On one hand, there's no doubt that online voting is risky: fraudsters can place malware on voters' computers to manipulate votes, attack the servers that hold votes, mount phishing attacks, and so on. But how do we compare that risk—for example, that every online vote could be compromised—with the risk that some voters will be disenfranchised because they can't get to the polls, whether because they're overseas, didn't have enough time to return an absentee ballot, or their ballot was lost in the mail? In addition, voters with disabilities who can't use traditional voting systems might be able to vote using assistive technology on their own computers. My view, and that of nearly all the information security and privacy people I've spoken with, is that the security risk is too high to move to Internet voting, but it's important to recognize that there are other considerations.
Accurately Attributing Failure
Risks might not be where we think they'll be. A recent study showed that, under a particular set of assumptions, the biggest risk to an accurate election wasn't voting-machine tampering but rather insider threats from election officials. Even when technology risks are high, nontechnology risks can be higher still. 4
And sometimes, even the experts don't agree. The 12 February Risks Digest includes two explanations of why a recent Russian Mars probe failed. 5 Reliability experts won't be surprised that Discovery News reported that "investigators concluded that the primary cause of the failure was 'a programming error which led to a simultaneous reboot of two working channels of an onboard computer,' [… and] accusations that U.S. radars were responsible for the failure proved false." 6 Meanwhile, New Scientist reported that the problem was due to radiation and that the Discovery report was based on an outdated draft. 7 And IEEE Spectrum says that neither is correct and that the failure was due to memory chips designed for military use rather than for space. 8
As I was preparing this column, an article with the provocative title "Ron Was Wrong, Whit Is Right" attracted a great deal of publicity (eprint.iacr.org/2012/064.pdf). The article, describing problems with RSA and PGP keys in use today, sent shock waves through industry and spurred headlines implying that Web commerce sites were at risk. However, further analysis showed that the problem affected mostly obscure sites using self-signed certificates and that commercial sites faced no risk. 9 Although the research points out legitimate technical issues with how random numbers are used to generate cryptographic keys, we need to avoid causing unnecessary (and inappropriate) panic.
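To make the underlying issue concrete, here's a minimal sketch of the weakness the paper highlights (my own illustration in Python, not the authors' code): if weak random-number generation causes two RSA moduli to share a prime factor, a single greatest-common-divisor computation factors both keys, with no expensive factoring algorithm needed. The researchers used far more efficient techniques to scan millions of keys; the simple pairwise version below only conveys the idea.

# Illustrative sketch only: detecting RSA moduli that share a prime factor
# because of weak random-number generation. If n1 = p*q1 and n2 = p*q2 share
# the prime p, then gcd(n1, n2) = p, and both moduli can be factored at once.
from math import gcd

def find_shared_factors(moduli):
    """Return (i, j, p) triples where moduli[i] and moduli[j] share a prime p."""
    hits = []
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            p = gcd(moduli[i], moduli[j])
            if 1 < p < moduli[i]:  # nontrivial common factor: both keys are broken
                hits.append((i, j, p))
    return hits

# Toy demonstration with tiny "keys" that reuse the prime 101:
toy_moduli = [101 * 113, 101 * 127, 131 * 137]
for i, j, p in find_shared_factors(toy_moduli):
    print("moduli %d and %d share the factor %d" % (i, j, p))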
The Human Interface
Offering users security choices is sometimes a good thing; other times, it backfires. For example, a Washington Post article described how a US immigration agent was killed in Mexico despite being in a highly sophisticated armored vehicle. 10 When the agent stopped the car and shifted into park, the doors automatically unlocked, allowing criminals to abduct him. Risks forum discussions revealed that unlocking the doors in park is one of several configurable options on the civilian versions of these vehicles, and that perhaps the problem was that the setting hadn't been changed from the default. Or perhaps the driver, without analyzing the security implications, changed the setting himself. Blaming users for desktop computers' security failures isn't a good idea, but does it make sense here if, in fact, the driver could have configured the auto-unlock feature?
Avoiding Paralysis
What can we learn from these scenarios? Computers certainly cause security and reliability failures, but not all security and reliability failures are due to computers. Sometimes it's people, processes, physical phenomena, or poorly understood human interfaces. Analyzing complete systems is important, and it's worth looking up every so often to verify that the sky hasn't fallen.
Instead of just fretting, when we look at a system, we should consider several factors:

    • What are we protecting? How much is it worth? In some of the earlier examples, designers didn't consider what was most important, or didn't anticipate atypical but catastrophic failure modes. Among others, Bruce Schneier has repeatedly asserted that airport security focuses entirely on the wrong areas, looking for the threats we've already experienced rather than assuming an innovative adversary.

    • What is the threat we're concerned about? Is it a natural phenomenon, an attack, or an accident? In the case of voting systems, are we worrying about wholesale theft of a presidential election or retail theft of a local precinct? Were the Russian spacecraft designers considering the threat of nuclear radiation, software malfunction, or both?

    • What are the costs of the protection? Are we throwing the baby out with the bathwater? This is where not only privacy but also convenience fit in.

As I discussed at the beginning of this article, security is a continuum. We make decisions about security for almost every aspect of our lives. Finding the balance in software means not jumping to the conclusion that all security problems are caused by software, or that no risk can be tolerated.
I'm trying hard not to assume that when something could have failed because of software security or reliability, the software was in fact the cause of failure. After all, the dark cloud that is security in our world might just be a passing shadow.

References