People are always finding things Johnny can't do. A Google search on "why Johnny can't" finds that he can't read, program, decode, encrypt, walk to school, save for retirement, or blog, among other things. I propose that he can't evaluate security and privacy risks either.
To be sure, risk analysis and risk management are mature fields; there's no shortage of expertise or documentation on risk and how to manage it in specific circumstances. So what exactly is Johnny's problem?
Let me briefly define what I mean by risk. Suppose you're in a situation with a variety of outcomes; for each outcome, you stand to gain or lose something. Your "risk" in a situation is your expected loss. Risk becomes interesting when you have a variety of strategies, each of which creates a different situation. To implement the simplest approach to risk management, you must know the probabilities and costs of various outcomes. This is where Johnny's problems begin.
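The "simplest approach" above amounts to computing, for each strategy, the probability-weighted sum of losses and preferring the strategy with the lowest expected loss. A minimal sketch, with invented probabilities and dollar figures purely for illustration (none of these numbers come from the article):

```python
# Expected-loss comparison of risk-management strategies.
# All strategy names, probabilities, and loss amounts below are
# hypothetical, chosen only to illustrate the arithmetic.

def expected_loss(outcomes):
    """Risk of a strategy: sum of probability * loss over its outcomes."""
    return sum(p * loss for p, loss in outcomes)

# Each strategy creates a different situation, i.e. a different
# set of (probability, loss) outcomes.
strategies = {
    "do_nothing":   [(0.10, 500_000), (0.90, 0)],        # 10% chance of a breach
    "buy_defense":  [(0.02, 500_000), (0.98, 20_000)],   # lower odds, fixed cost
}

risks = {name: expected_loss(o) for name, o in strategies.items()}
best = min(risks, key=risks.get)
```

The arithmetic is trivial; Johnny's problem, as the article argues, is that the probabilities feeding into it are exactly what he cannot obtain.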
Probabilities are obtained empirically by observing the outcomes' frequencies. This is the first challenge. Many of the threats we're dealing with are so rare that they might not even have occurred yet. For example, a catastrophic attack on our information system is a real possibility, but what's the quantified probability of it occurring, either now or under a future re-implementation? We might be able to estimate the loss resulting from an attack on, say, a global financial system, and by doing so, better estimate the cost of protecting it (similar to an insurance premium), but we can't make an informed decision about whether to invest in increased protection without the underlying probabilities.
Another problem Johnny has is "Internet time." Even if catastrophic attacks occur with enough frequency to make meaningful estimates of probabilities, there remains the issue of quickly evolving vulnerabilities and threats. In the life insurance business, life expectancies and lifestyles are always changing, albeit gradually; thus, we can model general trends. This isn't true in the software security and privacy domains, in which new products, versions, and service packs arrive in discontinuous bursts and are, therefore, hard to predict. In our business, the past doesn't necessarily anticipate the future.
Challenges to evaluating risk are pervasive today. On the consumer level, we have a hard time deciding what to do to prevent identity theft and privacy threats. On the industry level, companies are struggling to determine which investments to make in computer security. On the national level, countries are struggling to determine not only which investments to make in homeland security, but also how to balance civil liberties and personal privacy with increased security. In each situation, the problem is the same: the frequencies, severities, and costs of incidents are difficult to estimate with the presently available information.
So what can Johnny do?
To start, we need more data about vulnerabilities and threats, especially about outliers, rare events, and exposures to catastrophic attacks. We must build up actuarial-class databases and, through extrapolation, make meaningful estimates. So much of what is done today falls at one extreme or the other: either burying our heads in the sand or responding to fear-mongering tactics. This observation isn't new, but in the past, organizations have had severe disincentives to share such data.
Second, we must develop a new science for estimating, extrapolating, and inferring the probabilities and losses associated with security and privacy outcomes that we might not have seen yet. To make things even harder, we must do this for highly complex systems that contain many interacting components. This is a hard problem that the aerospace industry has partially tackled. We need to learn from them and go much further. This will require new ideas and approaches because we've never had to deal with systems as complex, as quick to evolve, or as distributed as the global information network.