
Editor's Message: The Long March

Pages: 1, 4

As Albert Einstein wrote, "We cannot solve the problems that we have created with the same thinking that created them." We certainly have security and privacy problems today. How did we get here? How might we move forward?


Human history is a story of interdependency. From the earliest hunter-gatherers to agrarian society to multinational corporations, humans and their enterprises have become increasingly specialized. The side effect of specialization is dependency. You defend me; I'll feed you. You fix my roof; I'll shoe your horse. You manage my corporate merger; I'll perform your appendectomy. The complexity of technology has forced us to specialize. By the same token, the power of networking has enabled the complex social and economic interdependencies required to support that specialization.

Networking in this sense is virtual, not to be taken literally as in "Ethernet LAN." Governments, banks, utility companies, Internet service providers, and airlines implement this networking using legislation, money transfers, power grids, networking protocols, and airplanes. While interdependency is an old phenomenon, the networking technology that supports it has become remarkably faster and more powerful in the past century.

Today, communications, commerce, finance, power systems, transportation, and human services sustain society and its underlying global economy. We are so steeped in these interdependencies that the separation of network technology from human well-being, on a large scale, is no longer possible. Failures of these networks typically lead to immediate discomfort and imminent loss of life. To survive, our attitudes about and investments in the security of our network technologies need to become the same as they are toward our bodies. To paraphrase Marshall McLuhan, our technologies are extensions of ourselves (Understanding Media, MIT Press, 1994).


How should we extend our self-preservation instincts to our networked infrastructures?

It is tempting to believe that our main focus ought to be on building more secure and reliable systems in the first place. However, it is important to recognize that most engineering designs are based on assumptions, models, and paradigms that do not scale well or age gracefully. For example, civil engineering scholar Henry Petroski has observed that catastrophic bridge failures occur at regular intervals (To Engineer Is Human, Vintage Books, 1992).

Any given bridge design methodology is eventually pushed, by naiveté or error, beyond the validity of its modeling assumptions, literally to the breaking point—with failures occurring in the field, not on the drawing board. Similar cycles of paradigm failure seem to recur in electric power grids, financial marketplaces, transportation systems, and the environment. The point is that good design is not enough; design assumes a context, and that context will eventually morph into something we cannot predict.

Let me propose a new thinking based on the premise that our infrastructure systems will eventually be compromised or fail outright, regardless of our diligence in designing them. Whether due to malicious attacks or organic failures, our networks will always be vulnerable. If so, we should focus more effort on developing detection and control technologies that will intercept signs of failure modes early on, mitigate the effects, and respond aggressively with countermeasures, all on the time scale of the threats or failures themselves, which are increasingly measured in milliseconds. This kind of thinking naturally raises issues in diversity (heterogeneity of platforms and protocols), privacy (early detection requires comprehensive monitoring), and liability (shutting down critical applications and services when compromised).

Not by coincidence, these same issues arise in modern health care, a system that has evolved over thousands of years. Health care is a tripod, whose three legs are basic biomedical research (done by academic and industry researchers), the delivery subsystems (consisting of physicians, clinics, and hospitals), and the public health system (detecting epidemics and tracking trends in a population). Of these three components, the analog of the public health system for networked infrastructures is arguably the least developed. We have no effective counterparts to the Centers for Disease Control or state-based public health offices in the network technology domain—counterparts that can operate on the time scale of the attacks themselves, not the time scale of human analysts and software patches.

Regardless of our approaches, we must recognize that addressing the technical challenges of modern security and privacy will be a long march. There are no quick fixes, no silver bullets. Imagine, again by analogy with health care, that we increase funding for medical research tenfold over the next 10 years. Surely this would accelerate the discovery of new therapies for cancer, heart disease, and other illnesses, but few of us who survive that decade would expect to be immortal by the next.

New thinking that leads to long-term solutions in security and privacy won't be manifest in short-term hardware or software gizmos. No, the new thinking has to be a wholly different attitude about the role and importance of networked infrastructure in our lives. Only such thinking will lead to the long-term, sustainable institutions and investments in security and privacy that we deserve and that will ultimately make a difference.



George Cybenko is the Dorothy and Walter Gramm Professor of Engineering at Dartmouth College, NH. Contact him at