Just as the May/June 2005 issue of IEEE Security & Privacy was being mailed to readers, the US House of Representatives' Committee on Science held a hearing on the status of US computer science research funding. The hearing included a discussion of the President's Information Technology Advisory Committee (PITAC) cybersecurity report, which we covered in some detail (May/June 2005, pp. 6–11). Testimony at the hearing concerned trends in US investment in computer science research and development, especially how to balance basic university research with applied industry-based development, unclassified with classified research, and short- with long-term objectives.
Although the hearing was patently US-centric, our readership is not. I can't help but think that many of the issues facing US computer scientists are relevant across the globe. If any readers have any experiences they'd like to share, please drop us a line.
One particular exchange from the hearing stuck with me. Tony Tether, DARPA's director, responding to criticisms that DARPA wasn't investing enough funds in basic computer science research, challenged his critics to provide examples of specific problems, which, if solved, would lead to "fantastic" results—solutions that are supposedly stymied solely because there isn't enough research money. In essence, Tether was paraphrasing a line from the movie Jerry Maguire: show him the ideas.
This is a tricky challenge, since historically, it's been difficult to recognize good ideas, even when they're staring you right in the face. For example, Len Bosack and Sandy Lerner, the founders of Cisco Systems, pitched their idea to more than 75 venture capital firms in the mid-1980s before landing a single investment. They had a good idea, but few people at the time could recognize its impact or potential. At the other extreme, how many presumably great ideas garnered multimillion-dollar venture capital investments in the late 1990s, only to turn out to be not so great after all?
There is yet another model for funding research. Tim Berners-Lee and Marc Andreessen each developed separate components that were critical to the Internet's ubiquity and generality. They had great ideas and worked on funded projects, but their efforts weren't funded as standalone research projects. At the time of their seminal work, both Berners-Lee and Andreessen were bit players in large, complex mega-projects, namely the European Organization for Nuclear Research (CERN) and the National Center for Supercomputing Applications (NCSA), both of which had ambitious visions and goals of their own. In the late 1980s and early 1990s, their specific technical efforts likely couldn't have been funded as standalone research, much less have been considered legitimate.
In addition to ideas and funding, many externalities must fall into place before most ideas can be classified as great or not. The technology must be great not only in some imagined future world but in the future world that actually occurs.
So what does all this have to do with security and privacy? Tether's challenge applies to us, too—where are the great ideas in security and privacy research? Will they be recognized next week, next year, a decade from now? What are the externalities that create the future in which those ideas are indeed great and have impact? If people have good ideas now but believe that they aren't being acknowledged, is there a problem with how we communicate those ideas?
I can't help thinking that, as a community, we might be setting our sights too low, too often. We need to spend more time and energy articulating possible solutions to "the big problem" of networked security and privacy, not just the bite-sized pieces that fit into research proposals or business plans. I don't just mean a laundry list of topics. Solutions to "the big problem" are mega-project-sized systems, not unlike CERN or NCSA in their heydays. They have ambitious visions and outcomes, roadmaps with feasible waypoints, and enough scale and scope to affect at least some of the externalities.
Candidate solutions don't have to be perfect, but they must be realistic. As the saying goes, "In the land of the blind, the one-eyed man is king." Let's use these pages as a forum to generate some ideas.
This editorial has mentioned several recent developments and challenges. The following links provide additional background and content.
In Mehmet Sahinoglu's "Security Meter: A Practical Decision-Tree Model to Quantify Risk" (May/June, pp. 18–24), the vulnerability column in Table 1, row 3, should be v3 (a = 0.0, b = 0.2); p. 21 should show the final risk equal to 0.09575; and on p. 21, the Monte Carlo simulation produces 0.239377 versus an expected result of 0.239375. We regret the errors. — Eds.