Pages: pp. 7–8
In his From the Editors column, "Don't Bring a Knife to a Gunfight" (vol. 2, no. 2, p. 5), George Cybenko was quick (in a hurry, perhaps) to dismiss the graduate student's security efforts because the student lacked a bottom-up appreciation of the security threats. Cybenko is the professor, but he seems to demonstrate both a lack of top-down thought about the security problems and a disdain for such an approach. Sadly, this seems true of almost everyone who puts himself forward as a security expert.
I'm surely no security expert (although I think I could make a decent stab at answering the buffer-overflow exploit question), nor do I have the bottom-up knowledge Cybenko seems to find indispensable. I can follow an SMTP dialog pretty well—but I'm not skilled enough in SMTP to write a message transfer agent (MTA)—nor do I ever intend to become that expert.
Starting in 2000, I ran and advocated open-relay honeypots to defend against spam. The one I ran would surely make many people's blood boil: it sat on an obsolete VAXstation running an even more obsolete MTA. Nonetheless, I trapped spam when others did not. While filter users kept reworking and refining their filters to keep ahead of spammers, my simple honeypot (implemented using command files and system programs, by the way: no custom code) successfully trapped spam that used a variety of filter-busting techniques. Why did my approach work when the filters often failed? Largely because I did some analysis before I coded. Spammers are the only people in the world who seek open relays through which to send email (senders of ordinary, valid email just send it). If anyone sets up something that listens on port 25 and then delivers the messages that spammers send to test for an open relay, that something very likely will soon be receiving spam. The only traffic it receives is spam or spam-related, so no filter is needed: the spammer does the filtering. I didn't have to discriminate between spam and valid email; the spammers did that for me by sending only spam.
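The trap the letter describes can be sketched in a few dozen lines: listen on an SMTP port, speak just enough of the protocol that a spammer's relay test appears to succeed, log everything, and relay nothing. The original was built from VMS command files and system programs, not custom code, so the Python below is purely illustrative; the hostname, port, and log-file name are invented for the sketch (a real deployment would listen on port 25).

```python
import socket

def handle_session(conn, peer, log):
    """Minimal fake-open-relay SMTP dialog: accept everything, deliver nothing."""
    conn.sendall(b"220 mail.example.net ESMTP\r\n")  # hypothetical banner
    buf = b""
    in_data = False
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            return
        buf += chunk
        while b"\r\n" in buf:
            line, buf = buf.split(b"\r\n", 1)
            log.write(f"{peer[0]}: {line!r}\n")       # record the whole dialog
            if in_data:
                if line == b".":                      # end of message body
                    in_data = False
                    conn.sendall(b"250 Ok: queued\r\n")  # a lie: nothing is queued
                continue
            verb = line[:4].upper()
            if verb == b"DATA":
                in_data = True
                conn.sendall(b"354 End data with <CRLF>.<CRLF>\r\n")
            elif verb == b"QUIT":
                conn.sendall(b"221 Bye\r\n")
                return
            else:
                conn.sendall(b"250 Ok\r\n")           # accept HELO/MAIL/RCPT/anything

def run_honeypot(host="0.0.0.0", port=2525, logfile="trap.log"):
    # Port 2525 here only to avoid needing root in a demo; the real trap sits on 25.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    while True:
        conn, peer = srv.accept()
        with conn, open(logfile, "a") as log:
            handle_session(conn, peer, log)
```

Note the design point the letter makes: there is no filtering logic anywhere in the sketch. Anything that arrives was sent by someone probing for an open relay, so the sender has already classified the traffic.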
If the student Cybenko criticizes "brought a knife to a gunfight," then I submit that almost all who battle spam bring a blunderbuss to a machine-gun battle. Almost all who battle spam insist on doing battle at or after the destination server. The attitude is that spam becomes wrong or offensive only at that point; before that point, the operators of the vulnerable systems that spammers exploit are held equally guilty. (Often, the actual attitude appears to be that the systems' operators are guiltier than the spammers.) What's the root cause of their guilt? That they lack the depth of understanding supposedly needed for what they are doing—the same lack that Cybenko sees in the graduate student. No guilt or responsibility attaches to the vendor of the vulnerable software or to the operators of the networks that efficiently and unquestioningly transmit the abusive packets to the vulnerable systems—only the operator bears responsibility, and thus the guilt.
Is this an intelligent security model—one that puts all the responsibility on what is surely the weakest link, ordinary systems operators throughout the world? Before chastising the graduate student for his lack of knowledge, we ought to ask him what security model he's using. He's a graduate student—doubtless he needs some guidance tempering his enthusiasm. Shouldn't the initial guidance for those tackling any problem be that they should analyze the problem first? Finding specific weaknesses in the student's knowledge might have merit and lead to needed learning—but if he's not taught to do analysis first, he'll very likely become yet another in-step marcher in the parade of "security experts" who aren't actually solving the security problem but instead are blocking solutions. Show me the analysis that has been done on the spam problem—analysis that recognizes spammers' vulnerabilities at the intermediate level. So far, I see it only from a select few, and those few are as quickly dismissed as the graduate student. Being the expert isn't enough. Expertise must be coupled with analysis, with a willingness to consider ideas that might be only half-formulated. That's how new ideas frequently appear: imperfect. The graduate student must learn how to evaluate ideas, polish the ones that have merit, and recognize and discard those that do not. The student appears to have been attacking the problem at a higher, block-diagram level. He might indeed need to learn the details of the protocols and the exploits used, but does not knowing those details invalidate all he does a priori? If he's working at a higher level of abstraction, won't his work still be useful?
Ed note: In the May/June issue, Hong-Lok Li wrote to us about columnist Michael A. Caloyannides' recent piece, "Online Monitoring: Security or Social Control" (vol. 2, no. 1, pp. 81–83). Author Caloyannides responds:
In the case of Microsoft Outlook and Outlook Express, most of the virulent worms of the past few years have exploited the lists of correspondents stored in these two software packages to send out fraudulent, worm-laden emails ostensibly coming from those hapless users. This has caused a massive amount of worldwide disruption whose cessation transcends users' "convenience." In the case of Internet Explorer, the official denouncement of that software in early July by a respected source lends further weight to my admonition against using it, convenience or no convenience (www.cnn.com/2004/TECH/internet/07/02/alternative.browsers.ap/index.html).
The May/June issue contained an omission in the article "The Security and Privacy of Smart Vehicles" (vol. 2, no. 3, pp. 49–55). Levente Buttyan's name should have appeared in the Acknowledgments section. We apologize for any confusion this might have caused.