Issue No. 2, March-April 2013 (vol. 11), pp. 8-11
Published by the IEEE Computer Society
Gary McGraw, Cigital
ABSTRACT
Gary McGraw interviews Steve Bellovin, professor of computer science at Columbia University and CTO of the Federal Trade Commission. They discuss technology transfer, how code has gotten better but the threat model has changed, whether mobile security is just a repackaging of the same security problem, the very first days of Usenet, and the famed Evil Bit. Hear the full podcast at www.computer.org/silverbullet. Show links, notes, and an online discussion can be found at www.cigital.com/silverbullet.
You spent many years as a researcher at AT&T and then moved to academia. How often is academic research picked up and propagated in the world versus research at, say, a commercial lab?
Most companies have a great deal of trouble taking research ideas and turning them into products. That's always been a challenge at AT&T, and it's a classic challenge throughout the industry. In the academic world, you can toss something out there and hope it gets picked up, or you can create your own start-up. But a lot of government contracts, especially from DARPA or the DHS [US Department of Homeland Security], will ask, "What are your plans for commercializing this technology, for getting it out there in the field?" There are different ways you would approach it in academia versus in an industrial research lab, but you always have to think about it.
What's your experience with tech transfer over the years? Have you had a success story, a colossal failure, or anything interesting happen?
The worst failure was with something Mike Merritt and I invented more than 20 years ago called Encrypted Key Exchange. It was a way to bootstrap a secure network session using only a password as the shared secret. It was great technically, and AT&T patented it, but nobody managed to license the patent; Lucent got it when it was spun off. It would have been great if the companies had made it freely available for research use and made money from its commercial use, but it didn't happen that way. People invented their own versions of it to work around the patent, which is itself an interesting story about how patents can foster innovation, though perhaps not in the intended way.
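[The core trick in EKE is easy to sketch: each party derives a symmetric key from the shared password and uses it to encrypt an otherwise anonymous Diffie-Hellman exchange, so a passive attacker has nothing to mount an offline dictionary attack against. Below is a minimal toy sketch in Python; the group parameters, the hash-based "cipher," and the helper names are illustrative assumptions, not the published protocol.]

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters. A real deployment would use a
# standardized group (e.g., RFC 3526); this tiny prime only keeps
# the sketch short and is NOT secure.
P = 2**127 - 1
G = 3

def pw_stream(password: bytes, length: int) -> bytes:
    """Derive a keystream from the password (an illustrative stand-in
    for EKE's symmetric cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(password + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def pw_encrypt(password: bytes, value: int) -> bytes:
    data = value.to_bytes((P.bit_length() + 7) // 8, "big")
    return bytes(a ^ b for a, b in zip(data, pw_stream(password, len(data))))

def pw_decrypt(password: bytes, blob: bytes) -> int:
    return int.from_bytes(pw_encrypt(password, int.from_bytes(blob, "big")), "big")

# Both sides share only a (weak) password.
password = b"correct horse battery staple"

a = secrets.randbelow(P - 2) + 1              # Alice's ephemeral secret
blob_a = pw_encrypt(password, pow(G, a, P))   # she sends g^a encrypted under the password

b = secrets.randbelow(P - 2) + 1              # Bob's ephemeral secret
blob_b = pw_encrypt(password, pow(G, b, P))

# Each side decrypts the other's blob and finishes the Diffie-Hellman exchange.
key_a = pow(pw_decrypt(password, blob_b), a, P)
key_b = pow(pw_decrypt(password, blob_a), b, P)
assert key_a == key_b  # a strong session key bootstrapped from a weak secret
```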
Tell me about a success in tech transfer.
That would be the firewall book that Bill Cheswick and I did. Firewalls still would have happened (we didn't invent the concept), but a lot of things we put into that book had a large impact on the way the industry developed, for good and bad. We published that book in 1994: Windows 95 didn't exist, and Windows machines as a rule didn't have Internet capability; they had no TCP/IP stack unless you added one. It's a very different world today, and one of my mantras is that the biggest single mistake you can make is giving yesterday's answer to today's question.
Do you think we're making progress in computer security as a field? Are we in some sort of crazy hamster race, or is it cyclical? Are we moving forward?
Definitely maybe. I'm not trying to be noncommittal, but most security problems are due to buggy code, which all the crypto in the world won't fix. Back in '94, the purpose of a firewall was to keep the bad guys away from the bugs. But code has gotten a lot better. I looked at one of my machines the other day, a server, and it's been up for, at this point, about 530 days. Not all that many years ago, that simply wasn't imaginable.
Code has gotten better—that's the good news—but the bad news is that we're building much bigger and more complex systems than we ever used to. More complexity equates to more bugs and security holes, and the threat model has changed. Once upon a time, most hackers were "joy hackers," the stereotypical teenager living in a basement on pizza and soft drinks, no social life, hacking into systems for the sheer joy of it. Today, they're the lowest on my scale of risk; the scale now runs all the way up to major intelligence agencies, plus a tremendous amount of hacking for profit. Attackers have gotten a lot better as well.
What's your view on the right mix of network security and software security?
With a few exceptions, most of the things we call "network security" are nothing of the sort. The network is the highway to the vulnerable host. When I worry about DNS or routing security, that's part of the network infrastructure, but if there's a flaw in some JavaScript app or in a PDF document I received over email, the flaw would still exist if I received it on a floppy disk in the mail. It's a host security issue, but because of the connectivity, you're getting these things over the network now.
So to answer your question, we need to fix protocol problems, but we also need to fix the buggy software problems. The network can, though, be a convenient place to address host shortcomings. Take network printers, for example. They're typically intended for departmental use, a fairly small number of people on a single LAN, so there's rarely much need for somebody on the outside to get access. Printers don't have much in the way of authentication, and they're embedded systems with notoriously buggy software that never gets updated. So yeah, put an access control list on your router saying, "People on the outside can't talk to the printer; they have no reason to." That's a network response to a low-end host problem.
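[As a concrete illustration of that kind of rule, here's a minimal sketch in Python of the decision such a router would make. The subnet and printer addresses are made-up assumptions, using documentation prefixes.]

```python
from ipaddress import ip_address, ip_network

# Hypothetical addresses: the department LAN and its printer.
DEPARTMENT_LAN = ip_network("192.0.2.0/24")
PRINTER = ip_address("192.0.2.42")

def permit(src: str, dst: str) -> bool:
    """The ACL rule: hosts outside the department have no reason to reach the printer."""
    if ip_address(dst) == PRINTER and ip_address(src) not in DEPARTMENT_LAN:
        return False  # drop: external source, printer destination
    return True       # this sketch models only the printer rule

assert permit("192.0.2.7", "192.0.2.42")        # a colleague printing: allowed
assert not permit("203.0.113.9", "192.0.2.42")  # an outside host probing the printer: dropped
```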
Ultimately, the problems need to be solved on the host, and more and more, we're dealing with mobile hosts. When Cheswick and I wrote that firewalls book in '94, laptops were rare. These days, I have two smartphones, a tablet, and a laptop in my bag. The tablet, one of the phones, and the laptop will be exposed to public networks in about two hours.
Do you think that mobile security is any different from normal security, or is it the same problem in a slightly different package?
I think a lot of it is the same problem in a different package. Location is a big privacy issue, and you typically cede some control of it to your carrier or your device manufacturer. The other big issue is that your devices are much more likely to be used on very different, very exposed networks, precisely because they're so portable. This increases the risk factor; you're exposed to more things. But you can't live without a phone, and it's kind of hard not to have a smartphone if you're trying to stay in touch with the office. Whether it's a BlackBerry, an iPhone, or an Android, you need something if you're a business traveler.
From my perspective, which is pretty much grounded in the commercial sector, the government lags far behind the private sector when it comes to software security. How do you see the state of computer security in the government?
Some things it does very well; other things I'm not as thrilled with. The process tends to be very good, but that's security by checklist: "Install this antivirus. Run this software. Don't run that software. Centrally manage." The larger infrastructure questions are a lot harder to deal with: uncertainty, budget cycles, and procurement regulations make it more difficult to be agile and deal with these things proactively.
Again, I'm only seeing a small piece of one agency close up, so I can't really speak for the whole government, which is actually just a very large collection of agencies, each with its own policies. There are certain central standards like FISMA [Federal Information Security Management Act], but I'm not a big fan of checklists: they're a response to yesterday's problem. Take "pick strong passwords." Excuse me, but what is the threat model you're trying to defend against here? Pimply-faced hackers who've stolen the hashed password file? Strong passwords do nothing against keystroke loggers, phishing sites, subverted servers, and so on. Besides, that advice dates back to 1979, when you were dealing with terminals that weren't programmable, and the threat model has changed tremendously since then. Back then, a power user would have three passwords; I know by actual count that I have about 100 log-in passwords at this point. There's no way I can remember 100 strong passwords, especially when some of them come with a very convoluted set of rules requiring letters, digits, and special characters.
Tell me about the invention of Usenet and the ensuing flame wars.
It started in 1979, when Bell Labs was replacing sixth-edition Unix with the seventh edition. Jim Ellis and Tom Truscott were at Duke; they had helped UNC [University of North Carolina at Chapel Hill] get Unix up and running and had been running Unix longer than we had at Chapel Hill. They had a local application for administrative announcements and held a meeting to decide what to replace it with. They came up with the idea of a distributed system, intended for local administrative use but also for things you might want to share.
I guess you underestimated what people might want to share, huh?
Well, it gets better, Gary. The very first version of it—and I wrote it—had multiple newsgroups and cross-posting, but the original notion was that you'd have a lot of local groups plus one called "Net" that would go to other sites. It was a shell script, and we quickly realized it didn't quite work, so I modified it so that anything beginning with "Net" was shared and everything else was kept local. We never resolved the tension between who should see an article, in the sense of where it should be distributed, and who was actually interested in seeing it. This is why I didn't go into traffic forecasting: these were time-sharing computers, so I predicted 50 to 100 computers maximum and one to two articles a day, mostly in the area we'd now call the newsgroup comp.unix.wizards. I was off by many orders of magnitude.
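[That distribution rule is small enough to show. Here's a modern re-creation in Python, with hypothetical group names; the original was a shell script, not this code.]

```python
def is_shared(newsgroup: str) -> bool:
    """The early rule: anything beginning with "Net" goes to other sites;
    everything else stays on the local machine."""
    return newsgroup.lower().startswith("net")

assert is_shared("NET.unix-wizards")  # distributed to other sites
assert not is_shared("general")       # kept local
```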
When I started using Usenet, I had to switch to nn [a newsreader built for skimming high volumes] because Usenet was eating my life.
The first distributed version was written in C by Steve Daniel, because a shell script couldn't keep up with the load; shell scripts were really slow on a PDP-11/45. The reader just tracked a high-water mark: you'd read all articles up to a certain point, and you couldn't save one article and go back to it later while having everything else marked as "read." It evolved over the years to scale with the load.
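[The limitation is easy to see in a sketch (a modern re-creation, not Daniel's C code): the reader's entire per-group state was a single number, so "read" could only mean "everything up to here."]

```python
# High-water-mark state: one integer per newsgroup.
highest_read = 0

def read_through(article_id: int) -> None:
    global highest_read
    highest_read = max(highest_read, article_id)

def is_unread(article_id: int) -> bool:
    return article_id > highest_read

read_through(41)
assert is_unread(42)
assert not is_unread(37)  # article 37 can't be saved for later:
                          # marking through 41 already swallowed it
```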
I'll tell two other stories about it. One is the way it was originally spread. You didn't really have many autodialers then—they were expensive—so if you were using a PDP-11, you needed not just a serial port but a dialer device as well. DEC [Digital Equipment Corporation] called it a DN11, and it talked to an autodialer that you leased from the phone company, just as you leased the modem from the phone company. UNC and Duke both had acoustic-coupler modems, and we came up with two independent designs. I'll talk about mine: we used a control signal, data terminal ready, from the computer to the modem; raising it takes the modem off-hook. If you time it going up and down in software, you can do pulse dialing. So we had software-timed pulse dialing on a 300-baud dial-up modem. It was before the AT command set—it was a crazy hack, but it worked. Then we adapted it to the "high-speed," 1,200-bit-per-second modems we got a year or two later.
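[The same trick translates directly to a modern serial API. Here's a sketch using pyserial's DTR control; the port name and the exact make/break timings (roughly 10 pulses per second) are assumptions for illustration, not the 1979 code.]

```python
import time
import serial  # pyserial

def pulse_dial(port: serial.Serial, number: str,
               make: float = 0.039, brk: float = 0.061,
               interdigit: float = 0.7) -> None:
    """Dial by toggling DTR: digit d is d loop breaks (10 for '0') at
    roughly 10 pulses per second, timed in software."""
    for digit in number:
        pulses = 10 if digit == "0" else int(digit)
        for _ in range(pulses):
            port.dtr = False       # drop DTR: modem goes on-hook (the "break")
            time.sleep(brk)
            port.dtr = True        # raise DTR: off-hook again (the "make")
            time.sleep(make)
        time.sleep(interdigit)     # pause so the exchange can separate digits

# Usage sketch (port name is an assumption):
# with serial.Serial("/dev/ttyS0") as port:
#     port.dtr = True              # go off-hook, wait for dial tone
#     time.sleep(2)
#     pulse_dial(port, "5551212")
```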
We worried about control messages for managing this system. We wanted to use public key crypto—we'd read the famous Martin Gardner column in Scientific American and the Communications of the ACM paper, but we didn't know about certificates and didn't invent them ourselves, so we had no idea how to do key management. We had no idea what the engineering parameters should be, so we gave up on it. No one had ever heard of the export rules, and the patents hadn't been issued yet, so it's interesting to think what would have happened if, in 1980, we had started shipping software worldwide that legally used these algorithms before they were patented. It would have violated the export-control regime, so I might have had to have long talks with lawyers.
When exactly was the Evil Bit invented?
I had been using the line for years in talks: "How does the firewall know which packets to drop? It just looks for the evil bits!" I took a flight at about the right time to write the RFC [RFC 3514, published April 1, 2003], so I dashed it off mostly on the plane and sent it to the RFC editor—I think it was Bob Braden at the time—a couple of days later. It's still one of the most cited things I've ever written. I got all kinds of phone calls and emails; I actually have a webpage that lists some of the responses. The best one was from a security engineer at Microsoft who wanted to know whether it was serious or not.
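[For anyone who wants to play along: RFC 3514 assigns the evil bit to the previously reserved high-order bit of the IPv4 flags field. A tongue-in-cheek check in Python:]

```python
import struct

def is_evil(ipv4_header: bytes) -> bool:
    """Per RFC 3514, benign packets MUST clear the reserved high-order
    flag bit; evil packets MUST set it."""
    flags_frag, = struct.unpack_from("!H", ipv4_header, 6)  # header bytes 6-7
    return bool(flags_frag & 0x8000)

# The firewall's whole job, per the RFC: drop the packet iff the bit is set.
benign = bytes(20)                            # all-zero header: not evil
evil = benign[:6] + b"\x80\x00" + benign[8:]  # same header, evil bit set
assert not is_evil(benign)
assert is_evil(evil)
```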
Was your work with the Internet Architecture Board and the Internet Engineering Task Force rewarding? Was it fun, necessary, or some crazy mix?
It was a mix. Look, somebody's got to do these things. The IETF, at this point, is a large, ungainly organization, but someone's got to produce these standards, and it still does a better job than most. The problem I wanted to address was the lack of security attention in the protocols. I've said computer security is mostly a software security issue, but when there's a flaw in the protocol, it's a lot harder to fix because you have to change all the clients and all the servers, so you want to get it right. That was my goal on the IAB, and later on the IESG [Internet Engineering Steering Group], when I was security area director. The really big thing I accomplished on the IESG was to start the process of looking at security early in protocol design; that process continues and is much better today. The earlier you see the problem, the easier it is to fix. There was one RFC I abstained on: it represented several years of work on complicated protocols, and it just wasn't possible to fix at that point. An area director can vote "yes" or "no objection." I wasn't going to vote "yes" because I didn't like it; I wasn't going to say "no objection" because I did object to it. But I couldn't block it in good conscience, because that would have thrown away several years' work. So I detailed the issues as thoroughly as I could and then abstained.
In the new age of Stuxnet, how important is the attribution problem? Should we change things around on the Net to address that?
I don't think you can change things around to address the attribution problem, because so many of the attacks are coming from compromised hosts. Look at Stuxnet: it used stolen private keys for trusted certificates. To this day, nobody knows how those keys were compromised, but the answer is that somebody, either a suborned employee or an outsider who hacked into the system of somebody who held a trusted key, stole the key for his or her own purposes. Technically, the attribution at the protocol level points to JMicron in Taiwan, but that company had nothing to do with it.
One of your intellectual hobbies over the years involves public key crypto and nuclear weapons command and control systems, in particular, Permissive Action Links, called PALs, and nonrepudiation relative to National Security Action Memorandum 160. Tell us about that.
It started with a conference. I'll skip the details, but a retired cryptographer from the NSA [National Security Agency] said the basis for the NSA inventing public key crypto in the 1960s was National Security Action Memorandum 160, which Kennedy had signed. Matt Blaze got the Kennedy Library to sanitize, redact, and declassify the memo, but the released version said nothing particularly interesting, except for one sentence left intact in the middle of a mostly blacked-out paragraph that could be interpreted as hinting at nonrepudiation. If you ask that question the right way, it translates to digital signatures, and that could have been what prompted the NSA. Look, we know it took three civilian mathematicians two years to come up with digital signatures once the question was asked. How long would it take the NSA, with its thousands of mathematicians, once somebody asked the question in the right way? I found it plausible.
I did a lot of reading, a lot of research to try to figure out how these PALs worked, to see if we could shed more light on the subject. I have a lot of interesting speculations, and some declassified documents, but ultimately, my best guess is that the NSA invented it—it was digital signatures inspired by reading this one sentence in the right way.
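[For readers who haven't seen the object in question: a digital signature is something anyone can verify but only the key holder can produce, which is exactly what nonrepudiation requires. Here's a toy textbook-RSA sketch in Python (tiny, well-known primes; no padding; illustrative only, never for real use):]

```python
import hashlib

# Textbook RSA with small, well-known primes: illustrative only.
p, q = 104729, 1299709              # the 10,000th and 100,000th primes
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ modular inverse)

def sign(message: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)             # only the holder of d can compute this

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h  # anyone with the public (n, e) can check

order = b"release authorized"
assert verify(order, sign(order))
assert not verify(b"release NOT authorized", sign(order))
```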
What's more fun: woodworking or trains?
I'd have to say woodworking at the moment. I haven't had a lot of time to play with my trains lately. The problem is I can only do one intensely creative thing at a time, whether it's working on a book—and I'm working on a new one—or building furniture or a model railroad layout.
The Silver Bullet Podcast with Gary McGraw is cosponsored by Cigital and this magazine and is syndicated by SearchSecurity.
Gary McGraw is Cigital's chief technology officer. He's the author of Software Security: Building Security In (Addison-Wesley 2006) and eight other books. McGraw has a BA in philosophy from the University of Virginia and a dual PhD in computer science and cognitive science from Indiana University. Contact him at gem@cigital.com.