Silver Bullet Talks with Matt Bishop
NOVEMBER/DECEMBER 2008 (Vol. 6, No. 6) pp. 6-10
1540-7993/08/$31.00 © 2008 IEEE

Gary McGraw, Cigital
Matt Bishop is a professor of computer science at the University of California, Davis. He earned his PhD from Purdue University, where he studied computer security. His interests include secure design, vulnerability analysis, and other aspects of security. Bishop has also made significant inroads on the commercial side of security, lecturing for the SANS Institute and focusing much of his writing on security education. He is the author of Computer Security: Art and Science (Addison-Wesley, 2002), an important textbook in the field.
Gary McGraw: You're one of the only professors of computer science with a PhD that's actually in security. Was Eugene Spafford your advisor?
Matt Bishop: No, Spaf wasn't. I should say the degree is in computer science, but the area I studied was computer security.
McGraw: That's pretty impressive because a lot of computer security people don't really study computer security in school, which is good and bad. What got you interested in computer security?
Bishop: I was studying applied mathematics at Berkeley, and I became very much interested in privacy and the law. I thought about going to law school for a little bit, but decided my talents weren't really in arguing cases. I looked for an area where law, mathematics, and privacy converged, and that led me to computer security. I wrote to several places to see if they had programs in it or if there were people who taught it and wound up going to Purdue in 1979.
McGraw: You anticipated the important intersection of privacy, computers, and the law that everybody else is getting huffy-puffy about.
Bishop: Well, what was so interesting is that most of the people I met who were in security were there to protect computer systems in national security matters. For a little bit, I felt that people weren't paying that much attention to privacy. Then I learned about cryptography, and how people [in that field] were paying a lot of attention to privacy.
Pretty soon, people began looking at privacy from a non-cryptographic standpoint, where databases were merging and so forth. Now I'm absolutely thrilled that it's probably—if not the dominant topic—one of the dominant topics of the field.
McGraw: It absolutely is, and it's something that we still have a lot of work to do in, unfortunately. One area of study very important to you is security education. Can you explain how you cover black hat activities—breaking systems—and white hat activities—building secure systems—together in a solid program?
Bishop: Well, two ways. The first way is to help people understand why you want to use the white hat measures, as you call them, to protect systems. They have to understand what nasty people can do, so you teach them what the black hats, as they're called, can do to systems and then how to defend the systems. That basically gives them motivation to learn how to defend.
What I've also found is that if you simply teach people how to defend, they don't really understand how attackers work. They often think of defense as sort of a cookbook. As a result, good attackers will look at a system, see what the people who are protecting it expect, and then do something entirely unexpected; in this way, they can often get around the system's defenses. When you teach how to defend, you also have to teach how people attack, so that the defenders can adapt very rapidly to attacks and, in some cases, detect them. When I do this, I also provide a very healthy dose of ethics. For example, in my undergraduate classes, one of the projects I run is to attack a server on an isolated network. We protect the network, and the students have to log in to the network before they can hack the system.
In addition to that, we go through what the ethical rules are—what's good, what's bad, what's allowed, what's not. Every class, I spend the first five minutes working through a puzzle with the students, asking them, "How does this apply to computer security? What should you do in this situation?"—that sort of thing. A good half of the puzzles I give tend to be on ethics.
McGraw: Very nice. There's another aspect of it that I'm interested in hearing your opinion about. It seems like breaking systems is somehow sexier and more appealing to base human nature than building things properly. Do you see that, too?
Bishop: I'm not sure breaking systems is sexier, as you put it, than building something right. On the other hand, it is definitely a lot more fun for most people. I'm not quite sure why, but I think appealing to the base instincts is certainly one of the reasons. If you do this in a controlled way, it's a nice way to be a crook without being a crook, so to speak.
The other thing is, it's rather like a puzzle. It's trying to figure out what the other person is thinking and how to get around them. People who like games—chess games, board games—things like that, often really enjoy the challenge of trying to figure out where the problems are. This is a good way to do that.
McGraw: You've also been very active in software security. Tell us a bit about this new scheme you're helping to spearhead to work security analysis and secure coding into the wider computer science curriculum for students.
Bishop: This grew out of something that I was working on a couple of years ago. One of the problems we have in the field is that code isn't really being written well. By well, I mean programs are typically nonrobust and there are often security problems with them. One approach is to inject security into every single class.
The number one problem is that the classes are already overloaded in terms of curriculum. Number two, it would require all faculty members to learn about security, and for many people, security simply is not interesting. It's not what they do well. It's not where their talents lie, and so, personally, I would consider that quite unreasonable.
The other approach is to require an extra class in software security that everyone must take. Again, the curriculum is already so loaded in the computer science community that it's not clear that's going to work.
My parents are both literary people. My mom's a literary agent. My dad was a writer. I looked to the literary community because we all express ourselves using English, at least in the United States. One of the things colleges put a lot of emphasis on is writing well. However, if you look at the English curriculum, there's very little on actual writing. There's a lot of practice, but there's very little on how you actually do it. Same if you go to law school. There's a lot of writing, but there may be one short class in legal writing and that's it. The way they imbue people with writing well is they have clinics, where you can take your essay or your brief or whatever and run it by the clinicians. They won't judge the content, but what they will do is see whether it's structured well, whether you made your points. They'll point out grammatical errors. They'll help you figure out how to construct a much stronger argument, that sort of thing.
We don't have anything like that in the computer science curriculum. When we learn to program, we learn all of the basic rules of robustness: do error checking, check your array bounds, stuff like that. When you go beyond that, the emphasis is on getting the algorithm right, getting the program right, getting the results right, and not so much on "Did you construct this well?" One of my grad students was the TA [teaching assistant] for my operating systems class, and in that class we had students modify the MINIX kernel. As he graded the first assignment for all the groups, he said, "You know, this code works. [But] it's horribly written. It's not well done. If I see this again, I'm going to take 20 points off out of 100." So the students came and complained to me. What I said was, "Absolutely right. I agree the TA should not have said 20 points. I would have said 40, but we'll go with 20."
On the next assignment, the quality improved dramatically, and he didn't have to take anything off. This led me to think about combining that approach with the idea of a programming clinic, where people could come and have someone look through their programs, not to check whether they produced the right results for the assignment, but to ask, "Is there a way I can easily crash this program? Is there a way I can breach security, if security is involved?" and so forth, and give the students feedback that might improve the situation. A couple of years ago, our department applied for a capacity-building grant from the NSA [US National Security Agency] to run this. The funding was very small, but we were able to run the clinic for two classes.
In one class, the quality of the assignments improved markedly. In the other class, which was an introductory class, the same thing happened. It seemed to be a reasonable approach, and it's worth trying. We need more metrics to determine how effective this is and under what conditions this will work. A lot of research has to be done. On the other hand, when we've tried it, it worked spectacularly.
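[Ed. note: A minimal sketch in C of the "basic rules of robustness" Bishop mentions (error checking and bounds checking). The function names and buffer sizes are illustrative, not taken from any actual course assignment.]

#include <stdio.h>
#include <string.h>

/* Non-robust: no argument or bounds checking; overflows dst
   whenever src is longer than the destination buffer. */
void fragile_copy(char *dst, const char *src) {
    strcpy(dst, src);
}

/* Robust: validate arguments, bound the copy, report failure. */
int robust_copy(char *dst, size_t dst_len, const char *src) {
    if (dst == NULL || src == NULL || dst_len == 0)
        return -1;                      /* reject bad arguments */
    if (strlen(src) >= dst_len)
        return -1;                      /* would overflow: refuse */
    memcpy(dst, src, strlen(src) + 1);  /* bounded copy, including the NUL */
    return 0;
}

int main(void) {
    char buf[16];
    if (robust_copy(buf, sizeof buf, "hello, clinic") != 0) {
        fprintf(stderr, "copy failed\n");
        return 1;
    }
    puts(buf);
    return 0;
}

A clinician reviewing fragile_copy would flag the missing bounds check even if the program "worked" on the assignment's test inputs.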
McGraw: There are other disciplines that do that sort of thing as well. I'm thinking of architecture, for example, in juries and the way that you learn how to design buildings.
If you think about it, pair programming and some of the agile methods and maybe even Fagan reviews have some amount of that in there, too.
Bishop: It's a natural for companies that want to get involved with universities but don't have a lot of money to pay for things. What they might be able to do is have people who are good at this sort of thing within the company work at the clinic for an hour a week or something like that.
McGraw: That's a good recruiting maneuver.
Bishop: It's an extremely good recruiting maneuver, and it would also impress upon the students, "Look, this stuff isn't just academic material that you can forget when you leave the ivory tower. It's stuff you'll actually be expected to apply in the real world at a real job." I've found that this has an amazing effect on students.
McGraw: Let's get back to the ivory tower. As a scientist, you know the importance of literature and knowing about previous work. You put together a very cool collection of seminal papers in computer security. What are the top two forgotten papers, assuming we all remember Saltzer and Schroeder ["The Protection of Information in Computer Systems," Proceedings of the IEEE, 1975]?
Bishop: That would have been number one.
The first one is the Ware report [www.rand.org/pubs/reports/R609-1/R609.1.html], which basically set the field in motion: even though the technology it talks about is dated now, it lays out the problems in a way that's quite easy to understand. The second one is the Bell-LaPadula paper [www.albany.edu/acc/courses/ia/classics/belllapadula1.pdf]. Pretty much everyone learns the lattice version of the model, which is a very nice exposition of it, but the original paper has a lot of very neat insights. The methodology they use to build the model is quite fascinating.
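[Ed. note: In the lattice version Bishop refers to, every subject s and object o is assigned a security level L. The model's two core rules are the simple security property, which says s may read o only if L(s) dominates L(o) ("no read up"), and the *-property, which says s may write o only if L(o) dominates L(s) ("no write down").]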
McGraw: Together with Mike Dilger, you did some very important work on time-of-check-to-time-of-use [TOCTOU], even building a tool to scan for TOCTOU problems. Time and state issues seem to be more and more important in computer security, especially when trust boundaries are crossed. What can we do to make progress against these very complex time-related, trust boundary-related issues given our headfirst dive into Web 2.0 and Ajax and all these things?
Bishop: You like the easy questions, don't you? To be honest, at this point I'm not sure. The problem is that we have to come up with ways to understand the temporal ordering of things before we can start correcting problems. We have to understand how the order can be perturbed. With TOCTOU, it's very easy because there are really only two things involved.
McGraw: Your critical section is clear.
Bishop: Exactly. But the problem with doing this sort of work over the Web is that the critical section is only clear when you adopt one point of view (the server's point of view or the client's point of view, say) and study it from that perspective. Occasionally, you may think you've corrected the problem at one end when you haven't corrected it at the other. Basically, it's a distributed programming problem, and calling those hairy is putting it mildly, particularly from the security point of view.
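[Ed. note: The "two things" in a classic TOCTOU flaw are a check and a use of the same resource. A minimal C sketch of the well-known Unix access()/open() race, the kind of binding flaw Bishop and Dilger's tool scans for, follows; the file path is hypothetical.]

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/report.txt";  /* hypothetical path */

    /* Check: does the real user ID have write permission? */
    if (access(path, W_OK) == 0) {
        /* Race window: between the check above and the open below,
           an attacker who wins the race can replace the file with a
           symlink to a file the victim may write but the attacker
           may not, such as a system configuration file. */
        int fd = open(path, O_WRONLY);  /* Use: opens whatever path names now */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, "data\n", 5) < 0)
            perror("write");
        close(fd);
    }
    return 0;
}

The fix is to make the check and the use refer atomically to the same object, for example by dropping privileges and letting open() itself perform the access check.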
McGraw: Yes. For me that is a class, a category of problems that's just going to take on more importance as these massively distributed systems come into play. You can already see this in the online gaming stuff that I've been screwing around with lately. I just think it's more important for people to focus some attention on that, getting past the usual bug parade and focusing some attention on time and state.
Bishop: Well, one thing we need to do is go back to focusing on the deep underlying problems. A lot of the work that I'm seeing is absolutely brilliant without question. It tends to bubble up and look at the problems that we know about. What I'm beginning to see more of and what I really hope we continue to push on very hard is that those problems are symptoms of much deeper problems.
We seem not to be paying as much attention to the deeper problems of, as you pointed out, time and so forth, and instead going for their manifestations like TOCTOU, race conditions, and things like that. On individual systems, the distinction is probably not that severe. On distributed systems or network-based systems, it's critical. We have to understand more of those underpinnings before we can really start solving the problems that are higher up. Currently, we're doing Band-Aids, but what we need is major surgery.
McGraw: I agree with you. As you know from our work together on the Fortify technical advisory board, I'm pretty optimistic about the progress we appear to be making in software security. Do you feel the same way?
Bishop: I think we're definitely making progress. I think there's a long way to go, but when I look at the world as it is now compared to 10 years ago, my hope is beginning to be restored. Let's put it that way.
McGraw: That's good. What role should training play in large-scale software security initiatives?
Bishop: Training is a strange beast because typically you train for a particular system or in a particular environment or in a particular language. It's critical that people who are working there understand the threats, understand the limitations of their tools and of what they do. I would call training extremely important.
Equally important is education, where you focus on what underlies the training: the principles, the concepts, and so forth. Once you've got that, then you train for individual systems. What you'll find is that the education will take longer, but the training will be fairly quick. When you move to a new environment or to a new system, the retraining will be much quicker because you'll be able to relate what you're seeing to the underlying concepts.
McGraw: I'm glad you put it that way. You anticipated a question I was going to ask about certifying developers for software security—whether they're any good at software security—versus a computer science degree. I think you just said the answer.
Bishop: Yes. Both of them have their place. You simply don't want to confuse the two. They're very different.
McGraw: Current security hotshots like Jeremiah Grossman at WhiteHat Security and RSnake [Robert Hansen, CEO of SecTheory] bemoan the idea of trying to secure the Web browser as something that's pretty much impossible. They say, "It's so screwed up, we're never going to be able to fix it." Some of us have been moaning about this since 1995. What can we do to address the "browser as a security disaster" problem? [Ed. note: See Grossman's column in the Attack Trends department, p. 79. He will be an upcoming Silver Bullet guest.]
Bishop: Well, for current browsers, we can keep putting Band-Aids on them. Beyond that, my suspicion is the browsers will need a fundamental rewrite because we need to build assurance into these things, and the browsers were typically not designed with much security in mind. Some of them were designed with sandboxes for Java and so forth, but the security threats are very different now. I think, unfortunately, the only way to improve browsers drastically is to build new ones.
McGraw: A total reset.
Bishop: Do a reset and build them using—I'm not sure I'd call them high assurance techniques—but maybe medium assurance techniques.
McGraw: Right. The question is whether that's a Control-Alt-Delete approach or an L1-A-b approach. What do you think?
Bishop: I just call it an, "Okay, we built prototypes. We've learned from them. Now let's go do it right" approach.
McGraw: Switching gears, how many feet do you have in your menagerie, not including the children? Count up the feet of all of your beasts.
Bishop: Not counting the children, myself, or my wife, there are 26.
McGraw: That's more than most people have.
Bishop: Bear in mind some of them only have two. Oh no, actually—oops, I take it back. I'm sorry. 32 feet.
McGraw: You didn't count the chickens?
Bishop: Well, no. I thought all the chickens had one leg. I counted the number of chickens instead of the feet.
Conclusion
You can find additional podcasts in the series, including those featuring Bill Cheswick, Adam Shostack, Mikko Hyppönen, and Ross Anderson, at www.computer.org/security/podcasts/ or www.cigital.com/silverbullet/.
Gary McGraw is Cigital's chief technology officer. His real-world experience is grounded in years of consulting with major corporations and software producers. McGraw is the author of Exploiting Online Games (Addison-Wesley, 2007), Software Security: Building Security In (Addison-Wesley, 2006), Exploiting Software (Addison-Wesley, 2004), Building Secure Software (Addison-Wesley, 2001), and five other books. McGraw has a BA in philosophy from the University of Virginia and a dual PhD in computer science and cognitive science from Indiana University. Contact him at gem@cigital.com.