Issue No. 03 - May-June (2012 vol. 10)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MSP.2012.77
Gary McGraw, Cigital
Giovanni Vigna is a professor of computer science at the University of California at Santa Barbara, where he works on Web security, vulnerability analysis, malware countermeasures, and intrusion detection. Vigna is also codirector of the UCSB Security Lab and a member of the Shellphish and Epic Fail hacker collectives. He cofounded Lastline (an antimalware company) and organizes the annual International Capture the Flag (iCTF) hacking competition.
After returning from my first-ever trip to Black Hat last year, I was surprised and somewhat disappointed by the lack of security engineering—plenty of breaking and lots of really cool hacks, not much building. But one of the highlights of DEF CON, which is closely associated with Black Hat, is its capture-the-flag (CTF) hacking contest. Can you explain how your team, Shellphish, won in 2005?
The basic idea is that all the teams participating receive the same virtualized server and the same vulnerable services. The game is to find those vulnerabilities and patch your version of the application, and then use your knowledge about the bug to break into everybody else's servers. I must say that our team has always been very, very good at attacking and developing exploits but not as good at developing defenses. Not because we don't know how to defend—it's that when you're there, and you're working, everyone wants to develop exploits and do reverse engineering. It takes a lot of discipline during a competition where you're trying to have fun to say, "Okay, I'm going to sit down and set up the firewall." Nobody wants to do that. Or, "Oh, let me trace this thing and make sure that it only executes the system calls that it's supposed to." It sounds like the person who has to clean up the room while everybody else is fighting with light sabers. You don't want to be that person.
You've established the largest hacking competition in the world, the International Capture the Flag contest. Tell us more about that and who participates.
It started in 2001 as a hacking competition in my grad class on vulnerability analysis. At DEF CON, you have 12 teams playing in one big room, and I wanted something that was a little easier so that students who had just gone through three months of security training could actually learn something. DEF CON is sort of hardcore. If you don't understand something, they say, "Well, that's your problem," whereas in the iCTF, it's more like, "Oh, let me help you with this. You need to set up a VPN link? Here's the person who can help you with that." The attitude is completely different, and the goal is not to prove that you're the best hacker but to really help you learn about security and how to attack and defend in a fun setting. It also lets teams connect remotely, so we can scale pretty easily. Our last competition had 78 teams, with 1,000 people playing for eight hours, and it was a great success.
I have to give a shout-out to this magazine [IEEE Security & Privacy], which also sponsors iCTF, right?
Absolutely, and it recently provided cash prizes for the first time. Before, the only thing the winners got was bragging rights for having won this competition, but now they also get a check. If I were a grad student, I would be like, "Hey, forget the bragging rights. Give me the money!"
Getting back to this idea of building versus breaking, I know you're talking about defensive mechanisms in capture-the-flag games, but then you talk about firewall rules and system calls. Is there an analog of the capture-the-flag contest for building security in, the notion being that you're supposed to build something that's going to be attacked? How do we teach security engineering in the same way?
I think they're very different situations. For example, there's a competition run by the military for military academies in which the focus is only on defense. The basic idea is that the teams have to keep an operation going while a tiger team attacks them, so they have to build something that has security and can deal with unexpected failures. It's a different type of competition—one that requires human interaction and detailed written feedback, such as, "Oh, these guys did well." It's also longer. We try to do something that's more like eight hours and automatically scored, so there's no human involvement, meaning you can scale up as much as you want. iCTF also has both a defense and an attack component. We try to change the design often—instead of reusing the same design over and over, as DEF CON has in the past few years, we try to come up with completely new designs.
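The automated scoring Vigna describes can be sketched in a few lines. This is a hypothetical illustration, not the actual iCTF scorebot: each round, a scorebot plants a fresh flag in every team's service and awards a point only if the service can later serve it back. Because no human judging is involved, the loop scales to as many teams as you like.

```python
import random
import string

def new_flag():
    """Generate a random flag token (this format is made up here)."""
    return "FLG_" + "".join(random.choices(string.ascii_uppercase, k=12))

def score_round(teams, set_flag, get_flag):
    """One automated round: plant a flag on each team's service, then
    read it back; a team scores only if its service is up and still
    serving the planted flag."""
    planted = {}
    for team in teams:
        planted[team] = new_flag()
        set_flag(team, planted[team])   # scorebot stores the flag
    return {team: int(get_flag(team) == planted[team]) for team in teams}

# Toy in-memory "services" standing in for the teams' real ones.
stores = {"shellphish": {}, "epicfail": {}}

def set_flag(team, flag):
    stores[team]["flag"] = flag

def get_flag(team):
    if team == "epicfail":
        return None                     # simulated outage this round
    return stores[team].get("flag")

print(score_round(["shellphish", "epicfail"], set_flag, get_flag))
```

In a real game the `set_flag`/`get_flag` callbacks would talk to each team's vulnerable service over the network, and a second loop would award attack points for flags stolen from opponents.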
I have an idea—IPv6.
Exactly—that would be a double disaster. Imagine running a competition for eight hours, and if you have one glitch, you're blocked for four of those hours. You can't risk certain things, but there are definitely situations in which that could be done. For example, look at the recent call for proposals for security systems to protect Android phones. The thing that I really liked in that call that I haven't seen in other places is its mention of a tiger team. You had to build security or the ability to detect security problems into your solution, and an external team would attack your system and its detection capability to verify that it works, so you have to think defensively. I like that because when you work with malware, you're going against a human opponent, and that person isn't just saying, "Oh, I'm going to forward you a lot of code." Instead, it's, "I'm going to throw at you the weirdest code you've ever seen." So it's really important to have that human in the loop there.
What's your favorite course to teach?
My grad course on vulnerability analysis is the one I love to teach the most, probably because it's always updated to the latest, greatest technologies and vulnerabilities and security tools. By upgrading it every year, I keep up with the new stuff. Five years ago, programmatic debugging wasn't as important, but now it's an integral part of exploit development.
I notice that you taught a course based on Douglas Hofstadter's Gödel, Escher, Bach book, too. He was my advisor in grad school, and I wrote one of his AI programs a million years ago. How do we attract the big thinkers—people who are well-rounded from a liberal arts perspective and can communicate—to our field, and not just geeks?
That's a difficult question that I think every computer science department has struggled with for the past 20 years. The trend is actually very positive. I look at my grad students, and they aren't necessarily—at least, for the most part—a bunch of closed-up, nerdy kids. They're actually very open and even artistic and creative in ways that aren't directly related to computer science. I don't know if it's a characteristic of people who love security. It's much easier to sell a class on, "Hey, I'm going to teach you how to break into websites," than "Hey, I'm going to teach you to validate the complexity of an algorithm." We definitely have an unfair advantage from that point of view.
Is the malware problem still growing, and if so, why?
It's definitely still growing because there's always money to be made, and there are always people who use the Internet in a naïve fashion. The basic problem is that as we harden certain platforms—for example, Windows 7 is much harder to break into than Windows XP—new platforms like Android phones allow attacks in a new context. A few reports came out recently showing that in the past six months, the number of Trojans and Android malware in general has exploded. It's a new world out there, and it's going to change continually because you'll get your next fubar gizmo and say, "Oh, I want Internet on my fubar gizmo." But I'll put a TCP/IP stack in it that was written in the '70s because I don't have time to do otherwise. And suddenly we start all over again.
There seems to be a close link between software vulnerability and malicious code. Do you agree?
Absolutely. One of my classic presentations covers malware-riding badware. Malware spreads by exploiting vulnerabilities in software. However, if all software were developed correctly, most malware would have to rely on social engineering. You can build all the protections you want, but if somebody decides, "I'm going to go buy a gun legally, put it in my mouth, and pull the trigger," you can't do much about it. It's terrible.
I coined this term "badness-ometer" to describe the way you have to be careful about certain kinds of black-box testing regimes. You wrote a great paper—"Why Johnny Can't Pentest"—that seems to come from the same philosophy. What role should black-box testing play in security?
I often look at black-box testing as a motivational exercise. It's sort of a hard and fast way to motivate people to understand that security is an issue. If somebody hires a company to perform black-box pentesting on a website and they find a bug, the only thing you can derive from that is that they found a bug. It tells you absolutely nothing about the stuff they didn't see.
We have two issues there to untangle. One of them is making sure you don't treat those things as security meters. That's pretty obvious—it's the whole badness-ometer idea. But the other point is exactly how much of the testing are you doing?
There's no real measure of coverage that you can use if you're really black box.
You can do code coverage analysis…
Yeah, but you're not black box anymore because you know what code you touched.
It turns out to be something like 10 or 12 percent if you use commercial tools.
Exactly, which is why it's a good way to start a discussion about security. You do black-box testing, you find problems, and you go to management with a request for a white-box analysis. Security engineers like you and other people in the community then sit down, look at the design, and try to understand what security tests need to be done to provide better assurance about the system's security properties. Of course, even with that, you'll never get to a point where you can say, "Yes, you're done."
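The coverage gap raised in this exchange is easy to make concrete by instrumenting the target. Below is a minimal, self-contained sketch (the toy `parse` service and all names are hypothetical) that uses Python's standard-library trace hook to count which lines a black-box-style input set actually reaches:

```python
import dis
import sys

def parse(cmd):
    """Toy service under test (hypothetical example target)."""
    if cmd.startswith("GET"):
        return "ok"
    if cmd.startswith("PUT"):
        return "stored"
    return "error"  # a path that GET-only inputs never reach

def line_coverage(func, inputs):
    """Run func on each input; return the fraction of its lines executed."""
    code = func.__code__
    hit = set()

    def tracer(frame, event, arg):
        # Record every 'line' event that occurs inside func's code object.
        if event == "line" and frame.f_code is code:
            hit.add(frame.f_lineno)
        return tracer

    prev = sys.gettrace()
    sys.settrace(tracer)
    try:
        for x in inputs:
            func(x)
    finally:
        sys.settrace(prev)  # restore whatever tracer was active before

    total = {ln for _, ln in dis.findlinestarts(code) if ln is not None}
    return len(hit & total) / len(total)

# A "black-box" suite that only ever sends GET requests touches far
# less of the code than a suite exercising every command type.
partial = line_coverage(parse, ["GET /", "GET /index"])
full = line_coverage(parse, ["GET /", "PUT x", "???"])
print(f"GET-only: {partial:.0%}, mixed: {full:.0%}")
```

The same idea, applied with real coverage tooling to a real code base, is what turns "the scanner found nothing" into the more honest observation that only a small fraction of the code was ever exercised.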
What's appealing about black-box testing is that you can do it consistently, and it's really cheap. And if the program fails, you know it's really terrible.
Exactly, and it's better than nothing. If somebody says, "I ran these 10 different black-box vulnerability analysis tools against my website and they didn't find a problem," I think it's slightly better than saying, "I know I'm secure. Here are the other 10 data points."
You have to be careful about how you construe the results.
Correct. There's nothing worse than saying, "We've been pentested by three high school kids, so we're secure now," because they didn't find any problems.
You grew up in Italy and moved to the US 14 years ago. Can you compare and contrast the attitudes toward security found in both cultures?
Italians are much less technology oriented and sophisticated when they use computers, but they're a lot more sophisticated when they use their cell phones—Europeans in general have a better handle on cell-phone technology. Americans are nice, so they're more trusting of people; Italy has a lot more petty crime than most places over here, in my opinion, so if somebody approaches you on the street over there, your first reaction is, "Who's this guy, and why is he talking to me?" A little bit of that translates into how people use the Internet. I have absolutely no empirical data to support this. It's just a gut feeling.
Rumor has it that your daughter, who is two, is learning how to pick locks. How good is she at it? Has she been working on just rake systems, or has she moved all the way to multiple cores?
No, she's doing simple stuff now, but she's an iPad killer. She can use it faster than I can, and it's amazing to see this new generation of toddlers using technology. I can't imagine how this generation will turn out. They're going to look at us typing on our keyboards, and they'll just laugh, like we laugh when we think about punch cards.
The Silver Bullet is cosponsored by Cigital and this magazine and is syndicated by SearchSecurity.
Gary McGraw is Cigital's chief technology officer. He's the author of Software Security: Building Security In (Addison-Wesley, 2006) and eight other books. McGraw has a BA in philosophy from the University of Virginia and a dual PhD in computer science and cognitive science from Indiana University. Contact him at firstname.lastname@example.org.