
Silver Bullet Talks with Neil Daswani

Gary McGraw, Cigital

Pages: 11-14

Neil Daswani is a manager in Twitter's revenue engineering team. He was formerly the CTO and cofounder of Dasient, an Internet security company that Twitter purchased in January 2012. In the past, he worked for DoCoMo USA Labs, Yodlee, Bellcore, and Google. Daswani cofounded the Stanford Center for Professional Development's Software Security Certification Program and coauthored Foundations of Security: What Every Programmer Needs to Know. He has a PhD in computer science from Stanford University. Gary McGraw interviewed him as part of his popular Silver Bullet podcast on 30 March 2011.

How has your background in software security and knowing how to talk to developers about security informed product development at Dasient?

In our product development at Dasient, we have a number of software development life cycle processes in place. Starting when product requirements are put together, we figure out the security risks, the threat model, and the mitigations we need to put in place, and then we carry that through the design and implementation stages and out to operations. But even when you build in the appropriate crypto protections and security features, all kinds of fun stuff will happen once it's in production.

One thing that we emphasize is that you can do as much as possible to prevent an attack, but the whole rationale for defense in depth is that it's simply impossible to prevent certain kinds of attacks. When they happen, it's important to have measures in place so that they can be detected, contained, and recovered from in an expedient, efficient fashion.

Is it a challenge to move at the speed you need to hit the window of opportunity and still get security done at the same time? Do you have to think about those tradeoffs, or do they come naturally if you just do it?

No, I absolutely think you need to think about the tradeoffs. If you're defending Fort Knox, you need Fort Knox defenses. But what's important as a security company is that you practice what you preach—put in good enough or a little bit better than good enough defenses so that you're defending yourself and your customers and their data. When we work with our customers, we also go through security reviews with them—it's important that they see all the appropriate defenses being put in place. One of the things that I've been glad to see is that larger companies that work with us spend more time getting to know what we do on security, and we get a chance to exchange best practices. It's good to see that progression in the field.

You spent several years at Google, and some people say Google's kind of like a startup in some ways. Contrast Google and Google's culture with actually doing a real startup.

I would say that Google is, in some ways, like a federation of startups. Different groups within Google are at different points in the startup life cycle. If you look at things in the search space at Google, they're much more mature than some new feature that Google rolled out as part of Google Apps.

An advantage at Google is that it has a lot of infrastructure. As particular startups within the federation of startups at Google progress, they can take advantage of a lot of infrastructure because it's already there. When you do a startup outside of Google, you have to build infrastructure along the way, but you can also take advantage of existing infrastructure even if it's outside your company. It doesn't make sense to try to build everything. You need to focus on your core—the key value that you bring to your customers, the key new pieces of security that you need to build—and maybe build those pieces up internally, especially if they're critical to your core intellectual property.

Your company combats malicious code, especially what you call malvertising (what I like to call bad ads) and the kind of malware that works in drive-by downloads. Can you please explain the problem in more detail?

One of the big changes in the last three to four years is that the way malware spreads on the Internet has fundamentally changed. It used to be that most malware came to you as an email attachment, or it took advantage of operating system vulnerabilities, as Code Red and Nimda did, spreading in a wormlike fashion.

But the way that malware spreads these days is via the Web, taking advantage of various Web channels. In drive-by downloads, cybercriminals can infect a webpage by taking advantage of some third-party widget that the webpage uses—some third-party application or ad network—and basically send the malware through that channel. Once users simply visit the infected webpage, they will get malware sent straight to their PC with no user interaction whatsoever.

With "bad ads"—the malvertising problem—the problem is much bigger. Because online commerce relies on advertising, and online advertising shows up on many sites, attackers have started using ad networks more and more aggressively as a distribution platform for their malware. They inject malware into an ad network in a variety of ways: creating accounts, submitting legitimate ads initially and then substituting malicious ones, and launching drive-by downloads. What happens next is that the large group of publisher sites that happen to use that ad network end up infecting their users, resulting in the red screens of death. Popular search engines and browsers keep lists of sites that either are directly infected or use third-party resources that have been compromised.

So that's what some people call the Google blacklist?

Google provides a safe browsing API that many browsers and other sources use; it's basically a list of resources on the Internet that have been infected. In some sense, you can call it a blacklist, but Google does a lot to provide different options to users, browser companies, and consumers of that safe-browsing API.
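As a rough illustration of how a client might consult such a blacklist, here is a minimal sketch in which the client keeps short hash prefixes of known-bad URLs locally. The example URL, the four-byte prefix length, and the simplified canonicalization are all assumptions for illustration; they are not the actual Safe Browsing protocol, in which a prefix match is normally confirmed with a full-hash lookup against the server.

```python
import hashlib

# Hypothetical local snapshot of a safe-browsing-style blacklist:
# the client stores only short hash prefixes of known-bad URLs.
BAD_URL_PREFIXES = {
    hashlib.sha256(b"badsite.example/infected-page").digest()[:4],
}

def canonicalize(url: str) -> str:
    """Very simplified canonicalization: strip scheme and trailing slash."""
    for scheme in ("https://", "http://"):
        if url.startswith(scheme):
            url = url[len(scheme):]
    return url.rstrip("/").lower()

def probably_unsafe(url: str) -> bool:
    """True if the URL's hash prefix appears in the local blacklist."""
    digest = hashlib.sha256(canonicalize(url).encode()).digest()
    return digest[:4] in BAD_URL_PREFIXES
```

Storing only prefixes keeps the client-side list small and avoids shipping a full catalog of bad URLs to every browser, which is the design tradeoff the real API makes as well.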

From a customer's perspective, what is it that you do exactly to stop bad ads and drive-by downloads through corrupted widgets? How does that work?

When we work with ad networks, we're typically given access to their ad inventory. We then employ a very deep, cloud-based, server-side behavioral scan and identify when the malicious ads were created and entered into the network. Next, we generate automated alerts and report back to the various APIs so that those malicious ads get taken out of circulation. We're basically bringing security to the online advertising ecosystem.

The London Stock Exchange recently happened to be showing ads on part of its site, and some of those ads came from an ad network that had malicious ads inserted into it. Because the London Stock Exchange site could end up serving drive-by downloads through these ads, it was flagged by the Google safe-browsing API and other sources as well. When it realized the problem, it took appropriate steps to mitigate it, but ideally if you're a website and you're using an ad network or third-party widgets or third-party anything, you want to find out about these kinds of infections before the large search engines do. Otherwise, you stand to lose your users, traffic, and e-commerce revenue. Eight or nine years ago, a site defacement was typically just a nuisance. Now, there are very serious business ramifications.
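The scan-and-report loop described above might be sketched as follows. `scan_ad` stands in for Dasient's proprietary behavioral analysis, and the ad-record fields, the known-bad domain, and the alert format are all hypothetical.

```python
# Hypothetical sketch of sweeping an ad network's inventory and
# generating takedown alerts for ads that look malicious.

KNOWN_BAD_DOMAINS = {"exploit-kit.example"}

def scan_ad(ad: dict) -> bool:
    """Assumed stand-in heuristic: flag an ad whose redirect chain
    touches a domain already known to serve exploits."""
    return any(domain in KNOWN_BAD_DOMAINS for domain in ad["redirect_chain"])

def sweep_inventory(inventory: list[dict]) -> list[dict]:
    """Scan every ad and collect alerts to report back to the
    ad network's removal API."""
    alerts = []
    for ad in inventory:
        if scan_ad(ad):
            alerts.append({"ad_id": ad["id"], "action": "remove"})
    return alerts
```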

Do these bad ads take advantage of vulnerabilities in a user's browser? Is that basically how they work?

Yes; they take advantage of vulnerabilities in browsers as well as in all kinds of third-party browser extensions and ActiveX controls. Once an attacker has introduced a malicious piece of JavaScript or an iFrame into a website and a user visits the site—this happened to the NASDAQ Directors Desk website—that piece of JavaScript or iFrame will, in an online, real-time fashion, fingerprint the user's browser, fingerprint all the different third-party plug-ins the user happens to be using, and figure out exactly which versions of those things are vulnerable. It'll consult an online exploit database in real time and figure out which shellcode to send down to take advantage of a buffer overflow. It all happens in moments, and it's definitely targeted.

I can put on my purist hat and say, gosh, why don't we just build better software so that the targeting is either harder or impossible, but whenever I say that in public, I get in trouble. So I'll ask you: Why not attack this problem through better software security?

I think the challenge is that there's so much software out there and, from a practical standpoint, it's impossible to secure it all. So while we should build better software, we have to keep in mind that cybercriminals only need one exploit on that client. Their tools are completely automated, so we also need to employ defense in depth—we can't simply rely on prevention, on building better software. We need to deploy countermeasures that will detect these issues when they occur, monitor important resources on the Web for them, and contain and recover from these attacks in an automated way so that at least we keep up with, if not stay ahead of, the attackers. But where we are right now, most organizations are in catch-up mode—some leading organizations are staying ahead of attackers, but we need to get more organizations into that camp.

You worked on software security certification at Stanford. Does certification work?

With regard to the goals of building better software and putting additional defenses in place, one of the things that I've noticed is that we don't have enough security professionals in the field to fight cybercriminals. Part of the reason that we created the software security certification program at Stanford was to provide a career path for those folks who aren't necessarily in security right now but need to know more about it and also to provide a path for those interested in becoming security professionals in the future. We also created an advanced security certification program where we bring people up to speed on emerging threats and defenses, what security means for Web 2.0, and what security means for managers of various kinds, whether they're project managers, product managers, or something else.

I guess one of my concerns is that it's very hard to tell from a few courses whether somebody can code at all, and this certification more or less purports that you can code and do it securely at the same time, right?

Stanford's security certification program caters to a number of different disciplines. While we do need more people who can code to help us fight security problems and build more secure software, a lot of other folks need to help with the operational aspect and the management aspects. With the Stanford security certification program, we don't just target people who can code and work to create software security coders. Security is very often a process and not simply a product, so in line with that thinking, we need to bring multiple disciplines into it.

Going back to your Google days, tell us about combating Clickbot.A.

Once online advertising became mainstream, cybercriminals started targeting it, just like they've started targeting social networking sites today. Clickbot.A happened to be a botnet that grew to over 100,000 machines and was clicking on not only Google ads but ads in other ad networks as well. What we did at Google was to publish the anatomy of Clickbot.A, to make the industry more aware of click fraud and other related threats. Basically, the Clickbot.A malware binary, once it was on a user's machine, sent a click once every hour or so on some ad on some network.

Just to generate revenue from the click-throughs?

Exactly. While any one particular bot probably didn't generate a lot of revenue, if you aggregate all of them together, they could potentially generate quite a lot of revenue for the attackers because they set up their own publisher sites that paid them for ad clicks. Google took a very aggressive approach to dealing with click fraud and basically focused on how to make the attacks economically unprofitable. Some of the defenses that Google has in place identify clicks that might have fraudulent intent and simply do not charge advertisers for them. Click fraud still occurs, but it's much less profitable. And the industry has done a decent job putting defenses in place to mitigate threats due to click fraud.
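Google's actual click-fraud defenses are unpublished, but as a toy example of flagging clicks with likely fraudulent intent, one simple heuristic is to flag a click source whose inter-click intervals are suspiciously regular, the way Clickbot.A's roughly hourly clicks were. The tolerance threshold below is illustrative only.

```python
import statistics

def looks_botlike(timestamps: list[float], tolerance: float = 60.0) -> bool:
    """Flag a click source whose inter-click gaps are nearly constant.

    timestamps: click times in seconds, sorted ascending.
    tolerance: max standard deviation (seconds) still considered periodic.
    """
    if len(timestamps) < 4:
        return False  # too few clicks to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.stdev(gaps) < tolerance
```

A real system would combine many such signals, but even this one captures the economic idea in the answer above: clicks flagged this way are simply not billed, which starves the bot of revenue without needing to take it down.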

In my view, the Google philosophy is pretty heavy on testing. Why?

Mature software companies typically spend approximately half of their time in development and half in testing. Less mature organizations try to cut out the testing, but I think that what you realize over time is that you need to spend just as much—or maybe even more—time testing than on actual development, simply because each new line of code may increase the attack surface far more than you'd expect from a typical line of code.

I think the other nice thing about testing is that you can use automated penetration testing as well as manual testing. If you run the automated tests and you don't see anything, you should then progress to manual testing. On the other hand, if you run the automated tests and you start seeing vulnerabilities come up, it means you've probably hit just the tip of the iceberg. We absolutely should invest in secure software development, but part of that is also testing, right? You can't just assume that the design and implementation processes will go as you expect. Good engineers know that the only way to get validation is to test.

Hear the full podcast online; show links, notes, and an online discussion are also available.

About Neil Daswani



Neil Daswani is a manager in Twitter's revenue engineering team. He was formerly the CTO and cofounder of Dasient, an Internet security company that Twitter purchased in January 2012. He also cofounded the Stanford Center for Professional Development's Software Security Certification Program and coauthored Foundations of Security: What Every Programmer Needs to Know. Daswani has a PhD in computer science from Stanford University and recently became a father.

About the Authors

Gary McGraw is Cigital's chief technology officer. He's the author of Software Security: Building Security In (Addison-Wesley, 2006) and eight other books. McGraw has a BA in philosophy from the University of Virginia and a dual PhD in computer science and cognitive science from Indiana University.