Vol. 10, No. 3, May-June 2012
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MSP.2012.66
Steven M. Bellovin , Columbia University
If there's one security rule everyone knows, it's this: use "strong" passwords. Articles in the popular press give advice on how to choose them. Websites enforce rules about them. You'd think that by now, people would follow that advice well enough that we'd have seen a notable decrease in security problems.
We're fighting the last war. Worse yet, we're refighting a war that we lost, years ago, and not changing our tactics, let alone our strategy.
The single biggest mistake one can make in a fast-moving business like ours is to blithely give yesterday's answers to today's questions. (The second-biggest mistake, of course, is to blithely discard yesterday's answers, simply because they're yesterday's.) Focusing on password strength ignores not just how and why the maxim came to be, but also the environment in which it was forged.
The notion that weak passwords are bad came from Morris and Thompson's classic 1979 paper. It's important to remember the context. Almost no one had more than a handful of accounts and passwords. The accounts that did exist were for general-purpose timesharing machines; if you had a login, you almost certainly had full shell access, and hence access to most of the files on that machine. Furthermore, the hashed password file was world readable. A significant percentage of users still employed electromechanical hard-copy terminals.
None of that is true today. Most people have a great many accounts (I personally have well over 100), generally to Web servers. The accounts give access to the offered Web service, not a shell. Other files are (supposedly) not reachable via the Web server; the password file may reside on a separate database server. The passwords are often stored unhashed, to permit password recovery. And no one uses hard-copy terminals; accounts and passwords are typed into hideously complex Web servers running on computers and operating systems that are orders of magnitude more complex than the largest timesharing systems of 1979. Why, then, should we think that the advice is still valid? It could be—but let's reexamine the question de novo.
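Hashing is why the original advice made sense: an attacker who steals a file of hashed passwords must guess each one, so only weak passwords fall. A minimal sketch of salted, deliberately slow password hashing, using only Python's standard library (the function names and the iteration count here are illustrative, not any particular site's scheme):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted, slow hash; the (salt, digest) pair is stored,
    never the password itself."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt per account
    # PBKDF2 iterates the hash to make offline guessing expensive.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

A site that stores passwords unhashed, for the sake of "password recovery," gives this entire defense away: the strength of the password no longer matters once the database is compromised.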
The bad guys also have new and better ways to steal passwords. If they compromise the server, they can collect passwords as they're entered. Of course, they also have access to any database of unhashed passwords. Nor is the client side exempt; phishing attacks and keystroke loggers are other potent ways to collect logins and passwords. What's more, in none of these cases will strong passwords help; to malware, all passwords are just strings of bytes. No guessing attacks required.
There's a flip side, too. The stronger a password is, the harder it is to remember. The result—the inevitable result, given how many logins a typical user has—is password reuse. This, however, vastly increases the risk of password compromise, because the user's account is now vulnerable on many systems at once.
Other password truisms are similarly inapplicable. Don't write down passwords? Not possible, when hundreds are needed. Change them frequently? If I have 100 passwords and I have to change each one annually, that comes to nearly two changes per week—and research has shown that new passwords are very frequently easily derivable from old ones.
It would be nice to get rid of passwords entirely, but that isn't going to happen any time soon. I personally wrote, more than 20 years ago, that they were on their way out. It hasn't happened, and I no longer expect it to. What we need are better ways of entering, storing, and using passwords, ways that respond to today's threats instead of yesterday's. Sticking with checklists based on yesterday's technology is not the way to secure today's systems.
Steven M. Bellovin is a professor of computer science at Columbia University. Contact him via www.cs.columbia.edu/~smb.