Issue No. 3, May/June 2007 (vol. 5), pp. 3-4
Published by the IEEE Computer Society
ABSTRACT
Better information can improve a marketplace. An evaluation/certification process that leveraged modern programming languages and analytic tools could accelerate both the development and the adoption of less vulnerable and more effective programming practices, products, and systems. *This article also includes a letter to the editor regarding "Alien vs. Quine" by Vanessa Gratzer and David Naccache from the March/April 2007 issue.
Wisconsin's milk market had a problem in the 1880s: some farmers produced milk with higher butterfat content than others, but because sellers and buyers had no practical way to measure that content, lower-quality milk brought the same price as higher-quality milk. This inequity encouraged practices such as watering down the milk, and it held back the industry as a whole.
Happily, a University of Wisconsin professor, Stephen M. Babcock, invented a relatively simple and inexpensive process for determining the butterfat content of milk, and he made it available without patent. This invention enabled the market for milk to function better, allowing consumers to reward high-quality milk producers and avoid low-quality ones. The industry thrived, and today Wisconsin is known as "America's Dairyland." The story recurred in 1970s India, and the country has subsequently become a leading milk producer.
This example shows how better information can improve a marketplace. George Akerlof's famous paper on "The Market for 'Lemons'" argues that asymmetric information (that is, the seller knowing more than the buyer about the offered product), such as existed before butterfat could be measured, can easily cause a market to decline.
Today's market for software also exhibits information asymmetry. The security properties of a piece of software are hard to specify and still harder to assure. Though producers might not know precisely how trustworthy their products are, they can know their ingredients—the sources of the software and hardware, the competence of the individuals involved, the assurance procedures used, and the time and effort invested.
But consumers, lacking this information, have difficulty establishing whether one product is less vulnerable than another, so they can't readily reward stronger products in the marketplace. The available measurement tools are either crude and ineffective (as with many of the checklists applied in system-certification exercises) or complex to apply (as in the Common Criteria evaluation process). Both approaches are labor-intensive, hence costly, and must be reapplied as systems change.
Can we imagine a tool that a modern-day Babcock might develop to improve the security marketplace?
Although many sources of vulnerability exist in our computer systems today, the largest source of exploits continues to be, fundamentally, programming errors: errors of implementation or relatively low-level design, such as unchecked buffers and unvalidated inputs that enable stack-smashing and cross-site scripting attacks. In the past 20 years, considerable progress has been made in developing tools that can detect such errors at the source-code level and, increasingly, even at the object-code level. Type checkers, taint checkers, model checkers, and verifiers are among the tools now available for this purpose.
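As a concrete illustration (a hypothetical C fragment, not drawn from any particular product), consider a function that copies caller-supplied input into a fixed-size stack buffer without checking its length. An over-long input overruns the buffer and corrupts the stack frame, including the return address: the classic setup for stack smashing.

    #include <string.h>

    /* Hypothetical example of an unchecked buffer: greet() copies a
     * caller-supplied string into a fixed 64-byte stack buffer with
     * no length check. */
    void greet(const char *name)
    {
        char buf[64];
        strcpy(buf, name);  /* unchecked copy: any input longer than 63
                               characters overruns buf and corrupts the
                               stack, including the return address */
        /* ... use buf ... */
    }

This is precisely the kind of pattern such tools can flag mechanically: an unbounded copy from an untrusted source into a fixed buffer.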
We've also developed programming languages that make it impossible to commit broad classes of errors. Java and C# are well-known examples, with their strong typing, which prevents buffer overflows and a wide variety of other code attacks.
Unfortunately, the assurance processes and procedures that are in use in the world at large don't generally make systematic use of these tools to provide any kind of guarantee about the code that winds up running on our systems. The Common Criteria evaluation processes don't even require direct examination of the source code until you reach Evaluation Assurance Level (EAL) 4—the highest level generally applied to commercial products.
An interesting question is, using mechanical means alone, how much assurance can we get that a software component is free of a reasonably broad set of vulnerabilities? We'll never be sure there are no residual vulnerabilities, of course, but our current processes don't provide that assurance, either. Couldn't we imagine a much less labor-intensive, yet more effective, approach to assuring that our software makes exploitations difficult?
A clear answer to this question might require some assumptions or constraints on software or system development processes. For example, we might be able to support a claim that there are no buffer-overflow vulnerabilities in a piece of source code, either by examining the code mechanically or by writing the software in a language in which such errors are impossible to make. Either way, we could have a system that could simply be recompiled or automatically reanalyzed, with a minimum of human labor, when changes were made.
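To make that concrete, here is a minimal sketch, reworking the hypothetical fragment above, in which the no-buffer-overflow claim becomes mechanically checkable because the only write into the buffer carries an explicit bound:

    #include <stdio.h>

    /* The same hypothetical function, rewritten so that a static
     * analyzer can establish the absence of buffer overflows: the
     * only write into buf is explicitly bounded by sizeof(buf). */
    void greet(const char *name)
    {
        char buf[64];
        /* snprintf writes at most sizeof(buf) bytes, including the
           terminating NUL, so no input can overrun the buffer */
        snprintf(buf, sizeof(buf), "%s", name);
        /* ... use buf ... */
    }

Because the bound is visible in the code itself, reanalysis after a change amounts to rerunning the tool rather than repeating a manual evaluation.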
Conclusion
An evaluation/certification process that leveraged modern programming languages and analytic tools could accelerate both the development and the adoption of less vulnerable and more effective programming practices, products, and systems.