Issue No. 06, November/December 2008 (vol. 6), pp. 4-5
Published by the IEEE Computer Society
Bret Michael , Naval Postgraduate School
ABSTRACT
China may embark on a full-scale program to assess the trustworthiness of software-intensive electronic products used by consumers and businesses. The proposed program goes well beyond other countries' attempts to accredit applications and systems. This article asks: Is the program overly ambitious? And will it actually improve our ability to introduce trusted products into the marketplace?
The 19 September 2008 edition of the Japanese newspaper The Daily Yomiuri reported that the Chinese government plans to institute a mandatory security accreditation program for consumer and business digital electronics products developed by foreign firms for export to or manufacture in China (www.yomiuri.co.jp/dy/business/20080919TDY01306.htm). Specifically, the program would require foreign companies to disclose their products' software source code.
Such a program raises many concerns regarding the trustworthiness of products in a global marketplace and the validity of the accreditation's results. Aside from the political, international-trade, and legal issues, we need to ask what underlying fear of digital electronics products drives nation-states to propose such programs. According to the article, the Chinese government claims that it needs access to source code—an admittedly sensitive form of intellectual property—to assess the vulnerability of products to malware, such as computer viruses. The ubiquity of such products in our daily lives, as well as the interplay among them, raises national security concerns—not just in China but around the globe. For instance, in 2004, The New York Times reported on the 1980s "Farewell Dossier" campaign in which the US government supposedly orchestrated the insertion of a Trojan horse into a commercially available computer chip; the Soviet Union used this chip to control the electromechanical devices that regulate the flow of natural gas through the Trans-Siberian pipeline. 1 The Trojan horse software adjusted the output control signals to the electromechanical devices to increase pressure in the pipeline, resulting in the equivalent of a 3-kiloton explosion. The attack's purpose was to make it financially difficult for the Soviets to pursue defense research that they had planned to fund with natural gas revenues.
Similarly, companies and private citizens also assume unknown risk in using digital electronics products—consider the introduction into the supply chain of counterfeit Cisco Systems routers 2 and of mass-marketed digital picture frames manufactured in China containing malware. 3 These incidents, along with the many other reports of digital electronics products shipping with embedded malware, lend credence to the concerns China has expressed.
Like China, other nations are developing strategies to mitigate the risk of using digital electronics products. For instance, the September 2007 Report of the Defense Science Board Task Force on Mission Impact of Foreign Influence on DoD Software describes some of the current US thinking on accreditation of such products, but the report doesn't go so far as to recommend requiring disclosure of intellectual property to address supply-chain issues related to trust (www.acq.osd.mil/dsb/reports/2007-09-Mission_Impact_of_Foreign_Influence_on_DoD_Software.pdf).
However, even with good intentions, can a government-run accreditation program successfully improve the trustworthiness of digital electronics? There are several obstacles to success. Gaining access to the source code is one thing; understanding it well enough to gauge the product's own behavior, as well as its influence on other products (such as emergent behavior in a system or system-of-systems context), is technically challenging, even for a single product platform or context of use. Software shapes much of the functionality of modern digital electronics products. Some of the concerns David Parnas expressed in the 1980s about our ability to engineer and assess the trustworthiness of software-intensive systems hold today—in particular, whether we can adequately test applications or systems under realistic conditions (http://en.wikipedia.org/wiki/David_Parnas)—as do the many technical difficulties of detecting malware embedded in microcode, the bitmaps of field-programmable gate arrays, and even hardware components; Ken Thompson pointed out such difficulties in his Turing Award lecture. 4 Past en masse attempts by the public and private sectors to certify software-intensive products, such as those conducted under the Common Criteria for Information Technology Security Evaluation, have shown marginal success at high cost, suggesting that China is undertaking an initiative of unprecedented scale.
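Thompson's point, roughly, is that reviewing source code can't be conclusive if the build tools themselves are compromised: the implant lives in the tooling, not in the source the accreditor reads. The following toy sketch (hypothetical names; a Python pass-through standing in for a real compiler, and not Thompson's actual construction) illustrates how a subverted build step can inject a backdoor into a program whose reviewed source is perfectly clean:

```python
# Toy illustration of the trusting-trust problem: the reviewed source is
# clean, but a compromised build step injects a payload during the build.
# All names here are hypothetical, for illustration only.

CLEAN_SOURCE = """
def check_password(supplied, stored):
    return supplied == stored
"""

def trusted_build(source: str) -> str:
    """An honest build step: passes the source through unchanged."""
    return source

def subverted_build(source: str) -> str:
    """A compromised build step: recognizes the login check and injects a
    backdoor credential that never appears in the reviewed source."""
    if "def check_password" not in source:
        return source
    payload = "    if supplied == 'letmein': return True  # injected backdoor\n"
    out = []
    for line in source.splitlines(keepends=True):
        out.append(line)
        if line.startswith("def check_password"):
            out.append(payload)  # payload becomes the first line of the body
    return "".join(out)

def run_check(built_source: str, supplied: str, stored: str) -> bool:
    """Stand-in for compiling and running the built artifact."""
    env = {}
    exec(built_source, env)
    return env["check_password"](supplied, stored)
```

Under the honest build, only the real password succeeds; under the subverted build, the injected credential also succeeds, even though a line-by-line review of `CLEAN_SOURCE` would reveal nothing amiss. Detecting this requires auditing the toolchain and the produced binaries, not just the source handed over for accreditation.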
The assessments' results must account for the engineering trade space of dependability attributes, the intended usage, and the context of the product's use. These contextual use scenarios can vary from country to country, or even from user to user. It might be worthwhile for government and private-sector accreditors to focus their accreditation programs on safety- or mission-critical contexts of use. However, consider a general-purpose application such as a Web browser—it's unlikely that certification resources would be allocated to it unless we know a priori that the browser will be employed in the user interface for a life-critical medical device, even though that would be a natural deployment: the browser metaphor and user interface are well known, and browser code is widely used (and hence extensively field tested).
Even if the technical challenges can be overcome, there are practical considerations: the combination of the shortened development life cycle for digital electronics products (less than nine months), the rapid rate of release of new products, and the throw-away-product society we live in. What level of assessment is justifiable in terms of the risk these products' untrustworthiness poses to society? Furthermore, many digital electronics products are released before they've undergone thorough validation and verification, with companies citing the need to be first to market as the predominant requirement. A further complication is chain of custody—there's a need to ensure that what's accredited is what actually gets put into the product, to counter the ever-present financial and other incentives to introduce malware into legitimate and counterfeit products.
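One established technique for the chain-of-custody problem is cryptographic fingerprinting: the accreditor records a hash of the exact artifact it evaluated, and each later handoff—manufacturer, distributor, customs, buyer—re-verifies the shipped bits against that record. A minimal sketch (hypothetical artifact and function names) using SHA-256:

```python
import hashlib

def digest(artifact: bytes) -> str:
    """SHA-256 fingerprint of a build artifact (firmware image, installer, ...)."""
    return hashlib.sha256(artifact).hexdigest()

def verify_custody(shipped: bytes, accredited_digest: str) -> bool:
    """True only if the shipped bits are exactly the bits that were accredited."""
    return digest(shipped) == accredited_digest

# The accreditor records the digest of the image it actually evaluated...
accredited_image = b"firmware v1.2 build 4711"  # hypothetical artifact
accreditation_record = digest(accredited_image)

# ...and any later party re-checks the shipped image against that record;
# even a one-byte implant changes the digest and fails verification.
```

This only binds the artifact to the accreditation record, of course—in practice the record itself must be protected (for example, with digital signatures), and it does nothing to certify that the accredited bits were trustworthy in the first place.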
Conclusion
Finally, it's unclear what China or any other nation would do if it identified an untrustworthy aspect, such as an exploitable security flaw, in the software. Would China require the developer to make changes to the product, and how would these changes guarantee greater trustworthiness? Beyond accreditation, would there be any value in applying forensics techniques to trace the introduction of malware into counterfeit products of unknown pedigree back to its originator? Would China, unbeknownst to the intellectual property's owner, make changes to the product itself? Trust in a product is a two-way street between the product's developers and the accreditation team, regardless of who administers the program. Trust is a necessity for the well-being of our world, and it is not clear whether this proposal delivers it.

