In the past two years, we've witnessed remarkable failures in the certificate authority (CA) regime. Although this regime purports to protect Internet users' communications from malicious man-in-the-middle attacks, the trust model is premised on unconstrained authentication authority that's granted to thousands of entities scattered across the globe. Recent events have highlighted how difficult it can be to maintain a trustworthy system that's based on this premise.
The CA system exists to authenticate one party to another in a public-key infrastructure (PKI). Although client software ultimately carries out the authentication, CAs issue the digital certificates that make the authentication possible. Software vendors, at their discretion, build into their products a list of "root" CAs that are trusted to perform authentication on behalf of users. The most common business for root CAs is the sale of SSL/TLS certificates to website operators. These domain validation (DV) certificates indicate that the CA has verified that the website operator owns the domain name in question. Some CAs contract with other companies, called registration authorities (RAs), to perform the actual verification of a certificate applicant's domain name ownership. Some root CAs don't issue SSL certificates directly but instead cryptographically delegate that authority to a third party via a subordinate CA (SubCA) certificate chain. 1
If the browser successfully "chains" the certificate to a trusted root CA, it indicates to the user that it's communicating with the domain name's true owner rather than a man in the middle.
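The chain-building step can be sketched as a walk over issuer links. The following toy Python sketch, with hypothetical CA names, shows only that core idea; real clients performing RFC 5280 path validation must also verify signatures, validity periods, extensions, and revocation status.

```python
# Toy sketch of "chaining" a leaf certificate to a trusted root.
# Certificates are modeled as (subject, issuer) pairs, leaf first.
# Names are hypothetical; real validation checks far more than links.

TRUSTED_ROOTS = {"Example Root CA"}  # assumed vendor-shipped root list

def chains_to_trusted_root(chain):
    """Return True if the issuer links lead to a trusted anchor."""
    for i, (subject, issuer) in enumerate(chain):
        if issuer in TRUSTED_ROOTS:
            return True  # reached a trusted anchor
        # otherwise, the next certificate must belong to this issuer
        if i + 1 >= len(chain) or chain[i + 1][0] != issuer:
            return False
    return False

chain = [("www.example.com", "Example SubCA"),
         ("Example SubCA", "Example Root CA")]
print(chains_to_trusted_root(chain))  # True: leaf -> SubCA -> trusted root
```

Note that any intermediate in the chain is as powerful as the root from the client's perspective, which is the crux of the delegation problems discussed later.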
Security researchers have frequently lamented the CA trust model's known weaknesses and perennially announce new vulnerabilities in the underlying technology. Although these revelations have met with some fanfare, the core system has remained largely unchanged.
Systemic Technical Weaknesses
The computer security community has long focused on the CA trust model's technical shortcomings, and recent breaches (see the "Recent High-Profile Compromises" sidebar) have amplified efforts to strengthen the system. The race to discover core cryptographic vulnerabilities and design better algorithms will no doubt continue, but that dynamic is fairly well known. Instead, we briefly outline some of the more systemic technical weaknesses of the CA trust model as it stands.
Recent compromises have helped highlight the diverse set of entities that hold broad-brush authority to issue certificates. The universe of root CAs includes companies from around the world, governments, and defunct CAs that have resold their keys (see https://bugzilla.mozilla.org/show_bug.cgi?id=242610#c7). The Comodo incident in 2011, in which a hacker compromised an RA and thereby caused the CA Comodo to issue unauthorized certificates for several high-value domains, heightened awareness of the much larger number of RAs to which CAs outsource critical operations. Researchers have also begun to reveal the extent to which CAs have turned over the cryptographic keys to the kingdom by delegating chains of trust to others. 2
As it stands, nearly every user of a given software package trusts the same list of root CAs, and trusts each of them with the ability to authenticate any website. For instance, no practical means exists for users to restrict a national government's CA to issuing certificates only for entities within its borders. RFC 5280 includes optional "name constraints" that would limit the domains for which a given CA can issue certificates. 1 However, this feature remains largely unsupported.
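To illustrate what support might look like, a client that honored name constraints could apply a suffix check over the CA's permitted subtrees, in the spirit of RFC 5280's dNSName matching. The CA scenario and constraint values in this Python sketch are hypothetical:

```python
# Sketch of the RFC 5280 "name constraints" idea: a client honoring
# the extension would reject any certificate a constrained CA issued
# for a domain outside its permitted subtrees. dNSName constraints
# match whole labels from the right-hand side of the name.

def permitted_by_name_constraints(domain, permitted_subtrees):
    """Return True if domain equals or is a subdomain of a subtree."""
    for subtree in permitted_subtrees:
        if domain == subtree or domain.endswith("." + subtree):
            return True
    return False

# e.g., a national CA constrained to domains under its own ccTLD:
constraints = ["gov.xx"]
print(permitted_by_name_constraints("tax.gov.xx", constraints))      # True
print(permitted_by_name_constraints("www.example.com", constraints)) # False
```

With such a check in place, even a compromised national CA could not credibly vouch for arbitrary foreign domains; without it, every root can vouch for everything.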
Over time, new facts emerge that change the assessment of CA trustworthiness. In current software, however, the list of root CAs resembles an append-only data structure in which incumbents retain their spots; DigiNotar is the rare exception. To remove DigiNotar effectively in the wake of that CA's compromise, browser and operating system vendors had to ship security updates or completely new binaries. This combination of technical, operational, and political stasis stands in opposition to what Moxie Marlinspike has termed "trust agility." 3
However, empowering users with greater agility in their trust decisions can present usability challenges.
Studies have repeatedly demonstrated that users don't understand the concept of trusted CAs and often ignore even strongly worded security warnings that appear when authentication fails. Some researchers have concluded that it might be better to prevent users from engaging in dangerous behavior altogether than to try to design for choice. Usability concerns can conflict with attempts to give users more control over their root CA list's surface area, constraints, and trust agility.
Legal, Economic, and Organizational Flaws
An implementation of the CA trust model that conforms perfectly to the technical specifications can nevertheless manifest deep flaws. Augmenting or replacing the technical infrastructure might similarly fail if it doesn't also address some of the more fundamental problems and assumptions that underlie today's model.
CA Liability and Economic Incentives
Third-party trust problems are nothing new. Steve Bellovin has noted that in the early days of electrical communication, the telegraph company's liability and economic incentives were unsettling. As one author at the time noted,
"On the Continent it is frequently the case that the signatures of messages involving, for instance, money payments or delivery of valuable documents, purport to be certified by the telegraph operator …" but the telegraph company will not "back up [a guarantee] with an admission of their own liability in the event of a fraud occurring." 4
Unfortunately, the documents that serve as the legal architecture of the CA trust model today — the certification practice statement (CPS), certificate policy, subscriber agreement, and relying party agreement — reflect a strikingly parallel situation. The CAs don't seem to have much faith in the product that they provide.
For instance, a CPS customarily includes a total disclaimer of all liability for any claim or loss arising out of a certificate "that was issued as a result of errors, misrepresentations, or other acts or omissions of a subscriber or any other person, entity, or organization." 5
This means that if a bad actor obtains a certificate by either tricking or hacking the CA, an RA, or a SubCA, and then uses that certificate for a successful man-in-the-middle attack against an end user, the CPS says that the CA, RA, and SubCA have no liability. To the extent that the CPS leaves room for any liability, it often includes substantial caps on aggregate liability, typically on a "per certificate" basis apportioned among those claims that are filed first. 6
In fact, it's unclear whether anyone has ever successfully brought any such claim.
These types of disclaimers are unsurprising, given the "baseline guidelines" supplied by the leading CA industry trade group, the CA Browser Forum, which state the following:
If the CA has not issued or managed the certificate in compliance with [the CA Browser Forum's Requirements] and its certificate policy and/or certification practice statement, the CA may seek to limit its liability to the subscriber and to relying parties, regardless of the cause of action or legal theory involved, for any and all claims, losses, or damages suffered as a result of the use or reliance on such Certificate by any appropriate means that the CA desires. 7
These provisions let the CA sell certificates while seemingly offloading all of the significant downside legal risk associated with the sale.
The CA legal documents often purport to legally bind end users (also referred to as "relying parties" in the model) merely because the user's client software relies on the CA's certificates. Due to the obvious absence of notice, assent, and meeting of the minds, it seems a relatively sure bet that both the CPS and the relying party agreement are unenforceable as contracts against relying parties. So, why does this purported legal architecture persist? Perhaps because the CA audit framework published by the American Institute of Certified Public Accountants and the Canadian Institute of Chartered Accountants (the WebTrust Framework) actively encourages CAs to post their CPS documents, but doesn't require actual notice to, or assent of, the relying party. 8,9
RFC 3647 takes the same approach and states that CAs can have disclaimers of warranties, disclaimers of liability, and other legal provisions appear in their legal documents and that mere "publication and posting to a repository" is sufficient "for the purpose of communicating to a wide audience of recipients, such as all relying parties." 10
The CAs have embraced this approach. They routinely copy WebTrust's "illustrative disclosures" into their CPS and relying party agreements. These model provisions address indemnity, disclaimer of fiduciary duties, governing law, mandatory dispute resolution, and supposed relying party obligations. Many CAs no doubt believe their CPS is actually enforceable as a result of the CA Browser Forum, WebTrust, and RFC guidance. Unfortunately for the model, no court decision in the US holds that any of the CA documents are enforceable against relying parties based on the mere posting of online documents or that CAs are excused from the standard precepts governing contract law.
The problems with the model's legal architecture create economic incentives for CAs that are at best uncertain and at worst perverse. Those CAs that believe their CPS is enforceable might be incentivized to emphasize higher sales volume over quality business practices. These CAs could perceive that the CPS has minimized or eliminated the downside risk associated with aggressive reselling via RAs or SubCAs. This tendency is reinforced by the highly price-competitive market for certificates, in which volume is paramount for survival and penalties for untrustworthy behavior have been virtually nonexistent. Furthermore, CA customers (website operators) gain no benefit from purchasing certificates from a more trustworthy CA, because any standard certificate looks and works the same in all client software. Certificates have become unregulated commodities. These factors conspire to create an unfortunate "race to the bottom" in CA security practices.
Audits and Transparency
The WebTrust Framework and the CA Browser Forum baseline requirements for issuing and managing publicly trusted certificates, together with individual software vendors' requirements, form the de facto compliance regime for CAs. Many of the requirements are sound and uncontroversial. However, the current regime falls far short of covering certain entities that carry out critical CA functions. The regime also fails to require that these entities' identities be disclosed to the public. Consequently, CAs structure their businesses in a way that creates significant zones of unaudited and undisclosed certificate-granting authority.
One area of concern involves RAs: companies external to the CA that have partial or complete authority to conduct identity verification. Although these RAs don't typically hold private-key material, they verify identity and then submit a request to the CA, which results in the CA issuing a certificate, often in an automated fashion. WebTrust decided to "carve out" RA operations from the scope of CA audits. It admitted that "some end users" might not find this satisfactory but claimed that it had "concluded that the issuance and use of [the WebTrust Framework] was desirable and that the impact of a third-party registration function was beyond the scope of this document." 8
The WebTrust Framework went unmodified for more than a decade, until version 2.0 was quietly published in mid-2011. 9
This new version continued to leave the vast majority of RAs and RA functions beyond the reach of any external audit. Although an auditor isn't technically forbidden from auditing RA operations, WebTrust 2.0 considers such audits to be "rare situations" warranted only in circumstances in which "the CA exercises extensive monitoring controls (including onsite audit) over all aspects of the RA operations, and the CA is willing to assert to the effectiveness of the controls performed by the external RAs." In this statement, WebTrust 2.0 has in fact laid bare the severity of the RA problem by implying that it's "rare" that a CA would exercise "extensive monitoring controls … over RA operations" or "be willing to assert to the effectiveness of the controls performed by the external RAs." However, because RAs perform identity verification, they're often the first and last line of defense against fraudulently obtained certificates.
The US National Institute of Standards and Technology's (NIST's) Information Technology Laboratory Bulletin for July 2012 identified four overarching categories of CA compromise, two of which focus almost entirely on the RA: impersonation, in which a certificate applicant fools the RA into causing the CA to issue a fraudulent certificate, and RA compromise, in which the RA's certificate-request process to the CA is subverted and the hacker can make certificate requests to the CA as if the hacker were the RA. 11
Moreover, even in those "rare" situations when audit activity might occur with respect to the RA, the auditor apparently can't unilaterally require an RA audit. The WebTrust 2.0 guidelines state that "the CA and the auditor need to agree in advance with this approach, including the extent and sufficiency of controls being exercised." Thus, the WebTrust 2.0 criteria appear to let the CA set the terms of RA "audits," if any, and to shop for an auditor that agrees to take its preferred approach. Compounding the problem with the audit regime is perhaps a more fundamental issue: CAs don't have to disclose their RAs' identity or track record. A relying party or user has no choice but to trust the RA as much as the CA, yet the RAs are unknown. This makes managing trust almost impossible. NIST's bulletin exhorts companies and other organizations to "remove any trust anchors that should not be trusted," but how can an organization, as a relying party, even begin that exercise without knowing the identity of all the RAs used by any particular CA?
Another problematic practice is CAs' cryptographic delegation of complete certificate-granting powers to third parties via a certificate chain. WebTrust 2.0 doesn't require that these so-called SubCAs be audited or disclosed to the public. Several CAs sell costly SubCA certificates even though they have no technical means of monitoring how those certificates are used. These SubCAs are typically intended for an enterprise user that wishes to generate a large number of SSL certificates or email (S/MIME) certificates for its domains. Many CAs will also "cross sign" other CAs' certificates, such that a user who doesn't trust the cross-signed CA directly will nevertheless trust it via the signer's authority. These relationships likewise often aren't disclosed when software vendors approve a signing CA for, or consider removing it from, the root CA list.
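The transitive reach of such delegation can be made concrete with a small sketch: every entity reachable from a trusted root in the signing graph effectively holds certificate-granting power over the relying party, whether or not it was ever disclosed. All names below are hypothetical:

```python
# Sketch of how SubCA issuance and cross-signing expand the trusted
# set transitively. Edges map a signer to the CAs it has signed
# (SubCAs or cross-signed CAs); all names are hypothetical.

signed_by = {
    "Root CA A": ["Enterprise SubCA", "Cross-Signed CA B"],
    "Cross-Signed CA B": ["Undisclosed SubCA"],
}

def effective_trust_set(roots, signed_by):
    """Return every issuer a relying party implicitly trusts."""
    trusted, stack = set(), list(roots)
    while stack:
        ca = stack.pop()
        if ca not in trusted:
            trusted.add(ca)
            stack.extend(signed_by.get(ca, []))  # follow delegations
    return trusted

print(sorted(effective_trust_set({"Root CA A"}, signed_by)))
# The relying party trusts all four entities, though only
# "Root CA A" appears in its root list.
```

Because the client accepts any chain terminating at a trusted root, the relying party's real exposure is this full reachable set, not the visible root list.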
In February 2012, the CA Trustwave admitted to issuing a SubCA certificate to a company so that the latter could perform a man-in-the-middle attack on all its employees' HTTPS browsing activity. Trustwave revoked the certificate and pledged that it would issue no similar certificates in the future. 12 At the same time, it claimed that "It has been common practice for Trusted CAs to issue subordinate roots for enterprises for the purpose of transparently managing encrypted traffic." In January 2013, a different CA, Turktrust, was found to have issued a SubCA certificate to a Turkish government office, which subsequently installed it on a man-in-the-middle proxy. Turktrust claimed that the issuance was an error (it had intended to issue an ordinary SSL certificate) and that the proxy had affected only that office's employees (see http://turktrust.com.tr/en/kamuoyu-aciklamasi-en.html).
These practices essentially create a "trust darknet" whose risk surface far exceeds the audited CA universe. Note also that audits themselves are hardly a silver bullet for ensuring trustworthy practices. At bottom, an audit simply confirms that the processes stated in the CPS are in place, and the audit process's public output is typically a pro forma one- or two-page attestation to this effect. DigiNotar, which PricewaterhouseCoopers audited under the ETSI 101.456 standard and the WebTrust Extended Validation Audit Criteria, reminds us that simply obtaining an audit attestation doesn't guarantee trustworthy operations.
Jurisdiction and Communities of Trust
The jurisdiction in which a CA is located and where its affiliates and delegates operate affects whether an individual should trust it. For instance, because governments can compel CAs within their jurisdiction to issue unauthorized SubCA certificates to spy on encrypted traffic such as email, citizens of autocratic or untrustworthy political regimes might wish to trust only CAs located beyond their governments' reach. 13
Similarly, companies might wish to avoid trusting CAs that are either affiliated with or potentially controlled by governments that they believe would facilitate industrial espionage on behalf of state or private competitors in that jurisdiction. However, CAs don't currently disclose enough information for even vigilant users to know which jurisdictions have influence over the certificates that users rely on — especially certificates emanating from RAs, SubCAs, and cross-signed CAs. Currently, the CA Browser Forum guidelines require that only the CA's country be disclosed. RAs' identities, together with the jurisdictions in which they reside, are completely invisible in the CA trust model. If a relying party wishes to avoid trust being anchored in an entity located in jurisdiction X, the current model offers no way to enforce that choice. CAs that purport to be located in jurisdiction Y might also have RAs in jurisdiction X.
Location (of the CA, RAs, SubCAs, cross-signed CAs, and the relying party) is only one of many possibly relevant trust factors. Others include track record, parent/subsidiary affiliation, number of outstanding certificates, and global reach. One technical-structural approach to consider might be enabling like-minded relying parties to curate their own root CA lists. Inspired by the success of customized "ad block" lists, a few dedicated users might create and maintain tailored root CA lists for the larger community's benefit. Greater CA transparency would go a long way toward enabling such tools. More research should be done on how to enable trust agility for users who have different trust profiles while also facilitating a low-barrier "set it and forget it" user experience.
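As a sketch of what such curation could look like if CAs disclosed the relevant metadata (which today's model doesn't require), consider a simple community-maintained filter over a hypothetical, annotated root store:

```python
# Sketch of a curated root list: a filter over per-CA metadata.
# The store entries and jurisdiction codes are hypothetical; no CA
# discloses this information in a machine-readable form today.

root_store = [
    {"name": "CA Alpha", "jurisdiction": "YY", "government_affiliated": False},
    {"name": "CA Beta",  "jurisdiction": "XX", "government_affiliated": True},
    {"name": "CA Gamma", "jurisdiction": "YY", "government_affiliated": True},
]

def curate(store, excluded_jurisdictions=(), allow_government=True):
    """Return the names of CAs matching the relying party's policy."""
    keep = []
    for ca in store:
        if ca["jurisdiction"] in excluded_jurisdictions:
            continue  # trust must not anchor in this jurisdiction
        if not allow_government and ca["government_affiliated"]:
            continue  # exclude government-affiliated CAs
        keep.append(ca["name"])
    return keep

# a relying party that distrusts jurisdiction "XX" and government CAs:
print(curate(root_store, excluded_jurisdictions={"XX"}, allow_government=False))
# ['CA Alpha']
```

A community could publish such policy lists much as ad-block lists are published today, letting less technical users adopt a profile once and forget it.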
Strategies for Improvement
The problems with the CA trust model haven't placed it beyond redemption. Three categories of discrete improvements could make the model significantly better. First, transparency could enable meaningful choice by relying parties. The current lack of transparency impairs relying parties' ability to know the identity of RAs, the identity of all SubCAs and cross-signed CAs, and the jurisdiction in which the RAs, SubCAs, and cross-signed CAs reside and carry out operations. This lack of transparency prevents software developers from having sufficient data sources to provide solutions that would allow end users to trust or un-trust CAs based on this information. To improve transparency and choice, CAs should
• be required to make complete online disclosure of the identity and legal jurisdiction of all of their RAs, SubCAs, and cross-signed CAs;
• be required to disclose governmental affiliation, ownership, and control of themselves, their RAs, SubCAs, and cross-signed CAs; and
• be advised by self-regulatory bodies that blanket liability disclaimers in certificate policies, CPSs, and relying party agreements should be accompanied by at least some degree of one-time actual notice to relying parties.
The second problem area is audits. The CA audit regime could be improved in the following ways:
• Any party that performs identity verification or can cause the CA to issue certificates should be audited at the same level as a root CA.
• Self-regulatory bodies such as the CA Browser Forum should require more detailed information regarding audit results to be made public (that is, something beyond a pro forma two-page attestation).
The third area relates to the self-regulatory process. Although the CA Browser Forum has made some significant improvements in its requirements for certificate issuance, its internal processes are burdened by opacity and limited participation. Accordingly, self-regulatory bodies should
• conduct their work in a more open and transparent manner; and
• continue to broaden participatory scope, especially by representatives of the relying party community.
The CA trust model has global reach and pervasive deployment. Although systems have been proposed to help enhance this model's reliability, no comprehensive replacements are on the horizon. Moreover, the model has much to recommend it in terms of scalability, elegance, capacity for evolution, and collaborative solutions. It also enjoys substantial institutional commitment from software and browser vendors. If its transparency, audits, and self-regulation improved in the ways noted, it might be structurally sound enough to survive as the foundation of online trust.
Steven B. Roosa
is a partner in Holland & Knight's New York office and cochair of its Data Privacy and Security team. His practice focuses on advising companies on mobile app privacy compliance, Internet tracking, Web security, geo-fencing, certification authority matters pertaining to online trust, and Web-based reputation issues. Roosa holds a law degree from Rutgers School of Law and is a fellow at the Center for Information Technology Policy at Princeton University. Contact him at email@example.com.
Stephen Schultze is associate director at the Center for Information Technology Policy at Princeton University. His work includes Internet privacy, computer security, government transparency, and telecommunications policy. Schultze has a BA in computer science from Calvin College and a master's degree in comparative media studies from the Massachusetts Institute of Technology. He served as a fellow at the Berkman Center for Internet and Society at Harvard University. Contact him at firstname.lastname@example.org.