For those, like me, wondering who the author might be, it appears to be this guy: "Adam Langley works on both Google’s HTTPS serving infrastructure and Google Chrome’s network stack. From the point of view of a browser, Langley has seen many HTTPS sites getting it dreadfully wrong and, from the point of view of a server, he’s part of what is probably the largest HTTPS serving system in the world - See more at: http://www.rsaconference.com/speakers/adam-langley#sthash.HM...
He's also the author of Golang's native crypto/tls TLS stack, a longtime contributor to the IETF TLS WG, and the author of some of OpenSSL's curve software. He's not messing around.
On the other hand, we found that as of Friday, Chrome DID NOT recognize that one of our wildcard certs for Efficito had been revoked. We sent our customers an email telling them to enable cert revocation checking.
Revocation isn't perfect, and I would not suggest the current status quo is OK, but the intermediary approach Chrome takes cannot be trusted, as they have now shown.
If Chrome will not show our cert as revoked, what is the point of revoking it? The author has some good points, but the approach Google is taking is a cure worse than the disease...
Honestly, I don't see a point in certificate revocations anymore, i.e., your implicit conclusion seems to be correct. And I don't blame Google for our broken revocation system – especially because even the best revocation system couldn't fix a certification system that is broken at its core.
Google's problem is they decide which revocations are worth passing on to the browser. That's at least as broken by design.....
Believe me I am aware of the limits of soft-fail, but the answer cannot be even in the short-run to let a browser vendor tell us which revocations are worth knowing about.
> but the answer cannot be even in the short-run to let a browser vendor tell us which revocations are worth knowing about.
So you trust the browser vendor to ship you executable native code but you don't trust the browser vendor to apply reasonably decent criteria for the top x% most-needed cert revocations on the Internet?
Well, if they were pointless, Google wouldn't even hand you a subset of revoked certificates. The fact that they hand you a subset of revoked certificates from participating CAs makes their solution worse than the disease, frankly.
It might be OK if used in addition to checking revocation lists. However, why should a bank get to have their certificate in the CRLSet but a SaaS provider not? Or do you really trust Google there?
Frankly Adam doesn't really believe revocation is pointless. If he did, he wouldn't even suggest that sending a valuable subset of certificates to the browser in a batch is any sort of solution at all. All that does, though, is create a two-class secure internet: those entities Google deems worth distributing revocation information for and those not. That isn't a solution to anything.
Online revocation is pointless. It sounds like you didn't actually read the article, but are happy to slam the one team on the Internet that has given serious consideration to the obviously-broken SSL revocation system. Can I ask you to take a breath and reread the article?
So is getting a subset of revoked certs Google deems "valuable." In fact, that may be even more dangerous since it establishes first class secure sites vs everyone else.
Why should Yahoo's cert revocations get into the CRLSets but not those of less well-known sites? How is that less broken than online revocation?
Keep in mind, my big objection is:
Google did not distribute our certificate revocation in their CRLSet, presumably because we weren't large enough. That is not a fix for anything.
Ok, fair enough. I am just making sure my objection to Google's approach is clear.
I would be OK if they guaranteed complete CRLSets from all participating CAs. Since they don't, their solution is more broken than what they are replacing.
So I acknowledge that online revocation is problematic. I just think the CRLSet approach is an order of magnitude worse when the CRLSet is a subset of revoked entries sent by the CA.
Respectfully, I think an accurate summary of your argument is that you would rather pretend to be secure using broken online revocation checks than to have to stomach the Chromium team providing a marginal amount of actual security by deciding which sites are and aren't worthy of protection.
The implied-preference-set shouldn't be restricted to the false binary choice of "the broken standardized system" and "Google's half-fixed proprietary approach".
With the talent & resources that Google has, or the talent & resources that Mozilla has, or the talent & resources that Microsoft has, this should have been better solved, in a way that works for all TLS-reliant applications, years ago.
Using Chrome's built-in auto-updates to make a subset of "high-value revocations" work, at a daily frequency, for Chrome users only, is not a very web-friendly solution.
It's like a gated community hiring its own rent-a-cops... maybe that's an improvement for the fortunate ones on the inside, and maybe a necessary stopgap. But to people outside that perimeter – like someone whose revocation doesn't make it into the Google CRLSet – it feels like an abdication of duty by the web's stewards.
Christ, this kind of reactionary armchair criticism really is a cancer of HN these days.
Adam has been pushing the state of the art in cryptography in the practical realm for years, and your criticism is "They should have just solved this problem better! They should just pull their finger out and get working on it."
Making a real difference in the chaotic realm of standards bodies and browser vendors is a lot harder than it looks. Adam has an impressive track record for actually improving internet security for users.
Where's your suggestion for how revocation could be solved better and implemented in a practical way? I don't see you rolling up your sleeves to get the actual work done.
Yes. It has been for a long time. Long term, we want to figure out ways to improve thread quality. Not so much in terms of the toxicity problem, which has seen some progress lately (we hope), but the arguably harder problem of voluminous uninformed commentary clustering around the mean. If you (or anyone) have any suggestions, I'd love to hear them. hn@ycombinator.com is the best place to send them.
(This is not about any particular comments in the current thread, only the problem in general.)
The criticism isn't specifically for Langley, but rather for all of Google-Mozilla-Microsoft. They've all been derelict on this, knowing the certificate-revocation standards are hopelessly broken, but not fixing it from their positions of power and responsibility.
(Might it take a product-liability lawsuit, where somebody suffers financial loss because Chrome is deceptively showing the "secure" indicator even for certificates revoked days/weeks/longer ago?)
Consider the CRLSet approach. Maybe it's the best any browser-maker has done, and a useful learning-exercise for a future industry-wide fix. But it still sucks. It's capped at 250KB, assembled via an opaque editorial process, only refreshed daily, only protective of Chrome users and some chosen subset of "high priority" certificate-revocations, and still vulnerable to an arbitrarily-long attacker embargo against the Chrome update servers. Langley himself mentions it can't scale to the recent need for higher-volume revocations.
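For context, a CRLSet-style lookup is structurally simple. The sketch below follows Langley's public description of the format – a hash of the issuing CA's public key (SPKI) mapped to that CA's revoked serial numbers – but the data and helper names here are made up for illustration:

```python
import hashlib

# Toy CRLSet: sha256(issuer SPKI) -> set of revoked serial numbers.
# Real CRLSets are assembled server-side by Google and shipped with
# Chrome updates; this data is invented.
crlset = {
    hashlib.sha256(b"example-ca-spki").hexdigest(): {"03:29:ff", "1a:2b:3c"},
}

def is_revoked(issuer_spki: bytes, serial: str) -> bool:
    spki_hash = hashlib.sha256(issuer_spki).hexdigest()
    return serial in crlset.get(spki_hash, set())

print(is_revoked(b"example-ca-spki", "03:29:ff"))    # True
# A cert absent from the set looks fine -- exactly the gap this thread
# complains about: absence of data reads as "not revoked".
print(is_revoked(b"example-ca-spki", "de:ad:be:ef"))  # False
```

The lookup itself is trivial; the contentious part is the editorial process deciding which entries fit under the size cap.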
Compare it with Google's own "Safe Browsing" blacklist system. That delivers megabytes of compressed privacy-protecting blacklists, constantly refreshed via incremental updates to maintain a freshness of under 45 minutes, in a manner available to all browser makers.
So a potential version 1 of a better approach: leverage Safe Browsing to deliver fresher certificate blacklists to more end-users, and also warn users (not just silent-fail) if they're making 'secure' connections but are too-many-hours behind the best available revocation information.
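The "warn on staleness" half of that proposal can be sketched in a few lines. The six-hour threshold and the status names are my own assumptions, just to make the idea concrete:

```python
import time

# Sketch: keep the lock icon honest by tracking when the local
# revocation blacklist was last refreshed. Threshold is illustrative.
MAX_AGE_SECONDS = 6 * 3600

def connection_status(cert_ok, revocation_fetched_at, now=None):
    now = time.time() if now is None else now
    if not cert_ok:
        return "INSECURE"
    if now - revocation_fetched_at > MAX_AGE_SECONDS:
        # The cert chain checks out, but our revocation data is old:
        # show the lock with a warning rather than failing silently.
        return "SECURE_BUT_STALE"
    return "SECURE"
```

The point is that "stale revocation data" becomes a visible third state instead of being silently treated as "secure".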
However, the implication that I, as a lone individual, must "roll up my sleeves to get the actual work done" for my words to have weight here is both nonsensical and (speaking of rhetorical cancers on HN) unnecessarily personal. Google, with plentiful expertise and cash, is shipping the world's most popular browser – but that browser has multiple inadequate certificate-revocation systems. Yet I should be contributing my expertise to fix this before I may speak? "Christ", that.
Google, with its plentiful expertise and cash, counts as its two fiercest competitors two of the three organizations that would need to agree with any new solution to revocation.
Meanwhile, you've offered a "sketch of a solution" that involves CAs (a) cooperating with Google and (b) employing computer science in the pursuit of writing actual code. Why not talk to Comodo and see how likely that is to happen?
You know who thinks CRLsets suck? Adam Langley. Certificate revocation is broken. They did what they could. Google didn't design SSL revocation.
All Google has done is orchestrate the adoption of TLS forward secrecy, fix security vulnerabilities (for many years running) in OpenSSL (along with virtually every other component of modern web browsers, plus ffmpeg and the open-source video stack), invent and deploy browser certificate pinning, spearhead the deployment of Certificate Transparency, oh, and find/fix Heartbleed.
But, I'm sure your comments are great. Why not take them to the IETF TLS WG? I'm sure you'll find an eager audience. I am not kidding.
Cooperation of competitors is not necessary — that's an excuse for inaction. As with 'Safe Browsing' (or CRLSet), one browser could lead the way, letting the others follow the same model or improve later.
Similarly, CA buy-in is not a blocking prerequisite for a better approach — it's just another excuse for inaction. Exactly as with CRLSet, the browser vendor can say, "we'll scrape your revocations where we can find them, or you can provide them this way". Then, if an incompetent or recalcitrant CA hides their revocations, it's an issue between the CA and their harmed customers.
It's great you, me, and Adam Langley all agree that revocation is broken, including Google's stopgap proprietary solution. But hasn't everyone understood that for 15+ years? Why isn't there a fix?
The browser makers absolutely own responsibility for this, because they're the ones that show end-users a security indicator. They're shipping the software that creates a risk, they can unilaterally fix this on their own initiative, and they're not so poor or stupid that fixing it should be beyond their capabilities.
Yes, Google has done a lot for security. Their 'web security karma' is very net-positive. They still deserve a demerit, along with Mozilla and Microsoft, on this particular issue.
Referring this to the IETF for standardization is just another way of excusing more inaction and delay.
Why are TLS block cipher constructions still MAC-then-encrypt, 13 years after Bellare and Namprempre proved that was the wrong way to do it? Because standards are hard.
You can blame the browser makers as much as you want, but among them as a group, nobody has worked harder on making TLS better and safer than Google. But here you are berating them for the effort.
Here's the thing. I have been advertising the impact of this decision by Chrome on our SaaS business. It just isn't acceptable that cert revocation means one thing if you are Yahoo but another if you are a startup SaaS business.
As I have said repeatedly, this is a way to ensure that things are comfortable enough for the people at the top that everyone else is sacrificed in the name of it being too much trouble, working for free, etc. But as long as the big sites are protected by Google, nothing will get fixed and us smaller competitors will be screwed.
I am sorry, but that's just morally wrong. And it is the major reason I now recommend Firefox over Chrome.
Again, standards don't need to be a blocker for the revocation issue (as CRLSet itself demonstrates). "Standards are hard" is an excuse for inaction.
I can applaud Google's efforts in general yet still point out when there's one egregious, embarrassing gap. They are one of the only three institutions worldwide that could possibly fix this for users, and I'm not picking on them over the others.
If Amazon, Yahoo, and the small SaaS provider are in the same boat security-wise, then you have the incentive to get the root problems fixed. Google's approach takes away that incentive from the large providers.
I don't think it is just a question of pretending. It is a question of making sure that everyone is in the same boat security-wise so that the root problems in fact get addressed.
What Google does is make Amazon more secure and the small SaaS provider less so. And it makes sure that the big providers have less incentive to fix the underlying concerns.
The Chromium team didn't choose how big to make the lifeboat. They have the one they have. If they've got to choose between putting Amazon in it or you, then, as a user, I'm glad they chose rationally.
It looks to me like the Chromium team did choose the size of the lifeboat: it's a Google-designed feature (CRLSet) based on another Google-designed feature (auto updates).
Isn't CRLSet a Google invention? Doesn't it depend on Chrome software updates?
All I know about it, I read from Langley's writings. Is there a better reference?
Who set the implementation limits, if not Google?
Maybe those limits are justifiable, but that doesn't leave someone who's left unprotected, by what seem like arbitrary policy cutoffs, feeling any better.
Your feelings aren't material to the issue at hand. You're not supposed to feel good about CRLsets.
Google didn't invent this idea. It was suggested by the CABForum.
CRLsets are static, practically hardcoded revocations that every installation of Chrome receives. The idea that https://yourblog.com should expect specific consideration in Chrome updates is about as reasonable as suggesting that we revert back from the DNS to host files.
Yes, feelings are immaterial, but you shouldn't be trying to convince with bluster when you've erred on the facts.
The more I read, the more it seems CRLSet implementation choices were entirely Google's. For example, when CABForum members want information about how CRLSets work, Langley suggests the best (and only!) reference is the Chrome source code:
I am of course open to better information. But for now it still looks like Google did indeed "choose how big to make the lifeboat", unlike your assertion to the contrary.
Also, it looks like the 250KB cap is in Google's unpublished server-side source that constructs the CRLSets. So Google could conceivably "expand the lifeboat" unilaterally with a tiny edit!
For reference, it appears the current 'Safe Browsing' blacklists, never more stale than 45 minutes, are about 2.3MB in size. So the CRLSet cap (250KB) and freshness (1 day) aren't very generous to users.
I said that the idea came from CABForum because that's what Langley said. Moving from 250k to 2.3M still wouldn't put your blog in the lifeboat. Also: the CRL entries in the CRLsets are manually curated; someone is doing that work for you, for free.
And I understood correctly about who chose the "size of the lifeboat", also because of what Langley said.
The person upthread unhappy that Chrome didn't pick up their revocation (einhverfr) isn't worried about a measly blog, but their SaaS business.
If 2.3MB isn't enough to protect everybody, make it 23MB or take whatever other design steps are necessary. The world's most popular browser, from the world's most profitable internet company, in 2014 shouldn't be showing the lock-icon and "valid certificate" hours/days/weeks after a publicly-available revocation.
"Manual curation", rather than being impressive, is a design-smell here. And none of Google's work to outcompete other browsers, using proprietary Chrome features, is being done for me "for free".
So this complete infrastructure is crap. OpenSSL, software half the internet uses but no one cares about because it's crap. CAs not revoking keys even though they know they're compromised. Revocation being worthless because it's too much of a hassle for anyone to bother.
Great. Maybe now, when half the internet is already compromised and all our certificates are not worth the bytes they're made of ... maybe we should try to come up with something better.
edit:
Actually, this whole heartbleed affair has been quite eye-opening for me, so I'm thankful for that.
But it certainly didn't help with the paranoia I feel the last couple of years while using services on the internet.
Yes! Now it's time for us to generate a whole new broken infrastructure! I'm sure if we just rewrite all the Internet's crypto in Rust, everything will be great 10 years from now. No way will a radically different new transport cryptosystem grant researchers 100 new bugs to play with; after all, we'll have option types.
You're right to mock the attitude people have that the only thing wrong with OpenSSL is the language it's written in, but memory unsafety has nevertheless been a factor in many security flaws.
I'm no security expert but I guess there could be ways to keep TLS as a protocol more or less unchanged while fixing the obviously broken stuff surrounding it.
Don't forget that 90% of the world's certificates are issued by five commercial CAs, who happen to be friendly with various national security agencies.
Still seeing lots of explanation about why the current system sucks, and not much about how a more robust system might be created and promptly adopted. Langley (the author) mentions short-lived certificates (either rapid expiration or via a 'must staple')... how soon can we enforce that? How short can that make the danger-period where the CA, and Google, and the "connected web" all know that a certificate is invalid, but a user-at-risk does not?
Why not other ways to rapid-broadcast invalidity in censorship-proof ways, so that a browser encircled by an enemy can quickly figure out something's wrong? (Or, why can't security professionals get around interdiction as effectively as copyright pirates do?)
how a more robust system might be created and promptly adopted
I'm quite fond of how the SSH host key system works.
Prompt me the first time I see a new key, provide me with supporting evidence (e.g. show me how many people have previously accepted this fingerprint for this domain) and alert me the same way in the future if the key ever changes.
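The prompt-and-pin mechanic described above can be sketched quite simply. Persistence (a known_hosts-style file) is left out here; `store` is just a dict the caller keeps around, and all names are illustrative:

```python
import hashlib

# Trust-on-first-use (TOFU), SSH-style, applied to TLS certs:
# remember the first fingerprint seen per host, flag any later change.

def fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

def check_host(store: dict, host: str, cert_der: bytes) -> str:
    fp = fingerprint(cert_der)
    if host not in store:
        store[host] = fp      # first contact: prompt the user, then pin
        return "NEW"
    if store[host] == fp:
        return "OK"           # matches the pinned fingerprint
    return "CHANGED"          # alert loudly: the host's key has changed

store = {}
print(check_host(store, "bank.example", b"cert-A"))  # NEW
print(check_host(store, "bank.example", b"cert-A"))  # OK
print(check_host(store, "bank.example", b"cert-B"))  # CHANGED
```

The "supporting evidence" shown at the NEW and CHANGED prompts is where the plugin idea below would slot in.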
If the 'supporting evidence' was plugin-based then this system could quickly become more user-friendly and trustworthy than the current centralised system can ever be.
There could be plugins to automatically trigger a SMS challenge on first contact with particularly sensitive sites. Multiple competing P2P web-of-trust plugins, plugins that let you follow trust-lists from third parties, etc.
In the current system you rely on a single, very questionable opinion on the trustworthiness of a given certificate. In the new system you'd be presented with a trust score compiled from a whole range of opinions, from sources you chose beforehand.
Of course this approach doesn't include a license to print money for corrupt CA organisations and is not going to happen for that reason alone.
As I said, it should be pluggable. Yes, you will still need seed fingerprints for a few independent plugins (like the CA list of today), and you still need to trust or vet the browser itself (also like today).
The point is that once this seed trust has been established, which ideally would need to happen only once in your lifetime (given proper sync/backup facilities), you gain actual control over who you want to delegate your trust to, if at all. On a site-per-site basis.
If an American, a Chinese, and an EU database independently agree on a fingerprint for a site, that would be an actual trust indicator. Very much unlike the perpetually compromised zoo of certificate authorities we have today.
And obviously once there's a market for plugins you'd quickly see plugins going far beyond what we get to know today (read: essentially nothing). There could be subscription-based plugins providing detailed information about the remote party, down to credit ratings, company history, you name it.
All interesting ideas... but don't directly address rapid trust revocation, as in the case of recent relevance: a site's private keys are assumed to have been compromised (as if by the heartbleed bug).
Or are you suggesting every browser will contact many of its personal web-of-trust sources on every secure-connection? Without additional innovation, that seems just as prone to the performance bottlenecks or soft-failure (on stale data or blocked connections) as the current system.
Definitely agree that prompt revocation shouldn't be as hard as these apologetics-for-Chrome imply, and shouldn't have festered as an unsolved problem for so long.
See my other post. Systems with shorter term authentication have been around for decades. The problem with X.509 is that it centralizes authorization with the browser vendor.
Yes, revocation is broken by design, especially with mobile and the Chrome browser. I'd say it's broken everywhere except Firefox with OCSP hard-fail enabled.
Thanks to this flaw, StartSSL's business model – free certs, paid revocations – has become somewhat outdated IMHO.
I'm dreaming that we can fix the revocation issue with 24-hour-valid certificates. Suggested at the end of my post.
But I must be naive here, since it seems too simple – I just haven't found the flaw myself. Yes, it needs technical orchestration, but at least it doesn't add an extra single point of failure for every session.
EDIT: Just finished the OP post and it does indeed also mention "short-lived certificates" in the end as a potential solution.
Indeed, short-lived certificates do seem like a solution to this problem. One downside might be the fact that (anecdotally) many users have inaccurate clocks. I read somewhere recently that a large web site has to back-date their new certificates, because, otherwise, certificate rotation/revocation causes a large spike in support tickets.
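The back-dating trick mentioned above is easy to sketch: issue a 24-hour cert whose notBefore is nudged into the past so clients with slow clocks still accept it. The one-hour skew allowance here is my own assumption:

```python
from datetime import datetime, timedelta, timezone

# Illustrative parameters: tolerate clocks up to an hour slow, and let
# expiry itself act as revocation after a day.
CLOCK_SKEW = timedelta(hours=1)
LIFETIME = timedelta(hours=24)

def validity_window(now=None):
    now = now or datetime.now(timezone.utc)
    not_before = now - CLOCK_SKEW   # back-date for clients with slow clocks
    not_after = now + LIFETIME      # short-lived: stale certs die on their own
    return not_before, not_after

def is_valid(client_clock, window):
    not_before, not_after = window
    return not_before <= client_clock <= not_after
```

Clients whose clocks are off by more than the skew allowance still break, which is exactly the support-ticket spike the back-dating tries to dampen.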
I think the author is a little disingenuous with the term "security theatre". Basically he argues that OCSP doesn't work because hard fail might cause DoS -- but fails to conclude that without OCSP, SSL/TLS is useless. It's a long argument for saying that the CA system is broken (you can only trust the whitelist Chrome provides) -- and the sensible conclusion, that you cannot trust any other certificate chains (without OCSP), is left out.
Without certificates, SSL/TLS falls apart.
Perhaps a better use of CAs would be to always delegate authority to the domain owner -- we'd only need OCSP for the CAs, and domain owners could issue hour/day-valid certs via a cert infrastructure. That would push a lot of complexity down to domain owners, it would probably lead to a lot of errors in implementation -- but those errors would only affect the domains -- not the main CA trust chain as such.
I'm not sure if that would be an improvement or not -- but at least you could know that if a domain was run correctly, a valid certificate could actually be trusted...
Sure, if "making the DNS totally unreliable", "baking 1990s crypto into the core of the Internet", and "conceding the CA PKI to world governments" is your idea of "better use of CAs".
> "conceding the CA PKI to world governments" is your idea of "better use of CAs".
How much different is this than the current CA situation? Just recently a subordinate CA of ANSSI (the French Network and Information Security Agency) issued a wildcard cert that could MITM just about anything.[^1] Firefox's list of trusted CAs includes:[^2]
China Internet Network Information Center (CNNIC)
Government of France
Government of Hong Kong (SAR), Hongkong Post
Government of Japan, Ministry of Internal Affairs and Communications
Government of Spain, Autoritat de Certificació de la Comunitat Valenciana (ACCV)
Government of Spain (CAV), Izenpe S.A.
Government of The Netherlands, PKIoverheid
Government of Taiwan, Government Root Certification Authority (GRCA)
Government of Turkey, Kamu Sertifikasyon Merkezi (Kamu SM)
Hong Kong
Firefox's list of pending CAs includes additional government CAs.[^3] Things are no different in Redmond. There are at least 56 government CAs in Microsoft's Root Certificate Program (56 of the certs start with "Government"; there are probably others with less obvious names).[^4]
That was my point, too: the theorized system already exists. I'm definitely not advocating its use.
Ultimately, the URL bar needs to go away. More fundamentally, the asymmetric relationship between very large organizations that authenticate their identity with browser CA certs, and individuals who authenticate their identity with passwords needs to change.
Cryptographically generated addressing schemes like Telehash can do the automate-able stuff better than the current CA situation. The problem (and solution) I'm struggling to articulate involves the fact that granular authorization systems and trust databases need better UI before we can really fix this.
I suspect cheaper hardware tokens will play a significant role.
No, not really. DNSSEC secures DNS, allowing it to be used (among other things) as a secure transport for delivering other certificates.
CAs already provide the same trust infrastructure, but due to incentives do not sign delegating certs by default, but typically charge extra for certs that allow the owner of a domain to set up their own internal CA.
DNSSEC only works (reasonably) for TLDs where DNSSEC is implemented, and when DNS resolvers implement checking -- many ISPs don't. Delegating CAs are already part of SSL/TLS.
DNSSEC requires fiddling with the DNS information -- delegating CAs, while requiring issuing certs, only require configuration of the various services (web, IMAP, SMTP, etc.) -- as per usual.
For delegating CAs to improve the security of any given domain -- some infrastructure would be needed to set up an intermediate CA. I guess for small organizations, OCSP wouldn't really be needed, as the CRLs would be small (assuming different CA for public facing services and stuff like personal certs for users). Another option would be to simply roll certificates frequently.
Certificates bind a public key and an identity (commonly a DNS name) together.
...when issuing certificates a CA validates ownership of a domain by sending an email, or looking for a specially formed page on the site.
So, if "DNSSEC secures DNS," as you say, why do we need certificates at all? Considering that the CA already depends on DNS to issue (many) certificates, why not cut out the middle man and simply publish the domain's public key in a DNS record? What actual value does a certificate offer other than that? I'm genuinely asking, because I'm curious if DNSSEC provides a way for a domain owner to establish trust in the domain's DNS records, that cannot be tampered with by the domain's registrar.
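The "publish the key in DNS" idea is, in spirit, what DANE/TLSA does: DNS (secured by DNSSEC) carries a hash of the server's key, and the client compares it against what the server presents in the handshake. A minimal sketch, with a made-up record format:

```python
import hashlib

def dns_key_record(pubkey_der: bytes) -> str:
    # What the domain owner would publish in DNS (hash of the public key).
    return hashlib.sha256(pubkey_der).hexdigest()

def key_matches_dns(presented_pubkey_der: bytes, dns_record: str) -> bool:
    # Client side: does the key from the TLS handshake match DNS?
    return hashlib.sha256(presented_pubkey_der).hexdigest() == dns_record

record = dns_key_record(b"server-public-key-der")
print(key_matches_dns(b"server-public-key-der", record))  # True
print(key_matches_dns(b"attacker-key", record))           # False
```

The hard part, of course, is not the comparison but trusting the DNS answer itself, which is where the registrar/DNSSEC questions in this thread come in.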
I think the fundamental problem with DNSSEC is it doesn't go far enough. DNS really is the original directory service of the internet. If we could generalize it a bit, and allow TCP queries secured by Kerberos, with far better row-level security, there's no reason we couldn't replace LDAP with something a lot simpler.
If we could replace LDAP with something a lot simpler, we could also replace X.509 with something a lot simpler (LDAP is a mildly simplified version of X.500, of which X.509 is a very closely related OSI legacy standard).
So DNSSEC doesn't go far enough. IMO, we should be working on ensuring that it can be extended to allow for certification of hosts, etc. However, at present it doesn't do this very well.
This sounds very much like the directory system I want to build on top of Telehash.
The problem I still see is creating a global directory of Kerberos realms. There still needs to be a sneakernet component for private key distribution. (Maybe that would be a better use of the armored cars I see making regular stops at the banks around town.)
Ok, so suppose registering a domain also required registering a public key for the domain name. This would require a change for the root realm. Suppose we were to extend the DNS protocol so that when you do an NS lookup, you get the keys of the NS's as well (might require SRV records or the like in the root tree). You'd need a mechanism to revoke or rotate keys, but that could be done.
From there it should be quite possible to tie public keys to hosts all the way down since you now have a chain of trust. Should be trivial, but the problem is that I don't see how to get the root zone to publish domain keys.
Yes, that replaces CA's with domain registrars, but that is a healthy trade. Since you would get keys (all allowed keys!) all the way down with your query, you wouldn't need to check revocation because you'd know the keys before making the connection.
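The walk down such a chain of trust can be sketched as follows. All names and keys are invented, and an HMAC stands in for a real signature scheme (a resolver would verify asymmetric signatures, not shared-key MACs); the point is only the shape of the verification loop:

```python
import hashlib
import hmac

def endorse(parent_key: bytes, child_name: str, child_key: bytes) -> bytes:
    """Parent zone vouches for (child_name, child_key). HMAC is a
    stand-in for a real signature here."""
    return hmac.new(parent_key, child_name.encode() + child_key,
                    hashlib.sha256).digest()

ROOT_KEY = b"root-zone-key"        # pinned in the resolver: the trust anchor
com_key = b"com-zone-key"
example_key = b"example.com-key"

# What a resolver would receive alongside its NS lookups:
chain = [
    ("com", com_key, endorse(ROOT_KEY, "com", com_key)),
    ("example.com", example_key, endorse(com_key, "example.com", example_key)),
]

def verify_chain(anchor: bytes, chain):
    """Walk from the trust anchor down, checking each delegation."""
    current = anchor
    for name, key, endorsement in chain:
        expected = endorse(current, name, key)
        if not hmac.compare_digest(expected, endorsement):
            raise ValueError(f"bad endorsement for {name}")
        current = key
    return current  # the leaf key, now trusted
```

Because the resolver learns every endorsed key during the walk, there's no separate revocation lookup: a key not in the chain simply never becomes trusted.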
What you're suggesting doesn't sound too far from DNSSEC as it is currently implemented to me (ignoring adoption rate). I'm questioning the need of an authoritative global naming system at all.
From a user's perspective, "my bank" and "your bank" might be the same thing, or they might be different. When I care about verifying the identity of these things, why not just go to the source? I can do key exchange every time I visit an ATM.
I can imagine using DNS with multiple contextual root namespaces, with the trust anchors being managed by more direct human relationships. This wasn't feasible when the systems were originally designed, but now we can put public keys in jewelry. Keychains on our literal keychain.
>> Certificates bind a public key and an identity (commonly a DNS name) together.
>
> So, if "DNSSEC secures DNS," as you say, why do we need certificates at all?
Yes and no. Note that a cert binds an identity, not just a DNS name (but
that is what is needed for web servers).
DNSSEC doesn't work without resolvers checking for the DNS keys, and
it's not immediately clear (to me at least) if the various higher level
clients can transparently detect if a DNS name is secure or not (similar
to how a web browser can't tell if it's accessing a resource over a
secure IP based VPN and can therefore safely transmit credentials via
plain auth).
For trust to work, there needs to be integration of the chain of trust
all the way from the user to the server. TLS/SSL already provides this
-- and with delegation the infrastructure is in place for owners to
manage trust for their own domain (and it is already possible, but
typically expensive).
In it's barest form DNSSEC only makes DNS secure, which prevents DNS
spoofing. If you also place a cert (could be self-signed) in DNS, then
you have a "full" solution to securing communications. You would be able
to download the cert without DNSSEC, but unless the chain of trust of
the cert could be verified some other way, you wouldn't be able to use
that cert for secure communications.
It is true that current CAs bind a cert to a domain name, but it's not
really the domain name part that is interesting, it's the entity
identifed by that name. So your browser can say, I don't care where this
authenticated (and encrypted) data stream is comming from, I just care
that it is backed by example.com (that is backed by example-ca.com) --
and if the user thinks that Example corp. owns the example.com domain,
one can then infer that the browser is really talking to a web site set
up by Example corp -- regardless of which IPs and DNS records are
involved.
Keep in mind that the same CA infrastructure allows a user to indenify
to a server as user@example.org -- from any ip or doman name -- just
as securely, via mutal trust in "Example CA". I think it's somewhat
unfortunate that DNS is so tightly integrated into the user interfaces
for the web -- asserting things about IP addresses and DNS names isn't
really all that interesting -- it's asserting things about entities that
is interesting.
While I'm no fan of the current CA system, I'm not convinced DNSSEC is
securing the right things at the right protocol level(s).
I just did and if you were okay with a 0.001 probability of false positive you could list all 500,000 (possibly way off) certificates potentially exposed through heartbleed in only 877.5KB of space. The current Chrome CRL contains 24,161 serial numbers and takes up 305.3KB of space. While it isn't a perfect fix for the revocation problem it would certainly be much better than the status quo.
One problem might be that the 0.1% of sites hit by a false positive effectively couldn't use OCSP stapling, but Chrome could just first call back to Google as a CRL proxy to avoid making an OCSP request when the site stapled a valid but potentially revoked OCSP response. Then just store that response from Google for the current version of the CRL in the cache. The end result is that the unlucky false-positive sites don't have tons of unnecessary (unnecessary as far as the OCSP spec is concerned) OCSP requests going to the CAs, and the only thing they would notice is that a new visitor takes 100ms longer to make the first page load.
And through the magic of Bloom filters, if you wanted to bump the false positive rate down to 1 in 10,000 it only bloats the list to 1.14MB. Furthermore, there are methods to make the Bloom filter scalable such that a client doesn't necessarily have to download the whole Bloom filter again when a bunch of elements are added to it, and can instead download just the portion of the data required for a full update.
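For the curious, the sizes quoted above fall straight out of the standard Bloom filter formulas; a quick sketch to reproduce them (the 500,000-certificate count is the commenter's rough estimate, as noted):

```python
import math

def bloom_parameters(n_items: int, false_positive_rate: float):
    """Optimal bit count m and hash count k for a Bloom filter."""
    # m = -n * ln(p) / (ln 2)^2 ; k = (m / n) * ln 2
    m_bits = math.ceil(-n_items * math.log(false_positive_rate) / math.log(2) ** 2)
    k_hashes = round(m_bits / n_items * math.log(2))
    return m_bits, k_hashes

m, k = bloom_parameters(500_000, 0.001)
print(m / 8 / 1024)          # ≈ 877.5 KB, using ~10 hash functions

m, _ = bloom_parameters(500_000, 0.0001)
print(m / 8 / 1024 / 1024)   # ≈ 1.14 MB
```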
The more I think about it the more I wonder why this isn't already in Chrome in some form or another. The only downside is weird networks where OCSP might be filtered, but not https, and access to Google is filtered.
Edit: One thing I feel stupid for overlooking is that Bloom filters aren't cryptographically secure, so an attacker could theoretically find a serial number for some CA that would cause a site to always be a false positive. But I don't think any CAs are still giving out serial numbers in a predictable way after the MD5 debacle, and even if they were, it would seem impractical to me. The fix would just be to use a SHA-256 hash of the serial instead of the serial itself.
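A minimal sketch of that hashed-serial Bloom filter (class and parameter names are illustrative, not anything Chrome actually ships):

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int, k: int):
        self.m = m_bits
        self.k = k
        self.bits = bytearray((m_bits + 7) // 8)

    def _indexes(self, item: bytes):
        # Derive k indexes from a SHA-256 digest of the item, so an
        # attacker can't pick serial numbers that collide on purpose
        # without breaking the hash (the fix described above).
        digest = hashlib.sha256(item).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1  # odd stride
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item: bytes):
        for idx in self._indexes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item: bytes):
        # May report false positives (never false negatives).
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indexes(item))
```

A hit would then trigger the full check against Google's CRL proxy or OCSP; a miss means the cert is definitely not on the revocation list.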
Oh, I wasn't thinking about having this be a perfect oracle; just a better and smaller first pass (edit:) for CRL, not for OCSP.
I got the idea from Squid and the network of caches.[1] That body of experience may be helpful.
For shrinking the size, RLE might work (most entries would be 0), and rsync may reduce bandwidth. It looks like the Squid network just used http requests for refreshes. There's probably a sweet spot for bandwidth, and I'd guess that 90-99% would work fine; you're balancing the size of the continually updated bloom filter vs. the requests for certificates that match it. I didn't worry about false positives, because it could just send an OCSP query in that case.
Your numbers for revocations sounded very low, but I just used crlset-tools[2] and checked, and it's about right. Which is weird, because someone else[3] mentioned a size of "4.107Kb" at version 1567, but that's somehow different - compression, perhaps. I thought I'd heard about CRLs megabytes long, but Google Chrome seems heavily curated re: CRLs.
I'd hash over signatures instead of the oft-predictable serial numbers, as you noted.
Just make sure you still check revocation of code signing certificates. Otherwise you will end up running malware that is signed with a legit key they got off my stolen Windows laptop.
This argument only holds if the attacker controls every internet connection you use. If you're on a portable device or you're otherwise connecting through various networks, only a subset of which are compromised, revocations are still useful.
Exactly. If I'm on my trusted network at home and receive a big revocation list, and a few weeks later go to, say, Egypt, and someone tries to MITM me there with a stolen certificate, then it would show up as invalid.
When I hear these arguments, I always look for what is wrong with OCSP Must-Staple. The author says that at the bottom it might be a solution with short-lived certs, but I don't see the need for super-short-lived certs, only short-lived OCSP staples. The author presents this as the problem:
> if the attacker still has control of the site, they can hop from CA to CA getting certificates. (And they will have the full OCSP validity period to use after each revocation.)
The solution here is to not allow OCSP stapling to be used when requesting a new certificate, and to use a full OCSP check to verify that the cert wasn't revoked.
I'm honestly kind of surprised how little action there has been to assist with a migration away from the CA model. The technology is there, but people just don't seem interested enough to leverage it.
Systems like Namecoin could serve this purpose marvelously. Powerful devices have direct access to the entire cryptographically authenticated DNS and certificate database. Weak devices can specify whom they trust to provide them with DNS/certificate data, and even those devices get some cryptographic security guarantees thanks to technologies like SPV.
Why have a single entity at all? Moxie Marlinspike proposed Convergence (https://www.youtube.com/watch?v=Z7Wl2FW2TcA) as a solution - I think that something like that has far more potential to gain traction than a Namecoin-based system.
I should be able to choose who I trust, a notary system would allow me to do just that. No central CA systems.
The biggest concern I can see is identity management, but, as mentioned by Moxie, most of these CAs don't do anything close to proper identity management any more - I have a number of certificates bought from quite a few different CAs, all made out to my rabbit, at no fixed address.
Notaries can, of course, do additional verification - they could even advertise this as a premium.
I don't see why this can't be extended to DNS lookups either. I trust X notaries and pin the results I get; I can choose to trust a majority, or be hyper-paranoid and require everyone to agree. No need to run a power-hungry blockchain, no single point of technology failure.
Technically, all of that is feasible today. And I imagine we will see a number of different technologies combined to form a proper, decentralised, system.
Most of the energy in this space has gone into http://tack.io/ which has been called "a non-controversial first step" - I believe it is making its way through standards talks at the moment, although I have not looked into it for a while.
Personally I think now is a good time to revisit assumptions made a few years ago - security and privacy, and in particular non-government-controlled systems, are on many people's lips.
If I ever clear my current plate, I would be interested in diving into the problem.
There should be a clear statement about the status of Convergence on the web site. IIRC, the Firefox extension has been broken for more than a year now. Why? If Mozilla broke their APIs and made it impossible for the extension to work, then we should know about that. Otherwise, what's the excuse for the extension being broken for so long?
Convergence had the momentum, and there was a small but vocal group of people willing to support it. But, due to project mismanagement and lack of communication, that momentum has been lost.
I don't understand this. Can someone weigh in with an explanation? Convergence works just fine without TACK: I can set up two or more notaries on some VPS somewhere, and my browser would check whether the notaries see the same certificate on the server I am trying to connect to as my browser does. Seems secure to me: no external CA involved, the certificate on the web server can be self-signed, and a MITM attack would need to hack two or more external servers to be successful. How does TACK fit in all of this?
I remember reading something about that before. I just had a quick look around and, while I don't believe I have fully grokked the concept, it would seem to me that Telehash is solving a different, but related, problem.
The web, as a technology is probably not going anywhere for a few more decades at least - people have gotten very used to opening up a web browser - very few actually understand the technology beneath.
The CA/DNS issue is one based solely around them - can I type the domain name I saw on the TV / my friend gave me / I heard about into a web browser and (these days) have it direct me (securely) to the page where I can do business?
Telehash seems to fit in on another level. Perhaps one which we are heading towards - a world of machines securely finding and communicating with each other to achieve a goal set for them by some human actor.
This space is becoming more crowded and no good contender has emerged - and I think there is a good reason: they are either too radical, and so can't find a footing, or they are too conservative.
The documentation is slightly lax, but I feel Telehash is the latter - it doesn't seem to be re-solving any problems that are already solved:
* Space/Storage/Data Transfer - I don't care what anyone says, the blockchain model is simply not scalable; any system where a full client has to hold onto/download gigabytes of information is a non-starter for me.
But still, in any new system - hopefully decentralised - we need to distribute information. Any kind of system we build must be tolerant of partitioning - I think the solution to this is injecting some trust (à la Convergence).
* Speed - Computers work in nanoseconds, the web currently operates in seconds (some sites in milliseconds) - we can't beat the speed of light, but we can certainly start removing the cruft from our communications - HTML, XML, JSON, CSV - are all formats designed for people. We need tools that let us manipulate formats designed for machines.
Our networking protocols are like this as well - as much as people hate ASN.1, it solved some problems decades ago, allowing the phone system to scale on just duct tape and WD-40.
* Power - Blockchain bashing time again - we live in a world of limited, expensive power. We are getting much better at producing low power devices, people like wireless devices. Why should our networks be so power-hungry?
Just to be clear, Telehash is a protocol, not an application. The bulk of the documentation is on Github, and so far it's mostly for people implementing the protocol in different languages.
There's no blockchain involved in Telehash. It accommodates various cipher sets, including one suitable for ultra-low-power devices (there's a partially working implementation for Arduino). And you're correct, it isn't really aimed at enabling anything like trusting a URL from a television commercial.
Telehash is conservative in the sense that it solves useful problems, even within the current DNS infrastructure. No one's currently doing this, but you could easily map a DNS name to a Telehash address. But it also offers global resilience to partitioning, because the logical mesh can operate on any lower level network transport.
I like the multiple notary model of Convergence, but I think any of these trust models still need to separate the "human memorable names" component.
I was mixing a number of different criticisms of various technologies in my post... I never meant to imply that Telehash has a blockchain.
I guess I still don't understand the point of Telehash, even having read through the documentation. "Establishing private communication channels" is definitely a big problem, one with a huge threat model, and the solution is probably multi-faceted - I don't see where a system like Telehash fits in vs. something like Tor or I2P, for example - does anonymity fit into the threat model?
Before dragging this thread off the page I will follow up with an email. :)
Telehash's design may simplify the future design of Tor-like protocols, but anonymity is not an intended core feature.
Partition resistance is probably the highest priority. If any possible insecure network path exists, encrypted communication between endpoints should also be possible (and automatic).
sayI appears to be the directory service designed for MinimaLT / Ethos. CurveCP looks like it fits in the same use case as MinimaLT. That's where I'd say Telehash lives, too (but I've only skimmed any of these papers so far).
Telehash started out life as a more generalized global DHT-for-your-apps design circa 2010, and the spec has since evolved significantly to include the same kind of wire-level crypto.
There is one blockchain. The security of the blockchain requires everyone working very hard to maintain it - while it is made out of many parts it is 1 entity (like an ant colony).
Contrast with something like Convergence, where, while they share a common protocol (maybe...not necessarily) each part is responsible for itself, and not tied to any particular larger whole.
And do you think that makes it inherently less secure than a "free-for-all" system? I think the point of the blockchain is to remove trust and become trustless, while the point of something like this is to keep the trust system, but actually give you some choice of who you trust. It seems a little better, but I think trustless authentication (as in no 3rd party required) would be preferred.
Trustless is great for so many things but try answering this question in a trustless environment: "Before I give you my credit card info, how do I know you are who you say you are?"
You can know that you are talking to the same named digital identity that you think you are talking to without trust; that's a significant amount of the value of Namecoin. Validating that a digital identity is tied to a specific real world identity is a separate problem.
> Validating that a digital identity is tied to a specific real world identity is a separate problem.
But it isn't for the main areas of SSL usage (e-commerce, ensuring your passwords are sent to the right party, etc). Those require trust. I don't know how you get around that.
I.e. I can imagine the concern being that X.509 ties together validating identity with public key infrastructure but since one use of a public key is to validate identity I am not convinced that is a bad thing, and to be honest, I can't see a trustless alternative for most of the current uses.
I can imagine many better alternatives to X.509 (anything whose name is a letter, a dot, and a three-digit number is OSI legacy crap), but I don't see how to get rid of the identity-vouching aspect of it.
As a general rule, people care that their connection is secure, because they've been told to worry about people stealing their card numbers on insecure sites. They've generally established trust in other ways - more commonly, they simply trust it because they've heard about it elsewhere or it ranks highly on Google, and they use a trusted payment provider such as Paypal.
Most people honestly don't go to the effort of verifying that a certificate matches the real-world identity they think it does. It's difficult, especially with smaller stores that don't use EV certificates.
For cases where people think third-party attestation is a necessary thing for their purposes, frankly, we have nothing better than the CA model right now; but that can easily be integrated with Namecoin, allowing for only those who need it to use it, and the rest to have access to secure communications and proofs of digital identity without having to pay up.
There's been plenty of action, but you can't turn the whole world on a dime.
Namecoin is fantastic in theory, but has the fatal flaw of using Bitcoin: the fastest number cruncher wins. Some would argue that the strength of Bitcoin's tech is that numerous currencies with different genesis blocks can flourish. That doesn't get us anywhere with naming, though.
Dead horse flog: the CA model's problem is that you can't do federated (global) naming and federated trust in the same system.
What's the issue with that? It's reasonable to assume that the good number crunchers will always have more power than the bad number crunchers, and if that assumption ever fails, it's easy to detect and we're simply back to CA levels of security.
Philosophically, "good number crunchers" just means "tyranny of the majority."
Environmentally, that number crunching is a colossal waste of energy. We don't need to base our entire economy on that kind of energy footprint just because we occasionally want to make anonymous global barter convenient.
The migration away from the CA model is called "certificate pinning". Chrome uses it for high-value sites, and you use it whenever you ssh somewhere and the key's fingerprint is in your .ssh/known_hosts file.
>The migration away from the CA model is called "certificate pinning".
TOFU/POP is not an effective model for the web. There are simply too many sites for it to be useful. It's pretty much an everyday occurrence that I go to a site I've never been to before, and certificate pinning won't help at all there.
First, "TOFU/POP" has a real name; it's "key continuity". Second, certificate pinning as implemented in Chrome doesn't depend directly on key continuity. Third, key continuity destroys the incentive to attack sites by compromising CAs, because even if you're hitting a site for the first time, many of the 10,000 other people hitting it from the same browser at around the same time aren't, and they'll detect the bogus cert. That only has to happen once for Google to put a gun to the rogue CA's temple.
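To make the mechanics concrete, here is a rough sketch of the pin check as pinning systems in this style (Chrome's preloaded pins, and later HPKP) perform it - hashing the SubjectPublicKeyInfo rather than the whole certificate, and accepting the chain if any link matches a pin (helper names here are mine, not Chrome's):

```python
import hashlib

def chain_matches_pins(chain_spkis_der: list, pinned_sha256_hex: set) -> bool:
    # Pins are hashes of the public key (SPKI), not the certificate, so a
    # site can rotate certificates freely as long as it keeps (a backup
    # of) its key pair. The connection is accepted if ANY cert in the
    # presented chain -- leaf, intermediate, or root -- carries a pinned
    # key, which is how pinning an intermediate or root also works.
    return any(
        hashlib.sha256(spki).hexdigest() in pinned_sha256_hex
        for spki in chain_spkis_der
    )
```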
Well, what Google does is it gets a list of revoked certs from CA's, decides which ones are "really important" and sends those to the browser. So yes, in effect, Google decides which certificates are revoked. It's all covered in the article.
No. You're not following me. You think I'm describing agl's point. I'm not. I'm saying that beyond CRLsets, certificate pins also allow Google to detect misbehaving CAs. CAs have power only to the extent that Google allows them to have power by keeping them in Chrome's root CA key store. Google can pick among most of the current CA's and put them out of business on a whim.
> certificate pinning as implemented in Chrome doesn't depend directly on key continuity.
But it's unsuitable for the entirety of the web. You can't hardcode all certificate fingerprints of the whole internet inside the browser.
>> The migration away from the CA model is called "certificate pinning".
> key continuity destroys the incentive to attack sites by compromising CAs
We need to ELIMINATE CAs (CA as in some third party (Google, VeriSign, GoDaddy, ...) who you have to trust). The whole concept of trusting a CA is broken, and pinning does nothing to address that, at least not in the proposed TACK implementation.
Yes, but TACK, even if it's a big step forward, does not address the main problem: we need to get rid of a central authority we have to trust. And TACK does nothing to address that. And neither does Certificate Transparency as proposed by Google.
I really feel the correct step forward is Convergence or Perspectives. If just browser vendors would jump in, we could use it right away. Mozilla/Google could set up a few notaries and set them as trusted by default in the browser. They choose the CAs they put in our browsers anyway, so we trust them already. That trust could be implemented with notaries instead of the CA model, so if someone wants to setup their own notaries they can.
What's your take on this? I value your opinion on security matters.
By moving towards decoupling the CAs from the Internet trust model, TACK is a step towards getting something like Convergence bootstrapped. Once we accept that the CAs are a utility player and not the ultimate arbiter of security, it's not hard to get to a place where we can start verifying "pins" with sites run by EFF or ACLU.
The biggest security problem on the Internet isn't protocols and it isn't cryptography. It's that the UX the browsers have for managing/configuring Internet trust hasn't changed since the late 1990s, and it's buried 3-4 levels deep in the "no user serviceable parts" section of the config UI. There are a lot of very productive things you could do for Internet security simply by revamping that UX, without making a single wire-level change to the TLS or HTTP protocols.
If Namecoin is anywhere near as insecure as Bitcoin, it's a nonstarter. Yes, I know the cryptography underlying Bitcoin is secure, but as a matter of practical fact, Bitcoin itself as an end-user technology is hopelessly insecure. It's one thing having an endless stream of people waking up to find their bitcoins are irrevocably gone because someone hacked the computer, but we can't have domain names being irrevocably lost in the same way.
Does Namecoin actually work like that? If so, is there a similar alternative that doesn't?
I've wondered many times why OCSP isn't distributed the way DNS is. When we talk about websites, surely there's no more than one certificate per hostname (or fewer, with wildcards). I don't think we're talking here about something impossible to do or not feasible with our current technology and computing power.
Also, certificate "whitelisting" could be a part of the DNS protocol itself (return the IP address of the requested hostname and the hash of its current, valid certificate).
Just to clarify: OCSP is distributed, but I can't ask my local ISP OCSP server about your certificates. I have to ask your OCSP server about your certificates.
It seems the only problem with hard-fail is the risk of DoS attacks by targeting OCSP servers. However, if you include OCSP stapling you won't be affected. So a solution may be to encourage all users to enable revocation checking with hard-fail, and all servers to support OCSP stapling.
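On the server side, enabling OCSP stapling is already a small configuration change; a typical nginx fragment looks like this (file paths are placeholders):

```nginx
# Staple a CA-signed OCSP response into the TLS handshake so clients
# don't have to contact the CA's responder themselves.
ssl_stapling on;
ssl_stapling_verify on;                      # verify the response before stapling it
ssl_trusted_certificate /path/to/chain.pem;  # issuer chain used for verification
resolver 127.0.0.1;                          # nginx needs a resolver to reach the responder
```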
It's not the Internet, just the CA system. There are better systems for handling trust out there, for example, people have been signing each other's PGP keys at key signing parties for decades.
Ok, so it is just the portion of the internet that involves purchasing things with credit cards and requiring passwords to access sites. The rest of the internet is just fine.
Great. I thought for a moment that the commercial basis of the internet might be in danger. Now to determine what percentage of the internet is not dependent on the CA system.....
I am thinking that a HSTS option enabling hard-fail OCSP plus OCSP stapling is probably a good idea, though probably less secure than putting it in the certificate.
Let me get this straight. Sites across the internet are (hopefully) revoking their certificates and issuing new ones to address Heartbleed, but Mr. Langley is suggesting that we shouldn't check for revoked certificates because it might not do anything and it's slow?
Sorry, but after the last few weeks I'll happily accept a little slowness for the security revocation checking provides in the cases where it does work, even if it's not 100% of the cases.
> That's why I claim that online revocation checking is useless - because it doesn't stop attacks.
Doesn't mean there are "no" cases where it works. It just means any attacker dedicated enough can work around the CRLs.
I don't see any reason why one should throw the baby out with the bathwater. In this case, I just see Chrome guilty of FUD and hiding behind an intractable problem to justify their incorrect position.
Well, his argument is also that the attacker can easily circumvent it, which is true, but it still makes it slightly harder to do, because the attacker needs to remember it.
Well, this is what "security theater" is. If you said that exact thing in the context of a TSA screening program there would be no one here going "yes, that makes perfect sense", and it's even easier for network attackers; they have to fix their attack scripts once and they work for good until the next countermeasure.
The article gives two reasons for why 'soft-fail' is required: Captive-portals, and OCSP server failure.
To deal with captive portals: have an SSL signed 'subdomain.google.com/you_are_on_the_internet' site/page that Google Chrome can use to check to see if it's captive or not. If it's captive, enable soft-fail. If internet access is available, set to hard-fail.
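That check is essentially what Chrome already does for portal detection - its probe hits an endpoint that returns an empty 204. A sketch of the proposed policy, with the probe URL and function names as illustrative assumptions:

```python
import urllib.request

PROBE_URL = "http://www.gstatic.com/generate_204"  # Chrome-style connectivity probe

def probe_status(url=PROBE_URL, timeout=3):
    # A captive portal intercepts the request and answers with its own
    # login page (a 200, after any redirects) instead of the empty 204.
    try:
        return urllib.request.urlopen(url, timeout=timeout).status
    except OSError:
        return None  # no connectivity at all

def revocation_mode(status):
    # Only hard-fail OCSP checks when we know we're on the open internet;
    # otherwise fall back to soft-fail so the portal login page can load.
    return "hard-fail" if status == 204 else "soft-fail"
```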
Websites these days are complex, with many (digital) moving parts - the database server(s), the static image server(s), dynamic response server(s), gateway server, probably a memcache server or something similar. If any one of those goes down, the site is unusable. Why then, should the OCSP server going down be considered any differently? Is a black-hat rented bot-net running a DDoS going to care if it's the main gateway server or the OCSP server?
But let's say we do consider disabled OCSP servers to be a client-side issue. Google could query and cache the OCSP server status, either with OCSP stapling or via some side-channel they build into Google Chrome.
The combination of both would allow hard-fail to be an option in Google Chrome.
Why not hard-fail by default and give the user the option to ignore/override it? Similar to the way other certificate warnings are shown to the end-user.
I guess that's true when a hard-fail causes the connection to be refused immediately by the client with no user input. In that case a DoS on the OCSP servers breaks things badly.
However what I meant to suggest is a third option. Something like hard-fail with a latch. The client should opt to fail but give the user the choice to proceed.
This would seem more desirable than the current soft-fail implementations, which seem to be entirely silent to the end user.
Users make terrible security decisions. ~95% of users click through certificate failure pages, ~99% of users don't notice if a website transparently downgrades to HTTP. Delegating the choice, which would be borderline impossible to explain to the user is another way of saying 'Always say yes to proceed'.
And I'm sure you are wrong.
I only see a Validation option(not Revocation), which has 2 more options on how to check OCSP, and those are both checked using the defaults.
The correct path on my OS is Edit -> Preferences -> Advanced -> Certificates -> Validation -> OCSP options(both checked)
The author appears to entirely ignore attack vectors where the malicious party can record but not modify/block traffic.
Edit: I get it, I missed that for sites where the key has been changed the stolen key no longer allows such eavesdropping. Thank you to yuhong for helping point this out rather than just laughing at my ignorance while pushing me down the page.
I'm having a lot of trouble getting past: "Certificates bind a public key and an identity (commonly a DNS name) together."
X.509 certificates bind a public key and a human recognizable string (a "common name") together to create a verifiable digital identity. Over-simplified, X.509 is about solving the "I'm Spartacus" problem.
CRLs solve the "He was Spartacus" problem. I agree with the broad conclusion that CRLs aren't effective for human trust, but they are perfectly reasonable for machine trust.
Why didn't the author mention Kerberos? The default lifetime of a Kerberos ticket is designed around humans: roughly the length of a work shift in front of a computer terminal.