We don't. If we did, we'd have it by now. It's been over 25 years of making appeals like this.
It's a fun site! I'm not entirely sure why the protagonist is a green taco, but I can see why a DNS provider would make a cartoon protocol explainer. It's just that this particular protocol is not as important as the name makes it sound.
It is important. This is unfortunate rhetoric that is harming the safety of the internet.
"For instance, in April 2018, a Russian provider announced a number of IP prefixes (groups of IP addresses) that actually belong to Route53 Amazon DNS servers."
By BGP hijacking Route53, attackers were not only able to redirect a website to different IPs, globally, but also generate SSL certificates for that website. They used this to steal $152,000 in cryptocurrency. (I know I know, "crypto", but this can happen to any site: banking, medical, infrastructure)
Also, before you bring it up: RPKI doesn't solve this either, although it is a step in the right direction. DNSSEC is a step in the right direction as well.
The idea is important. What it aims to protect is important. The current implementation is horrible: far too complex and riddled with so many landmines that no one wants to touch it.
A common reason, if not the vast majority of cases, is that people mix up which key they publish and which key they are actually using. I don't doubt there are a lot of things they could do to improve the protocol, but this very common problem is fairly difficult to solve on a protocol level.
I remember back in the day when people were discouraged from using encrypted disks because of what could happen if a user lost their password. No disk encryption algorithm can solve the issue if the user does not have the correct password, and so the recommendation was to not use it. Nowadays people usually have TPMs or key-management software to manage keys, so they can forget the password and still access their encrypted disks.
DNSSEC software is still not developed enough to automatically include basic tests and verification tools that make sure people don't simply mix up keys; the assumption is that people write those themselves. Too often this happens after incidents rather than before (I've heard this in so many war stories). It also doesn't help that DNS is full of caching and cache invalidation. A lot of the insane step-by-step plans come from working around TTLs, the lack of verification and basic tooling, and the fact that much of the work is done manually.
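A minimal sanity check of the kind this tooling could ship is easy to sketch. The snippet below is a hypothetical helper, not part of any DNSSEC package: it computes the RFC 4034 Appendix B key tag from DNSKEY RDATA, so you can compare the tag of the key you are actually serving against the tag in the DS record your parent published before rolling anything.

```python
def dnskey_key_tag(rdata: bytes) -> int:
    """Compute the RFC 4034 Appendix B key tag over the DNSKEY RDATA
    (flags | protocol | algorithm | public key), for algorithms != 1."""
    acc = 0
    for i, byte in enumerate(rdata):
        # Even-indexed bytes form the high octet of each 16-bit word.
        acc += byte << 8 if i % 2 == 0 else byte
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

def keys_match(served_dnskey_rdata: bytes, ds_key_tag: int) -> bool:
    """True if the key you are serving carries the tag your parent's DS names."""
    return dnskey_key_tag(served_dnskey_rdata) == ds_key_tag
```

Comparing key tags catches the "published one key, signing with another" mix-up before the parent's DS goes live; tags can collide, so a full digest comparison remains the authoritative check.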
> No disk encryption algorithm can solve the issue if the user does not have the correct password, and so the recommendation was to not use it.
The problem is real, but it's the framing that makes it wrong.
No disk encryption algorithm can simultaneously protect and not protect something encrypted. What you're missing is the protocol and practices around that, and those are far less limited.
There is heaps of encryption around these days, and there are people losing access to their regular keys, yet there are procedures that recover access to their data without entirely removing the utility of having encrypted data/disks.
A TPM is absolutely not a reliable way to store your key. Think about how often you get asked for a BitLocker recovery code, and imagine if every time that happened, you lost all your data.
What parts do you agree about? Someone making an argument that we should return to the drawing board and come up with a new protocol, one that doesn't make the "offline signers and authenticated denial" tradeoffs DNSSEC makes, would probably be saying something everybody here agrees with --- though I still don't think it would be one of the 5 most important security things to work on.
But the person you're replying to believes we should hasten deployment of DNSSEC, the protocol we have now.
I would love to go to back to the drawing board and solve the security pitfalls in BGP & DNS. I wish the organizations and committees involved did a better job back then.
Sadly, we live in this reality for now, so we do what we can with what we have. We have DNSSEC.
You understand that it is a little difficult for people to take seriously a claim that you're interested in going back to the drawing board while at the same time very stridently arguing that hundreds of millions of dollars of work should go into getting a 1994 protocol design from 4% deployment to 40% deployment. The time to return to the drawing board is now.
I don't read that reply as them saying we should hasten deployment of DNSSEC. If that was the intention of the comment then no, I don't agree with that aspect of it.
I am saying I agree with the statement "I am saying it is dishonest to discount the real security threat of not having DNSSEC."
I believe we do need some way to secure/harden DNS against attacks, we can't pretend that DNS as it stands is OK. DNSSEC is trying to solve a real problem - I do think we need to go back to the drawing board on how we solve it though.
They definitely believe we should hasten deployment of DNSSEC --- read across the thread. For instance: Slack was taken down for half a day owing to a deployment of DNSSEC that a government contract obligated them to undertake, and that commenter celebrated the contract.
It's fine that we all agree on some things and disagree on others! I don't think DNS security is a priority issue, but I'm fine with it conceptually. My opposition is to the DNSSEC protocol itself, which is a dangerous relic of premodern cryptography designed at a government-funded lab in the 1990s. The other commenter on this thread disagrees with that assessment.
slightly later
(My point here is just clarity about what we do and don't agree about. "Resolving" this conflict is pointless --- we're not making the calls, the market is. But from an intellectual perspective, understanding our distinctive positions on Internet security, even if that means recognizing intractable disputes, is more useful than just pretending we agree.)
The eventual plan is to limit certs to 48 hours (AFAIR), right now they're already allowing 6-day certs: https://letsencrypt.org/2025/02/20/first-short-lived-cert-is... In this scenario, if Let's Encrypt goes down for just a couple of days, a lot of certs will expire.
There are also operational risks, as Let's Encrypt has to have their secret key material in close proximity to web-facing services. Of course, they use HSMs, but it might not be enough of a barrier for nation-state level attackers.
The offline signing feature of DNSSEC allows the root zone and, possibly, the TLDs to be signed fully offline.
That's why in my ideal world I want to keep DNSSEC as-is for the root zone and the TLD delegation records, but use something like DoH/DoT for the second-level domains. The privacy impact of TLD resolution is pretty much none, and everything else can be protected fully.
That is not why DNSSEC has offline signers. DNSSEC has offline signers because when the protocol was designed, its authors didn't believe computers would be able to keep up with the signing work. Starting sometime in the middle of the oughts, people started to retcon security rationales onto it, but that's not the purpose of the design.
Do you have links for that? I don't really doubt it, since the work was done in the mid-90s. But I'm genuinely curious about the early history of failed protocols (like IPv6 and DNSSEC), and I read most of the early archived discussions about IPv6.
Yes, somewhere I do; I wrote a complete history of the protocol, including archives I found of 90s-vintage mailing lists. I'll have to dig it up, though.
The proposal is to make LE certs 9 days long or something, which means that if LE is down for even a short time, millions of certs will expire.
You don't wait till the last second to renew. 9-day certificates would mean renewing after 7 days, for example. And at that point you could have 2 or 3 ACME-compatible services configured as backups.
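The scheduling math behind this is simple. Here is a hedged sketch of a renewal loop with fallback CAs (the two-thirds threshold and the callback shape are illustrative choices, not any particular ACME client's behavior):

```python
from datetime import datetime, timedelta, timezone

# Renew once two-thirds of the certificate's lifetime has elapsed, so a
# 9-day cert is renewed around day 6, leaving ~3 days of slack if the
# primary CA is down.
RENEW_AFTER_FRACTION = 2 / 3

def needs_renewal(not_before: datetime, not_after: datetime,
                  now: datetime) -> bool:
    lifetime = not_after - not_before
    return now >= not_before + lifetime * RENEW_AFTER_FRACTION

def renew(order_funcs) -> str:
    """Try each configured ACME-compatible CA in turn; any one success
    keeps the site served with a valid cert."""
    errors = []
    for order in order_funcs:
        try:
            return order()
        except Exception as exc:  # real code would catch ACME errors only
            errors.append(exc)
    raise RuntimeError(f"all CAs failed: {errors}")
```

With a wide renewal window, a multi-day outage of one CA only matters if every configured fallback is down at the same time.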
That's ok, this scales with requirements / experience / money on the line. Most people won't care about a day of downtime that much. And those who really don't know anything about SSL will be using a platform solving this for them.
There are two things mixed up. "We need secure DNS" != "we need DNSSEC".
There is a huge demand for securing DNS-related things, but DNSSEC seems to be a poor answer. DoH is a somewhat better answer, whatever shortcomings it may have, and it's widely deployed.
I suspect that a contraption that would wrap the existing DNS protocol into TLS in a way that would be trivial to put in front of an existing DNS server and an existing DNS client (like TLS was trivial to put in front of an HTTP server), might be a runaway success. A solution that wins is a solution which is damn easy to deploy, and not easy to screw up. DNSSEC is not it, alas.
Yes. But DoH was built in a way which is reasonably easy to adopt, and offers obvious benefits, hence it was adopted. DNSSEC lacks this quality, and I think this quality is essential.
TLS internally does not depend on a domain in the DNS sense, it basically certifies a chain of signatures bound to a name. That chain can be verified, starting from the root servers.
The problem is more in the fact that TLS assumes creation of a long-living connection with an ephemeral key pair, while DNS is usually a one-shot interaction.
Encrypting DNS would require caching of such key pairs for some time, and refreshing them regularly but not too often. Same for querying and verifying certificates.
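The caching described above is essentially a TTL cache keyed by server. A toy sketch, where the `refresh` callback stands in for an expensive real-world operation like a handshake or certificate fetch (all names here are made up for illustration):

```python
import time

class KeyCache:
    """Toy TTL cache for per-server key material: entries are reused
    until they expire, then refreshed via the supplied callback."""

    def __init__(self, ttl_seconds: float, refresh):
        self.ttl = ttl_seconds
        self.refresh = refresh
        self._entries = {}  # server -> (key, expires_at)

    def get(self, server: str, now=None):
        now = time.monotonic() if now is None else now
        entry = self._entries.get(server)
        if entry is None or now >= entry[1]:
            key = self.refresh(server)  # expensive: e.g. a new handshake
            entry = (key, now + self.ttl)
            self._entries[server] = entry
        return entry[0]
```

The tuning problem the comment points at lives entirely in `ttl_seconds`: too short and every one-shot query pays the handshake cost, too long and revocation or key rotation lags.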
I'm sorry, this is just such an incredibly fine-tuned threat model for me to take it seriously.
You start with a BGP hijack, which lets you impersonate anybody, but assume that the hijacker is only so powerful as being able to impersonate a specific DNS server and not the server that the DNS server tells you about. You then use that specific control to get a CA to forge a certificate for you (and if the CA is capable of using any information to detect that this might be a forgery, the attack breaks).
And of course, the proposed solution doesn't do anything to protect against other kinds of DNS hijacking--impersonating somebody to the nameserver and getting the account switched over to them.
> I'm sorry, this is just such an incredibly fine-tuned threat model for me to take it seriously.
You claim it is fine-tuned, but it has happened in the real world. It is actually even better for attackers that it is "obscure", because that means it is harder to detect.
> but assume that the hijacker is only so powerful as being able to impersonate a specific DNS server and not the server that the DNS server tells you about.
Yes, all layers of the stack need to be secure. I am not making assumptions about the other layers - this thread is about DNS.
> if the CA is capable of using any information to detect that this might be a forgery
They are not. The only mitigation is "multi-perspective validation", which only addresses a subset of this attack.
> And of course, the proposed solution doesn't do anything to protect against other kinds of DNS hijacking
Yes, because other kinds of DNS hijacking are solved by HTTPS/TLS. If TLS and CAs are broken, nothing is secure.
> You claim it is fine-tuned, but it has happened in the real world.
Sure, but it seems like his comment is still responsive; if DNSSEC is deployed, they perform a BGP hijack & can impersonate everyone, and they just impersonate the server after the DNS step?
If that's the threat model you want to mitigate, it seems like DNSSEC won't address it.
> and they just impersonate the server after the DNS step?
Yes, there are different mitigations to prevent BGP hijacking the webserver itself. Preventing a rogue TLS certificate from being issued is the most important factor. CAA DNS records can help a bit with this. DNS itself however is easiest solved by DNSSEC.
There are a lot of mitigations against BGP hijacks that I won't go into here. None are 100%, but they are good enough to ensure multi-perspective validation refuses to issue a TLS certificate. The problem is that if those same mitigations are not deployed on your DNS servers (or you outsource DNS and your provider has not deployed them), they are a weak link.
I don't see you responding to the question. You're fixating on protections for DNS servers, because that is the only circumstance in which DNSSEC could matter for these threat actors, not because they can't target the address space of the TLS servers themselves (they can), but because if you concede that they can do this, DNSSEC doesn't do anything anymore; attackers will just leave DNS records intact, and intercept the "authentic" server IPs.
So far your response to this has been "attackers can't do this to Cloudflare". I mean, stipulated? Good note? Now, can you draw the rest of the owl?
I am focusing on DNS because this thread is about DNSSEC. The topic of doing this to the TLS servers themselves is a tangent not relevant to this thread.
No, I'm sorry, that's not the case. You're focusing on DNS servers as the target for BGP4 attacks because if you didn't, you wouldn't have a rebuttal for the very obvious question of "why wouldn't BGP4 attackers just use BGP4 to intercept legitimate ALPN challenges".
> You start with a BGP hijack, which lets you impersonate anybody, but assume that the hijacker is only so powerful as being able to impersonate a specific DNS server and not the server that the DNS server tells you about.
An attacker impersonating a DNS server still won't be able to forge the DNSSEC signatures.
An attack against BGP where the attacker takes over traffic for an IP address isn't at all prevented by DNSSEC.
The sequence there is:
1. I hijack traffic destined for an IP address
2. Anything whose DNS resolves to that IP, regardless of whether or not they use DNSSEC, starts coming to me
In this model, I don't bother trying to hijack the IP of a DNS server: that's a pain because with multi-perspective validation, I plausibly have to hijack a bunch of different IPs in a bunch of different spots. So instead I just hijack the IP of the service I want to get a malicious cert for, and serve up responses to let me pass the ALPN ACME challenge.
Sure. But you won't have a TLS certificate for that address, if the host uses a DNS-based ACME challenge and prohibits the plain HTTP challenge: https://letsencrypt.org/docs/caa/
Ok, so deploying DNSSEC would specifically solve the threat model of an attacker who can perform a BGP hijack of IP addresses, but doesn’t want to hijack multiple DNS server IPs because that’s more work, for a domain that has CAA records and disallows validation by ALPN.
That feels like a pretty narrow gain to justify strapping all this to all my zones and eating the operational cost and risk that if I mess it up, my site stops existing for a while
> but doesn’t want to hijack multiple DNS server IPs because that’s more work
No. I'm saying that you can _not_ hijack a DNSSEC-enabled DNS name, even if you have a full control over the network.
The DNSSEC public key for the domain is authenticated by a DS record in the top-level domain zone. That zone in turn is protected by a signature made with the key from the root zone.
I don’t think you’re grokking what a BGP hijack looks like. The attacker steals traffic destined to an IP address at the routing layer. They aren’t hijacking a name, they’re hijacking traffic to the IP that name resolves to.
In the case of attacking the ALPN ACME validation, they hijack the IP address of the site they want a TLS certificate for: example.org resolves to 1.2.3.4, I hijack traffic to 1.2.3.4, the DNS flow is unchanged, the verification traffic comes to me, and I get a certificate for example.org
The DNS server hijack works the same way: I don’t try to change what ns1.example.org resolves to. I hijack traffic to the real IP that it resolves to and serve up responses for the site I want to hijack saying “yea, these are the records you want and don’t worry, the DS bit is set to true”.
Though it’s worth remembering that both DNS and BGP attacks are basically a rounding error compared to the instances of ATO-based attacks
I know exactly how BGP works; I actually implemented a BGP route reflector a long time ago. My home has two DIA circuits and my home network is announced via BGP.
> In the case of attacking the ALPN ACME validation, they hijack the IP address of the site they want a TLS certificate for: example.org resolves to 1.2.3.4, I hijack traffic to 1.2.3.4, the DNS flow is unchanged, the verification traffic comes to me, and I get a certificate for example.org
As I said, a CAA record in DNS will prohibit this, instructing the ACME CA to use the DNS challenge.
> I hijack traffic to the real IP that it resolves to and serve up responses for the site I want to hijack saying “yea, these are the records you want and don’t worry, the DS bit is set to true”.
And then your faked DNS replies will have a wrong signature because you don't have the private key for the DNS zone.
And DNSSEC-validating clients will detect this because the top-level domain will have a DS record for the hijacked domain. You can't fake the replies from the top-level domain's DNS either, because it in turn has a DS record in the root zone.
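That chain of trust is mechanical enough to sketch. The toy model below is not a real validator (real DS records digest the owner name plus the full DNSKEY RDATA, and everything is covered by RRSIG signatures), but it shows why a forged reply fails: each zone's key must hash to the DS record held by its parent, terminating at the statically configured root trust anchor.

```python
import hashlib

def ds_digest(dnskey: bytes) -> bytes:
    # Stand-in for a DS record's digest of the child zone's key.
    return hashlib.sha256(dnskey).digest()

def validate_chain(keys: dict, ds_records: dict,
                   root_anchor: bytes, zone: str) -> bool:
    """Walk from `zone` up to the root, checking each zone's key
    against the DS record published in its parent zone."""
    # The root key must match the locally configured trust anchor.
    if ds_digest(keys["."]) != root_anchor:
        return False
    labels = zone.rstrip(".").split(".")
    for i in range(len(labels)):
        child = ".".join(labels[i:]) + "."
        if ds_records.get(child) != ds_digest(keys[child]):
            return False  # delegation broken: parent doesn't vouch for key
    return True
```

An attacker who hijacks traffic to a nameserver can hand back any records they like, but cannot produce a key whose digest matches the DS record held one level up.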
> It is important. This is unfortunate rhetoric that is harming the safety of the internet.
DNSSEC was built for exactly one use case: we have to put root/TLD authoritative servers in non-Western countries. It is simply a method for attesting that a mirror of a DNS server is serving what the zone author intended.
What people actually want and need is transport security. DNSCrypt solved this problem, but people were bamboozled by DNSSEC. Later people realized what they wanted was transport security and DoH and friends came into fashion.
DNSSEC is about authentication and integrity. DNSCrypt/DoH are about privacy. They solve completely different problems and have nothing to do with one another.
If you have secure channels from recursers all the way back to authority servers (you don't, but you could) then in fact DoH-like protocols do address most of the problems --- which I contend are pretty marginal, but whatever --- that DNSSEC solves.
What's more, it's a software-only infrastructure upgrade: it wouldn't, in the simplest base case, require zone owners to reconfigure their zones, the way DNSSEC does. It doesn't require policy decisionmaking. DNS infrastructure operators could just enable it, and it would work --- unlike DNSSEC.
(Actually getting it to work reliably without downgrade attacks would be more work, but notably, that's work DNSSEC would have had to do too --- precisely the work that caused DANE-stapling to founder in tls-wg.)
I'd love to see DoH/DoT that uses a stapled DNSSEC-authenticated reply containing the DANE entry.
There's still a chicken-and-egg problem with getting a valid TLS certificate for the DNS server, and limiting DNSSEC just for that role might be a valid approach. Just forget that it exists for all other entry types.
Stapling is dead: nobody could agree on a threat model, and they ultimately ended up at an HPKP-style cached "this endpoint must staple DANE" model that TLS people rejected (reasonably).
But if you have DoH chaining all the way from the recurser to the authority, it's tricky to say what stapled DANE signatures are even buying you. The first consumers of that system would be the CAs themselves.
BGP attacks change the semantic meaning of IP addresses themselves. DNSSEC operates at a level above that. The one place this matters in a post-HTTPS-everywhere world is at the CAs, which are now all moving to multi-perspective validation.
As you should be aware, multi-perspective validation does not solve anything if your BGP hijack is accepted to be global. You will receive 100% of the traffic.
DNSSEC does greatly assist with this issue: It would have prevented the cited incident.
1. Hijack the HTTP/HTTPS server. For some IP ranges this is completely infeasible; hijacking a Cloudflare HTTP/HTTPS range, for example, would be almost impossible for technical reasons I won't enumerate here.
2. Hijack the DNS server. Because there's a complete apathy towards DNS server security (as you are showing) this attack is very frequently overlooked. Which is exactly why in the cited incident attackers were capable of hijacking Amazon Route53 with ease. *DNSSEC solves this.*
If either 1 or 2 work, you have yourself a successful hijack of the site. Both need to be secure for you to prevent this.
In summation, you propose a forklift upgrade of the DNS requiring hundreds of millions of dollars of effort from operators around the world, introducing a system that routinely takes some of the most sophisticated platforms off the Internet entirely when its brittle configuration breaks, to address the problem of someone pulling off a global hijack of all the Route53 addresses.
At this point, you might as well just have the CABForum come up with a new blessed verification method based on RDAP. That might actually happen, unlike DNSSEC, which will not. DNSSEC has lost signed zones in North America over some recent intervals.
I do like that the threat model you propose is coherent only for sites behind Cloudflare, though.
"I do like that the threat model you propose is coherent only for sites behind Cloudflare, though."
The threat model I proposed is coherent for Cloudflare because they have done a lot of engineering to make it almost impossible to globally BGP hijack their IPs. This makes the multi-perspective validation actually help. Yes, other ISPs are much more vulnerable than Cloudflare, is there a point?
You are not saying DNSSEC doesn't serve a real purpose. You are saying it is annoying to implement and not widely deployed as such. That alone makes me believe your argument is a bit dishonest and I will abstain from additional discussion.
No, I'm saying it doesn't serve a real purpose. I've spent 30 years doing security work professionally and one of the basic things I've come to understand is that security is at bottom an economic problem. The job of the defender is to asymmetrically raise costs for attackers. Look at how DNS zones and certificates are hijacked today. You are proposing to drastically raise defender costs in a way that doesn't significantly alter attacker costs, because they aren't in the main using the exotic attack you're fixated on.
If we really wanted to address this particular attack vector in a decisive way, we'd move away, at the CA level, from relying on the DNS protocol browsers use to look up hostnames altogether, and replace it with direct attestation from registrars, which could be made _arbitrarily_ secure without the weird gesticulations DNSSEC makes to simultaneously serve mass lookups from browsers and this CA use case.
But this isn't about real threat models. It's about a tiny minority of technologists having a parasocial relationship with an obsolete protocol.
Yeah, the same for the rest.
Your fanboys are happy and the rest are just tired, because everyone who does not share your point of view has an invalid opinion.
>It’s full of rhetoric and bluster, appeals to authority and dismissal of arguments not from what he considers an authority, and when he runs out of arguments entirely, he stops responding.
Or his broken record commentary on how Signal absolutely needs to ask people for their mobile phone numbers in order to at all be able to provide a functional service, and how doing so does not at all provide Signal with a highly valuable social network map. Exact same story as soon as the arguments are dismantled.
Counterpoint: no it isn't, which is why virtually nobody uses it. Even the attack this thread centers on --- BGP hijacking of targeted DNSSEC servers to spoof CA signatures --- is a rounding error sidenote compared to the way DNS zones actually get hijacked in practice (ATO attacks against DNS providers).
If people were serious about this, they'd start by demanding that every DNS provider accept U2F and/or Passkeys, rather than the halfhearted TOTP many of them do right now. But it's not serious; it's just motivated reasoning in defense of DNSSEC, which some people have a weird stake in keeping alive.
> Counterpoint: no it isn't, which is why virtually nobody uses it. Even the attack this thread centers on --- BGP hijacking of targeted DNSSEC servers to spoof CA signatures
Wait, wait, wait. How can you hijack a DNSSEC server? Its keys are enrolled in the TLD, and you can't spoof the TLD server, because its keys in turn are enrolled in the root zone. And the root zone trust anchor is statically configured on all machines.
DNS is an industry with a race to the bottom in terms of quality and price, and registrars will generally avoid any costs in order to keep as much as possible of the small margin between what the TLD charges and the end-user price. For many registrars and domains the margin sits around $0-1 per domain per year, and the real profit comes from upsells and dark patterns that hike up the price until the customer manages to leave.
High-end registrars that focus on portfolios and brand management generally do offer whatever kind of authentication their customers want, and the price reflects that. Those customers are not just buying the service of a middleman sitting between the TLD and the domain owner.
You are again ignoring the fact that DNSSEC would have prevented a $152,000 hack. Yes, we are aware organizations are not always serious about security. For those that are though, DNSSEC is a helpful tool.
No, it isn't. It attempts and mostly fails to address one ultra-exotic attack, at absolutely enormous expense, principally because the Internet standards community is so path-dependent they can't take a bad cryptosystem designed in the mid-1990s back to the drawing board. You can't just name call your way to getting this protocol adopted; people have been trying to do that for years, and the net result is that North American adoption fell.
The companies you're deriding as unserious about security in general spend drastically more on security than the companies that have adopted it. No part of your argument holds up.
Citation? A BGP hijack can be done for less than $100.
"You can't just name call your way to getting this protocol adopted"
I do not care if you adopt this protocol. I care that you accurately inform others of the documented risks of not adopting DNSSEC. There are organizations that can tolerate the risk. There are also organizations that are unaware because they are not accurately informed (due to individuals like yourself), and it is not covered by their security audits. That is unfortunate.
Slack's house literally did burn down for 24 hours because of DNSSEC back in 2021.
When you frame the risk as "marginal benefit against one specific threat" Vs "removes us from the internet for 24 hours", the big players pass and move on. This is the sort of event the phrase "sev 1" gets applied to.
Some fun companies have a reg requirement to provide service on a minimum SLA, otherwise their license to operate is withdrawn. Those guys run the other way screaming when they hear things like "DNSSEC" (ask me how I know).
What percentage of the fortune 500 is served over DNSSEC?
Oh. I thought it burned down because of their engineers not having fully acquainted themselves with the tool before applying it. It's misguided to hold DNSSEC culpable for Slack's engineers' goof-up. Like advising people against ever going near scissors because they might run with one in their hands.
That is an extremely uncharitable take for an outage that involved two of the most poorly defined and difficult to (correctly) implement DNS features (wildcards and NSEC records) that the three major DNS operators mentioned in the Slack post-mortem (AWS, Cloudflare, and Google) all implemented differently.
IIRC, Slack backed out of their DNSSEC deployment because of a bug with wildcards and NSEC records (in Route53, not Slack), but the problem Slack subsequently experienced was not caused by that bug, but was instead caused by the boneheaded way in which Slack tried to back out of DNSSEC. I.e. Slack’s problem was entirely their own doing, and completely avoidable if they had had any idea of what they were doing.
Having read the post mortem, I disagree. Slack engineers did something dumb, under pressure during an outage. Even if they hadn't, they still would have been in a degraded state until they could properly remove their DNSSEC records and/or get the Route53 bug they hit fixed. In other words, they still would have had a 24+ hour outage, albeit with a smaller blast radius.
The design of DNSSEC is simply not fit for purpose for zone operators. It is far too easy to screw your zone up for far too marginal a benefit, to say nothing of the huge increase in CPU resource required to authenticate DNSSEC record chains.
The story for implementers is just as bad - the specifications are baroque, filled with lousy crypto and poorly thought-out options.
To give just one example, consider the NSEC3 iteration limit field. NSEC3 itself was designed mostly[0] to prevent zone enumeration when validating negative responses (which is trivial to perform with NSEC.) The iteration count was designed to give zone operators the ability to increase the cost to an attacker of generating a dictionary of nsec3 names[1]. Of course, a high iteration count also raises the cost to a non-malicious resolver of validating a negative response for a nsec3-enabled zone.
In good old DNSSEC fashion, the iterations field is a single number that is subject to a... wide variety of potential limits:
* 16 bits by the wire protocol
* 150 for 1,024 bit keys (RFC 5155 10.3[2])
* 500 for 2,048 bit keys (RFC 5155 10.3[2])
* 2,500 for 4,096 bit keys (RFC 5155 10.3[2])
* 0 (RFC 9276 3.2)
Why 0? It was noted -- after publishing the NSEC3 spec -- that high iterations just don't provide that much benefit, and come with a high cost to throughput. Appendix B of RFC 9276 shows a roughly 50% performance degradation with an iteration count of 100. So, RFC 9276 3.2 says:
Validating resolvers MAY also return a SERVFAIL response when processing NSEC3 records with iterations larger than 0.
Of course, their guidance to implementers is to set the limits a bit higher, returning insecure responses at 100 iterations and SERVFAIL at 500. That said, if you want to be maximally interoperable, as a zone operator, you should pretend like the iteration count field doesn't exist: it is standards compliant for a validating resolver to refuse an nsec3 response with more than a single hash round.
As I said, this is one example, but I'm not cherry picking here. The whole of the DNSSEC spec corpus is filled with incomprehensible verbiage and opportunities for conflicting interpretations, far beyond what you see in most protocol specs.
0 - also to reduce the size of signed top-level zones
1 - all NSEC and NSEC3 records, while responsive to queries about names that don't exist, consist of obfuscated names that do exist.
2 - According to the letter of the standard, the limits applied to the iterations field should be 149, 499, and 2,499. Implementations are inconsistent about this.
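The iterated hash itself is trivial (RFC 5155 section 5): SHA-1 of the owner name plus salt, then re-hashed with the salt `iterations` more times. The sketch below hashes a plain byte string rather than the canonical wire-format owner name real NSEC3 uses, but it shows why the iteration count is a direct multiplier on a validator's work:

```python
import hashlib

def nsec3_hash(name: bytes, salt: bytes, iterations: int) -> bytes:
    """Iterated NSEC3-style hash: H(name || salt), then
    H(digest || salt) repeated `iterations` more times."""
    digest = hashlib.sha1(name + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return digest
```

A validator has to compute this for every NSEC3 record covering a negative response, which is why a count of 100 roughly halves throughput (RFC 9276 Appendix B) and why the guidance ultimately settled on 0.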
IIUC, if Slack had done the correct thing, only wildcard DNS records (if any) would have been affected. They would certainly not have had a complete DNS blackout. I would classify that as significant.
> The story for implementers is just as bad - the specifications are baroque, filled with lousy crypto and poorly thought-out options.
I don’t care. So is almost every other standard, but until something better comes along, DNSSEC is what we have. Arguing that a working and implemented solution should not be used since it is worse than a non-existing theoretical perfect solution is both:
1. True
2. Completely and utterly useless, except as a way to waste everyone’s time and drain their energy.
> IIUC, if Slack had done the correct thing, only wildcard DNS records (if any) would have been affected
There's the problem - you DON'T understand. Straight from the post-mortem that you clearly have not read:
One microsecond later, app.slack.com fails to resolve with an ‘ERR_NAME_NOT_RESOLVED’ error:
[screenshot of error]
This indicated there was likely a problem with the ‘*.slack.com’ wildcard record since we didn’t have a wildcard record in any of the other domains where we had rolled out DNSSEC on. Yes, it was an oversight that we did not test a domain with a wildcard record before attempting slack.com — learn from our mistakes!
> I don’t care. So is almost every other standard,
Cool story. I do care. I'd like to see greater protection of the DNS infrastructure. DNSSEC adoption is hovering around 4%. TLS for HTTP is around 90%. At least part of that discrepancy is due to how broken DNSSEC is.
They could have done a quick fix by adding an explicit app.slack.com record. But instead they removed the DNSSEC signing from the whole domain, thereby invalidating all records, not just the wildcard ones.
> I do care.
I will care once something else comes around with any promise of being implemented and rolled out. Until then, I see no need to discourage the adoption of DNSSEC, or disparage its design, except when designing its newer version or replacement.
> I'd like to see greater protection of the DNS infrastructure. DNSSEC adoption is hovering around 4%.
I work at a registrar and DNS hosting provider for more than 10.000 domains. More than 70% of them have DNSSEC.
> They could have done a quick fix by adding an explicit app.slack.com record. But instead they removed the DNSSEC signing from the whole domain, thereby invalidating all records, not just the wildcard ones.
1) That would do nothing to fix resolvers that had already cached NSEC responses lacking type maps.
2) That presumes the wildcard record was superfluous and could have been replaced with a simple A record for a single or small number of records. Would love to see a citation supporting that.
3) That presumes the Slack team could have quickly identified that the problem with app.slack.com (and whatever other hosts resolve from that wildcard) was caused by the record being configured as a wildcard, and would have been resolved by eliminating the wildcard record. If you read the postmortem, it is clear they zeroed in on the wildcard record as being suspect, but had to work with AWS to figure out the exact cause. I doubt that was an instantaneous process.
Any way you slice it, there was no quick way to fully recover from this bug once they hit it, and my argument is that the design of DNSSEC makes these issues a) likely to happen and b) difficult to model ahead of time, while providing fairly marginal security benefit.
At this point, I really don't care if you agree or disagree.
> I will care once something else comes around with any promise of being implemented and rolled out.
Yeah. DNSSEC is going to be widely deployed any day now. The year after the year of Linux on the desktop.
> I work at a registrar and DNS hosting provider for more than 10.000 domains. More than 70% of them have DNSSEC.
Cool. There are, what, 750 million domains registered worldwide? We are at nowhere near 10% adoption worldwide, let alone 70%. Of the top 100 domains -- the operators you would assume would be the most concerned about DNS response poisoning -- *six* have turned DNSSEC on.
Internally at slack the general consensus was that dnssec was a giant waste of time and money from a security perspective. We did it for compliance to sell into the Federal govt and federal contractors.
I'm not sure what this has to do with anything I've said on this thread, but we don't have to keep pressing these arguments; I'm pretty satisfied with the case I've made so far.
And at least Let's Encrypt actually verifies DNSSEC before issuing certificates. IIRC it will become mandatory for all CAs soon. DNSSEC for a domain plus restrictive CAA rules should ensure that no reputable CA would issue a rogue cert.
"Most domains". Yes, it is possible that nobody bothers to DNS hijack your domains. Sadly I've worked for organizations where it did happen, and now they have DNSSEC.
I invite anybody who thinks this is a mic drop to pull down the Tranco research list of most popular/important domains on the Internet --- it's just a text file of zones, one per line --- and write the trivial bash `for` loop to `dig +short ds` each of those zones and count how many have DNSSEC.
For starters you could try `dig +short ds google.com`. It'll give you a flavor of what to expect.
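The counting itself is a couple of lines. A sketch (the zone-list filename is whatever you saved the Tranco list as, and the `dig` calls obviously need network access; the counting helper is pure):

```python
import subprocess

def ds_records(zone: str) -> str:
    """Raw `dig +short ds` output for a zone (network required).
    An empty answer means the parent holds no DS record, i.e. the
    delegation is not DNSSEC-signed."""
    return subprocess.run(["dig", "+short", "ds", zone],
                          capture_output=True, text=True).stdout

def count_signed(outputs: dict) -> int:
    """Given zone -> raw `dig +short ds` output, count signed zones."""
    return sum(1 for out in outputs.values() if out.strip())

# e.g. (hypothetical filename):
# zones = open("tranco.txt").read().split()
# print(count_signed({z: ds_records(z) for z in zones}))
```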
And you still can't seem to make your mind up on whether this is because DNSSEC is still in its infancy or if it's because they all somehow already studied DNSSEC and ended up with the exact same opinion as you. I'm gonna go out on a limb and say that it's not the latter.
What do I have to make my mind up about? I worked on the same floor as the TIS Labs people at Network Associates back in the 1990s. They designed DNSSEC and set the service model: offline signers, authenticated denial. We then went through DNSSEC-bis (with the typecode roll that allowed for scalable signing, something that hadn't been worked out as late as the mid-1990s) and DNSSEC-ter (NSEC3, white lies). From 1994 through 2025 the protocol has never seen double-digit percentage adoption in North America or in the top 1000 zones, and its adoption has declined in recent years.
You're not going to take my word for it, but you could take Geoff Huston's, who recently recorded a whole podcast about this.
That quote is interesting because all of the period reporting I’ve seen says that the attackers did NOT successfully get an HTTPS certificate and the only people affected were those who ignored their browsers’ warnings.
How about another incident in 2022? Attackers BGP hijacked a dependency hosting a JS file, generated a rogue TLS certificate, and stole $2 million. Keep in mind: these are incidents we know about, not including incidents that went undetected.
Noteworthy: "Additionally, some BGP attacks can still fool all of a CA’s vantage points. To reduce the impact of BGP attacks, we need security improvements in the routing infrastructure as well. In the short term, deployed routing technologies like the Resource Public Key Infrastructure (RPKI) could significantly limit the spread of BGP attacks and make them much less likely to be successful. ... In the long run, we need a much more secure underlying routing layer for the Internet."
You know why I'm not coming back at you with links about registrar ATOs? Because they're so common that nobody writes research reports about them. I remember after Laurent Joncheray wrote his paper about off-path TCP hijacking back in 1995; for a while, you'd have thought the whole Internet was going to fall to off-path TCP hijacking. (It did not.)
The argument against DNSSEC is that if you are trying to find some random A record for a server and know whether it is the right one, TLS does that fine, provided you reasonably trust that domain control validation works, i.e., that CAs see authentic DNS.
An argument for DNSSEC is any service configured by SRV records. It might be totally legitimate for the SRV record of some service to point to an A record in a totally different zone. From a TLS perspective you can't tell, because the delegation happened via SRV records, and you only know it is authentic if you either have a signed record or a direct encrypted connection to the authoritative server (the TLS connection to evil.service.example would be valid).
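To make the shape of that problem concrete, a hypothetical zone fragment (all names invented):

```
; in zone service.example:
_imap._tcp.service.example.  3600 IN SRV 10 0 993 mail.other-zone.example.
; The SRV target lives in a different zone. TLS to mail.other-zone.example
; succeeds with a perfectly valid certificate either way; only a signed
; SRV record tells you the delegation itself was authentic.
```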
Yes but it is still possible to execute BGP hijacks that capture 100% of traffic, rendering multi-perspective validation useless. RPKI sadly only solves naive "accidental" BGP hijacks, not malicious BGP hijacks. That's a different discussion though.
I agree and apparently so does the CA/B forum: SC085: Require DNSSEC for CAA and DCV Lookups is currently in intellectual property review.
DCV is CA/B speak for domain-control validation; CAA = these are my approved CAs.
This seems to be optional in the sense that: if a DNS zone has DNSSEC, then validation must succeed. But if DNSSEC is not configured it is not required.
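For reference, a restrictive CAA policy is just a few records in the zone (names and CA identifiers illustrative; tags per RFC 8659):

```
example.com.  3600 IN CAA 0 issue "letsencrypt.org"
example.com.  3600 IN CAA 0 issuewild ";"   ; forbid wildcard issuance entirely
example.com.  3600 IN CAA 0 iodef "mailto:security@example.com"
```

Without DNSSEC, of course, an attacker who can forge your DNS responses can forge the CAA answer too, which is the point of requiring validation for these lookups.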
Not saying they are malicious actors, but an easy answer would be any public WiFi anywhere. They all intercept DNS; less than 1% intercept SNI.
It is also public knowledge that certain ISPs (including Xfinity) sniff and log all DNS queries, even to other DNS servers. TLS SNI is less common, although it may be more widespread now, I haven't kept up with the times.
Popular web browsers send SNI by default regardless of whether it is actually needed. For example, HTTPS-enabled websites not hosted at a CDN may have no need for SNI. But popular web browsers will send it anyway.
Every single ISP in the world. It was a well-documented, abused channel.
They not only intercepted your traffic for profiling but also injected redirects to their branded search. Honestly curious if you're just too young or were one of the maybe 10 people who never experienced this.
Sending traffic to a third party like Quad9 is much safer than to a company that has your name/address/credit card.
If the threat model is BGP hijacking, is DNSSEC actually the answer? If you can hijack a leaf server, can't you hijack a root server? As far as I can tell, the root of DNSSEC trust starts with DNSKEY records for the "." zone that are rotated quarterly. This means every DNSSEC-validating resolver has to fetch updates to those records periodically, and if I can hijack even one route to one of the fixed anycast IPs of [a-m].root-servers.net then I can start poisoning the entire DNSSEC trust hierarchy for some clients, no?
Now, this kind of attack would likely be more visible and thus detected sooner than one targeted at a specific site, and just because there is a threat like this doesn't mean other threats should be ignored or downplayed. But it seems to me that BGP security needs to be tackled in its own right, and DNSSEC would just be a leaky band-aid for that much bigger issue.
The much-more-important problem is that the most important zones on the Internet are, in the DNSSEC PKI, irrevocably coupled to zones owned by governments that actively manipulate the DNS to achieve policy ends.
This is a problem with TLS too, of course, because of domain validation. But the WebPKI responded to that problem with Certificate Transparency, so you can detect and revoke misissued certificates. Not only does nothing like that exist for the DNS, nothing like it will exist for the DNS. How I know that is, Google and Mozilla had to stretch the bounds of antitrust law to force CAs to publish on CT logs. No such market force exists to force DNS TLD operators to do anything.
My (ahem) "advocacy" against DNSSEC is understandably annoying, but it really is the case that every corner of this system you look around, there's some new goblin. It's just bad.
I agree that the problem lies with BGP and we definitely need a solution. You can also say the problem is with TLS CA verification not being built on a solid foundation. Even with that said, solving those problems will take time, and DNSSEC is a valid precaution for today.
RPKI plus ASPA does solve the hijack problem by securing both the origin of a prefix and the AS path of a route.
Yes ASPA is new. Reference implementations in open source routing daemons and RPKI tools are being developed and rolled out. If you want to be a pioneer you can run a bird routing daemon and secure the routes with ASPA. Only experimenters have created ASPA records at this point, however once upon a time we were in the same position with RPKI.
Unfortunately the basic thought process is "it is a security feature, therefore it must be enabled." It is very hard to argue against that, but it is pretty similar to "we must have the latest version of everything, as that is secure" and "we should add more passwords with more requirements and expire them often." It is one of those security theatres that optimizes for reducing accountability but may end up with almost no security gains and huge tradeoffs that can even compromise security through secondary effects (how secure is a system that is down?).
We very definitely do have IPv6. I'm using IPv6 right now. Last numbers I saw, over 50% of North American hits to Google were IPv6. DNSSEC adoption in North America is below 4%, and that's by counting zones, most of which don't matter --- the number gets much lower if you filter it down to the top 1000 zones.
One can hope that someone will give the ISPs in my country a metaphorical hefty kick up the arse, especially as some of the more niche ones have been happily providing IPv6, and business customers can get IPv6, and of course other countries are happily embracing IPv6. So I wouldn't say never.
But the clear evidence is that past promises of it arriving at those major ISPs are very hollow indeed.
It's not the same with DNSSEC in the U.K., though. Many WWW hosting services (claim to) support that right now. And if anything, rather than there being years-old ineffective petition sites clamouring for IPv6 to be turned on, it is, even in 2025, the received wisdom to look to turning DNSSEC off in order to fix problems.
One has to roll one's eyes at how many times the-corporation-disables-the-thread-where-customers-repeatedly-ask-for-simple-modern-stuff-for-10-years is the answer. It was the answer for Google Chrome not getting SRV lookup support, although that was a mere 5 years.
If you want the net to support end to end connectivity we need IPv6. Otherwise you'll end up with layers and layers of NAT and it will become borderline impossible.
A lot of protocols get unstable behind layers of NAT too, even if they're not trying to do end to end / P2P. It adds all kinds of unpredictable timeouts and other nonsense.
As a joke, it’s not easily distinguishable from trolling and since IPv6 is approaching half of all traffic, more in many areas, the humor value is limited.
> […] since IPv6 is approaching half of all traffic, more in many areas, the humor value is limited.
And yet every article on IPv6 has entire brigades of folks going on about how IPv6 is DOA, and "I've been hearing about it for thirty years, where is it?", and "they should have done IPv4 just with larger addresses, adoption would have been much faster and more compatible".
I'm simply pointing out the parallel argument that was made between DNSSEC and IPv6.
Having also been online for thirty years, I’ve seen those jokes soooo many times but only one of them is funny. DNSSEC’s 0.3% usage[1] is within a rounding error of zero but IPv6 is close to half of all internet traffic in many countries. It’s not funny so much as not updating your priors for decades, like joking about how Windows crashes constantly or saying Python is a hobbyist language.
It's a parallel argument, but it's not a good one, because IPv6 is now around ~50% of traffic depending on the service and details, and DNSSEC was introduced earlier and doesn't seem to be going anywhere.
IPv6 probably could have been done better and rolled out faster, and whoever works on IPvNEXT should study what went wrong, but eventually it became better than alternative ways of dealing with a lack of IPv4 addresses, and it started getting real deployment.
Note that without DNS security, whoever controls your DNS server, or is reliably in the path to your DNS server, can issue certificates for your domain. The only countermeasure against this is certificate transparency, which lets you yell loudly that someone's impersonating you but doesn't stop them from actually doing it.
In this case, there's an avalanche of money and resources backing up the problem domain DNSSEC attempts to make contributions in, and the fact that it's deployed in practically 0% of organizations with large security teams is telling.
I would say it is more a testament to the unfortunate state of cybersecurity. These "theoretical" attacks happen. Everyone just thinks it won't be them.
My rebuttal is that the DNSSEC root keys could hit Pastebin tonight and in almost every organization in the world nobody would need to be paged. That's not hyperbole.
You are mostly right, but I would hope that certain core security companies and organizations would get paged. Root CAs and domain registrars and such should have DNSSEC validation.
Unfortunately, DNSSEC is a bit expensive in terms of support burden, additional bugs, reduced performance, etc. It will take someone like Apple turning DNSSEC validation on by default to shake out all the problems. Or it will take an exploitable vulnerability akin to SIM-swapping to maybe convince Let's Encrypt and similar services reliant on proof-by-DNS that they must require DNSSEC signing.
SIM-swapping is a much more important attack vector than on-path/off-path traffic interception, and is closer to how DNS hijacking happens in practice (by account takeover at registrars).
It does in fact make sense to address the most important attacks before the exotic ones, especially when addressing the exotic attacks involves drastically more work than the common ones. I think you're making my case for me.
If that happened, we'd revert to pre-DNSSEC security levels: an attack would still be hard to pull off (unless you own a root DNS server or are reliably in the path to one). It's like knowing the private key for news.ycombinator.com - it still doesn't do anything unless I can impersonate the Hacker News server. But that was still enough of a risk to justify TLS on the web. Mostly because ISPs were doing it to inject ads.
If the problem is the path between registrar and CA, then deploying the fix to clients seems like absolute overkill.
Just create a secure path from CA to registrar. RDAP-based, DoH-based, or something from scratch; it does not really matter. It will only need to cover a few thousand CAs and TLDs, so it will be vastly simpler than upgrading billions of internet devices.
One could argue the primary (not the only) risk addressed by DNSSEC is third party DNS service, i.e., shared caches accessible from the internet
If this is true, then one might assume DNSSEC is generally unnecessary if one is running their own unshared cache only accessible from the loopback or the LAN
Software like djb's dnscache, a personal favourite, has no support for DNSSEC
NLNet's unbound places a strong emphasis on supporting DNSSEC. The unbound documentation authors recommend using it
Dan Kaminsky showed us why we need DNSSEC. Without it, it's quite easy to MITM and/or spoof network traffic. Some governments like to do this, so they'll continue to make it difficult for DNSSEC to be fully adopted.
The original registrar, Network Solutions, doesn't even fully support DNSSEC. You can only get it if you pay them an extra $5/mo and let them serve your DNS records for you. So for $5/mo you get DNSSEC, but you defer control of your records to them, which isn't really secure.
It's trivial to spoof DNS even with DNSSEC set up, because DNSSEC is a server-to-server protocol. Your browser doesn't speak DNSSEC; it speaks plaintext DNS, and trusts a single bit in the response header that says whether the upstream caching resolver actually checked signatures.
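That "single bit" is the AD (Authenticated Data) flag in the DNS message header; a stub's trust in it boils down to something like this (minimal sketch, not a resolver):

```python
import struct

def ad_bit_set(response: bytes) -> bool:
    """The AD (Authenticated Data) flag is bit 0x0020 of the 16-bit
    flags field at bytes 2-3 of a DNS message (RFC 1035 header layout,
    AD added by the DNSSEC RFCs). A stub that does not validate
    signatures itself has nothing else to go on: whoever forges the
    response also gets to set this bit."""
    (flags,) = struct.unpack("!H", response[2:4])
    return bool(flags & 0x0020)
```

Which is why validation on the caching resolver protects the resolver-to-authoritative path, but does nothing for the last mile unless the client validates too (or the last mile is itself encrypted and authenticated).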
It is so trivial to do so you wonder why more people don’t. I’d imagine the passionate “dnssec is bad” rants some people go out of their way to post on every comment on this site might be a factor.
DNSSEC induced outages aside, I will start signing my zones when:
- DNSSEC auto-signing is tightly integrated into all authoritative DNS daemons instead of being a set of scripts, cron jobs and other bolt-on concepts. i.e. I never see a key, a script, etc... The daemon logs everything it is parsing and loading when set to verbose or debug.
- My primary and secondary servers and other people's servers all present a DNSSEC-autosign(ed) capability during AXFR/IXFR negotiation, and the server knows what to do with it and what additional sanity checks to perform.
- Zone transfers are universally encrypted by all DNS daemons. All of them. Every secondary service one could stumble upon must support XoT encrypted zone transfer (RFC 9103), currently supported by NSD and BIND. Otherwise this is just a LARP. Optionally also DNS over QUIC (RFC 9250).
- Primary and Secondary servers do sanity checks to determine if I am about to step on my own landmine and will rudely and hopefully quite offensively refuse to activate any changes if something seems off.
- Optionally and optimally, I would like to see all of the ROOT servers support DoT with a long-lived cert and all that implies. This could be a separate set of physical servers that intercept/DNAT port 853 to take the load off the actual ROOT servers.
I’m not sure what that would even mean. What is the network operation scenario where you want the credentials for your machines to be tied to a global PKI tree whose roots are all either multinational companies or world governments?
Optional, alternative standards don't have visibility and don't get used.
Without a way to measure, nothing happens. There was once a few, UX-hostile DNSSEC & DANE browser extensions but these never worked well and were discontinued.
I’ve honestly never known which sites use DNSSEC and which don’t. Browsers don’t warn you when it’s missing, and most people probably wouldn’t even know where to look.
It’s hard to care about something like that, even if it really does matter behind the scenes.
I think you're confusing DNSSEC with HTTPS. DNSSEC is how you go from an internet name to an IP address, so it happens before your browser starts talking to the website.
We don't technically need ICANN or the whole DNS system anymore.
Anyone could quickly build a public cryptographically secure blockchain-based DNS system where people could optionally sync and query their own nodes (without even going over the wire). People could buy and own their domain names on-chain using cryptocurrency instead of repeatedly renting them from some centralized entity.
You could easily build this today by creating a Chrome Extension with a custom URL/address bar which bypasses the main one and makes a call to blockchain nodes instead of a DNS resolver; it would convert a domain name into an IP address by looking up the blockchain. This system could scale without limit in terms of reads as you can just spin up more nodes.
I mean it'd be so easy it's basically a weekend project if you use an existing blockchain as the base. Actually Ethereum already did something like this with .ETH domains but I don't think anyone built a Chrome Extension yet; or at least I haven't heard, though it's possible to enable in Brave browser via settings (kind of hidden away). Also, there is Unstoppable Domains.
People have been doing that since roughly 2010, so the failures are important to learn why it’s not a weekend project.
Adoption is critical for alternate roots, so the first question has to be how something gets enough users for anyone to feel it’s worth the trouble of using: the failure mode of DNS is that links break, email bounces, and tons of things which do server-side validation reject it, so this really limits usage.
The other big problem is abuse. Names are long term investments, so there are the usual blockchain problems of treating security as an afterthought but you also have the problem that third-parties have a valid need to override the blockchain (e.g. someone registers Disney.bit and points it at a porn site or serves malware from GoogleChrome.eth). Solving that means that you’re back to trusting the entity which created the system or maybe a group of operators, so the primary appeal is going to be if you can make it cheaper than owning a traditional domain.
This would theoretically be possible if browsers did DANE and didn't, because of middlebox fuckery, have to have a fallback path to the X.509 WebPKI because DNSSEC requests get dropped like 5% of the time. But because that is the case, no browser does DANE validation today, and when they did, many years ago, those DANE CA certs were effectively yet another CA; they actually expanded your attack surface rather than constricting it.
Even if that wasn't the case --- and it emphatically is --- you'd still be contending with a "personal CA" that in most cases would have its root of trust in a PKI operated by world governments, most of which have a demonstrated aptitude for manipulating the DNS.
This seems like the wrong trade-off; and, if it really mattered, we--including you--should be working to push DNSSEC/DANE to improve, not for people to double down on WebPKI. Maybe we need to be advocating for DNS transparency, for example; but like, the road to there is through DANE, not through WebPKI: staunchly shilling for WebPKI isn't helping either the situation, the ecosystem, or the landscape.
OK, even so, let's examine it? The premise of CT is already a bit dubious: it involves being able to figure out, after the fact, that there was a certificate issued which should not have been issued, and the attack has already been performed. The only reason this makes any sense at all is because it (supposedly) provides an incentive to people not to participate in the attack in the first place...
...but, the assumption of this is pretty much always that the party that is at fault in such a scenario is always the CA. Only, the premise of ACME undermines that: if the government decides to take over the DNS record, Let's Encrypt absolutely will issue the certificate, and it absolutely wouldn't be their fault. If we did punish them for it, then what we are really saying is that ACME is a bad idea within WebPKI.
(BTW: a big part of your argument against DNSSEC always relies on "people aren't able to do it correctly", and if we apply that to CT the situation also sucks: as the expiration time for certificates goes lower and lower, the number of certificates I'm being issued goes up and up, and unless I have a durable central log of my requests--and no normal installation does--I can't verify any of the information from CT anyway.)
(Yes: we could build that software, so that Apache won't re-issue my certificates without first making sure that the request has been logged durably into a database, and there is software that also goes around scavenging CT to check if my certificate is mis-issued, but I hesitate to be that charitable, given that you have spent years ignoring people who assert that clients, not resolvers, must verify DNSSEC.)
(And, to take this as a moment to push back here on another of your refrains: yes, people can get DNSSEC wrong and accidentally publish their keys... but, the likelihood they will do something that dumb--despite not doing so for TLS--is way lower than the almost-certainty that they aren't going to even know what CT is, much less track it--and, critically, report after-the-fact "oops too late" attacks--for mis-issuance.)
The problem you're up against in writing persuasively to me is that I have a bunch of reasons to dislike DNSSEC. The transparency and root-of-trust issues are the ones that, in my experience, are most legible to people who aren't deep into cryptography. But they're not my biggest issue.
I think the fundamental design of DNSSEC is wrong. Its role in Internet security is mostly as a vector to keep 1990s cryptography in common use. Offline signing was wrong. Signing and not encrypting was wrong. The model we ended up with for authenticated denial is wrong. The trust model is wrong. Top-down deployment, rather than bottom-up with incremental value: wrong. It's really hard to look at DNSSEC as a design and find anything right with it.
I don't blame the DNSSEC authors for this; they were working with constraints and assumptions that got rewritten in the 2000s. I do blame the DNSSEC non-authors who refuse to accept the idea that the IETF could have gotten something wrong and are still trying to push this contraption on people.
Finally, as I keep saying: we're never going to get "DNS transparency". Governments run the roots of the DNS hierarchy. Governments don't run CAs, and even then, a single browser had to amass so much market power that they could dictate terms to the CAs (and, in the process, kill some of the largest CAs) in order to make CT happen. That simply cannot happen with the DNS.
The dumbest thing about all of this is that the only purpose DNSSEC still serves is as an extra layer of authentication for domain-based WebPKI challenges. Nobody needs to change their zones to get better security than that! We can just run a protocol between the CAs and the registrars (it exists!).
> Governments don't run CAs, and even then, a single browser had to amass so much market power that they could dictate terms to the CAs (and, in the process, kill some of the largest CAs) in order to make CT happen.
If we accept this thesis, it means that the only reason we are able to have a secure internet is because we also have a monopoly on browser technology; and like, we don't want there to be a monopoly on browser technology, right? That means the current situation is unstable at best until we finally manage to regulate that monopoly out of existence... or, and maybe you feel this is the worst case scenario, attempt to turn it into a public good.
> Governments run the roots of the DNS hierarchy.
That this is true has some devastating consequences that I feel as if I've never seen you discuss, as TLS has not, does not, (seemingly) will not, and potentially even cannot fix the problem that a government is in a position to subvert the DNS hierarchy: it is just a fundamental property of deciding to build the web on top of a federated namespace managed by governments. We need to fix this at the web-protocol level by ditching the current TLD authority for something that isn't governmental (and is more distributed, not federated).
I mean, let's look at surveillance that doesn't even require a certificate: 1) governments are in a position to redirect traffic to websites within their namespace by forging the DNS records anywhere under their TLD; 2) IP doesn't provide any protection against someone forwarding the encrypted packets; and 3) TLS only encrypts the content, but doesn't have any mechanism to encrypt the metadata of the communication channel (timing and length)...
...this is already devastating, as not only does it let you target small regions of people and determine if they are accessing a website, it also (and this is the thing that I find not enough people really internalize correctly) lets you know exactly what those users are accessing with high certainty, as you can see the approximate size of the responses to their requests. This technique has been successfully implemented in the past to figure out not merely which page of a site you are visiting, but what region of Google Maps you are looking at (based on patterns of map tile sizes), what video on YouTube you are looking at (based on patterns of video chunk sizes), and (maybe the most incredible) what your search query is (based on patterns of type-ahead suggestion lists: every keystroke causes a new request with a new JSON response of a different size; I think they did it for PornHub).
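The core of that technique is easy to sketch; a toy matcher (profiles and sizes invented; real attacks use far better statistics than absolute differences):

```python
def best_match(observed_sizes, profiles):
    """Guess which resource an encrypted session fetched by comparing
    observed response sizes against size profiles collected in advance.
    TLS hides content, not lengths or timing."""
    def distance(a, b):
        if len(a) != len(b):
            return float("inf")
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(profiles, key=lambda name: distance(observed_sizes, profiles[name]))

# e.g. per-map-tile byte counts gathered ahead of time (made-up numbers):
tile_profiles = {
    "maps: downtown": [412, 13020, 9844],
    "maps: airport": [412, 7210, 15992],
}
```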
> Finally, as I keep saying: we're never going to get "DNS transparency".
I think you're throwing up your hands in defeat at something that we actually can fix in the browser stacks: simply add a new TLD that stores all of its second-level records on a blockchain. We already have some of these... they aren't designed well, as they don't have people who actually know much about the needs of internet protocols involved (the level of "defeat snatched from the jaws of victory" in this space is simply insane), but if people like us constantly point in the correct direction, we can pull this off.
Or like, hell: maybe you find distributed blockchain tech distasteful... what if the EFF partnered with Google to manage a new TLD? Hell: Google already manages a TLD in a similar conceptual space (the one where Chrome requires HTTPS on all connections)! And then, what if they decided that DNS transparency was important enough to implement a scheme similar to the CA transparency lists for it? It would take some effort to come up with the correct design for this, but it isn't as if CA transparency was easy to pull off or is in any way perfect even today: it is a shaky system that only sometimes helps, but you care anyway!
That isn't going to happen if you just write off DNS transparency as so impossible that it isn't even worth discussing. If, instead, you consistently said DNS transparency is really important, even though it is hard, maybe people at Google, EFF, Apple, Mozilla, or whoever else might be relevant for this might suddenly get inspired to work on it... and, sure: doing it as a new TLD doesn't fix the problem immediately, but new websites are launched all the time, new TLDs have already been successfully bootstrapped (even by Google), developers seemingly like to jump on fad TLDs, and if you start somewhere, maybe the pressure will slowly cause some of the existing TLDs to do something similar.
Sure, DNSSEC might be a non-starter... but like, replacing DNSSEC is equivalent to fixing it, if you are willing to take a long-enough term approach to your advocacy efforts.
Ironically, I think you've managed to meet my cynicism about DNS security with something even more reductive. I think it's more true than not that a browser monopoly reformed the WebPKI. That doesn't mean I think a browser monopoly is a good thing, only that it did one good thing, and losing that good thing would be, uh, bad.
You're accusing me of "throwing my hands up". But you're also writing a comment that suggests maybe it's not a big deal if TLS certificates can get spoofed by state adversaries; how could it be worse than how things are now?
I mean: in a sense, I get it, because I don't think any of this is the high order bit of Internet security, which has a lot more to do with xnu and Android kernel security decisions than with all this protocol ceremony.
As I keep saying, the fact that DNSSEC is essentially a key escrow system run by world governments is just one of my objections, and not the one I personally care most about. What I care most about is spending hundreds of millions of dollars in a forklift upgrade of a core Internet protocol, all to take several steps backwards in Internet security.
Parts of the inevitable Thomas Ptacek DNSSEC rant remind me of the years of denialism from C++ people before the period when they were merely "concerned" about safety, and the past few years of at least paying lip service to the idea that C++ shouldn't be awful...
One thing I like about Thomas’ history on this issue has been the focus on UX. I think that “can probably be used safely by an expert who understands the domain” as a failure mode is something we should spend more time thinking about as an architecture failure rather than a minor frictional cost.
Sure, although in this space Thomas was already entirely happy with the early Web PKI which is completely terrible for this - similar conditions apply.
At work this week I was hand-holding a DB engineer who was installing some (corporate, not Web PKI) certs and it reminded me of those bad old days. He's got a "certificate" and, because this isn't my first rodeo, I ask him to describe it in more detail before he just sends it to me to look at. Of course it's actually a PKCS#12 file, so if he'd sent it, the private key inside would have been compromised. But he doesn't know that: the whole system was introduced to him as a black box, which renders him unable to make good decisions. Out of that conversation we got more secure systems (fixing some issues that pre-existed the fault I was there to help with) and an ally who understands what the technology is actually for and is now trying to help us deliver its benefits rather than just following rote instructions he doesn't understand.
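For anyone who hasn't hit this distinction before, here is a rough sketch of how you might check what you've actually been handed with standard OpenSSL tooling. The file names are placeholders, and the first two commands just generate a throwaway cert and bundle so the example is self-contained; in real life you'd be inspecting a file someone sent you.

```shell
# Generate a throwaway self-signed cert and key (demo setup only):
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out server.crt \
  -subj "/CN=example.test" -days 1

# Bundle cert + private key into a PKCS#12 file (empty password, demo only):
openssl pkcs12 -export -in server.crt -inkey key.pem -out server.p12 -passout pass:

# A bare certificate contains only public data and is safe to email around:
openssl x509 -in server.crt -noout -subject -dates

# The PKCS#12 bundle ALSO contains the private key; listing its contents
# shows a key bag alongside the certificate, which is the red flag:
openssl pkcs12 -info -in server.p12 -passin pass: -nokeys

# If someone only needs the certificate, extract just that from the bundle:
openssl pkcs12 -in server.p12 -passin pass: -clcerts -nokeys -out cert-only.pem
```

The black-box failure mode in the anecdote is exactly that the engineer had no way to know a `.p12`/`.pfx` file is a key-plus-cert bundle rather than "a certificate", so a thirty-second inspection habit like this is worth teaching.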
Anyway, just as we went from "I dunno, I clicked a button and typed in the company credit card details, then I got emailed this PFX file to put on the server" to "Click yes when it asks if you want Let's Encrypt" with both an invisible improvement to delivered security and a simpler workflow, that's very much possible for DNSSEC if people who want to solve the problem put some work in, rather than contentedly announcing that it can't be fixed and we shouldn't worry about it.
I don’t know that I’ve read much support for “entirely happy”, but the key difference is that DNSSEC is much harder to upgrade. You didn’t need to wait for my ISP to upgrade their DNS server before you could stop using PKCS#12, and you definitely didn’t need me to upgrade my operating system.
The most important work to put in isn’t tweaking DNSSEC but changing it from the pre-PC Internet model where everyone completely trusted their network operators – pushing signature validation out to the client and changing the operating system APIs to make better user interfaces possible.
> possible for DNSSEC if people who want to solve the problem
But for several decades, DNSSEC proponents have been complaining about the vocal detractors, rather than actually addressing the problems that have been identified. That's a pretty significant track record on which to make a fair judgment about who is making the strongest case.
Someone above offered a link [1] that gives some pretty good reasons why nobody is stepping up to fix the problems.
(This is Geoff Huston, for what it's worth, who is an Internet operations luminary of the old breed, and a very long-time very enthusiastic DNSSEC proponent. He doesn't really "call time on" DNSSEC, though; the APNIC Ping podcast he did on this is worth listening to.)