> See for example the many problems of NIST P-224/P-256/P-384 ECC curves
What are those problems, exactly? The whitepaper from djb only makes vague claims about the NSA being a malicious actor, but after ~20 years no known backdoors or intentional weaknesses have been reliably proven?
As I understand it, a big issue is that they are really hard to implement correctly. This means that backdoors and weaknesses might not exist in the theoretical algorithm, but still be common in real-world implementations.
On the other hand, Curve25519 is designed from the ground up to be hard to implement incorrectly: there are very few footguns, gotchas, and edge cases. This means that real-world implementations are likely to be correct implementations of the theoretical algorithm.
This means that, even if P-224/P-256/P-384 are on paper exactly as secure as Curve25519, they could still end up being significantly weaker in practice.
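To make "hard to implement correctly" concrete: a P-curve implementation has to validate every peer-supplied point against the curve equation before doing any scalar math with it, or it's open to invalid-curve attacks. A minimal sketch of the check in Python, using the published P-256 parameters (an illustration, not a vetted implementation):

    # NIST P-256: y^2 = x^3 - 3x + b (mod p); constants from FIPS 186-4
    P256_P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
    P256_B = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

    def is_on_curve(x: int, y: int) -> bool:
        # Reject any peer-supplied point that isn't on the curve; skipping
        # this check is what enables invalid-curve attacks.
        if not (0 <= x < P256_P and 0 <= y < P256_P):
            return False
        return (y * y - (x * x * x - 3 * x + P256_B)) % P256_P == 0

The check itself is one line; the problem is that nothing breaks visibly when you forget it.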
I tried to defend a similar argument in a private forum today and basically got my ass handed to me. In practice, not only would modern P-curve implementations not be "significantly weaker" than Curve25519 (we've had good complete addition formulas for them for a long time, along with widespread hardware support), but Curve25519 causes as many (probably more) problems than it solves --- cofactor problems being more common in modern practice than point validation mistakes.
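For anyone unfamiliar with the cofactor point: Curve25519 has cofactor 8, and X25519 papers over that by clamping the scalar (RFC 7748), which is fine for plain DH but leaves the eight small-order points for higher-level protocols to handle. A sketch of the clamping, as I understand it:

    def clamp(k: bytes) -> int:
        # X25519 scalar clamping per RFC 7748; assumes a 32-byte
        # little-endian scalar.  Clearing the low 3 bits makes every
        # scalar a multiple of 8, neutralizing the cofactor-8 subgroup
        # for plain DH, but protocols that handle raw group elements
        # still have to deal with the small-order points themselves.
        assert len(k) == 32
        kb = bytearray(k)
        kb[0] &= 248   # clear the 3 low bits
        kb[31] &= 127  # clear the top bit
        kb[31] |= 64   # set bit 254, fixing the ladder length
        return int.from_bytes(kb, "little")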
In TLS, Curve25519 vs. the P-curves is a total non-issue, because TLS isn't generally deployed anymore in ways that even admit point validation vulnerabilities (even if implementations still had them). That bit I already knew, but I'd assumed ad-hoc non-TLS implementations, by random people who don't know what point validation is, might tip the scales. Turns out, I guess not.
Again, by way of bona fides: I woke up this morning in your camp, regarding Curve25519. But that won't be the camp I go to bed in.
I agree that Curve25519 and other "safer" algorithms are far from immune to side channel attacks in their implementations. For example, [1] is a single-trace EM side channel key recovery attack against Curve25519 as implemented in MbedTLS on an ARM Cortex-M4. That implementation had the benefit of a constant-time Montgomery ladder, something NIST P-curve implementations have traditionally lacked an equivalent of, but it nonetheless failed due to a conditional swap instruction that leaked secret state via EM.
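For context, the conditional swap at issue is the standard arithmetic-masking trick that keeps the Montgomery ladder branch-free. A rough Python rendering of the idea (the real MbedTLS code is C over bignum limbs):

    LIMB_MASK = (1 << 64) - 1  # model a 64-bit limb

    def ct_cswap(bit: int, a: int, b: int):
        # Branch-free conditional swap: no secret-dependent branch, so
        # no branch-timing leak.  The attack in [1] recovered `bit`
        # anyway, because the power/EM signature of these masked XORs
        # differs between bit=0 and bit=1.
        mask = (-bit) & LIMB_MASK  # bit=1 -> all ones, bit=0 -> zero
        t = mask & (a ^ b)
        return a ^ t, b ^ t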
The general question is: could a standard written in 2025 build on decades of research and implementation failures to specify side-channel-resistant algorithms, addressing conditional jumps, processor optimisations for math functions, and the other mechanisms that can leak secret state via timing, power, or EM signals? See for example section VI of [1], which proposed a new side channel countermeasure that ended up being implemented in MbedTLS to mitigate the conditional swap instruction leak. Could such countermeasures be added to the standard in the first instance, rather than left to implementers to figure out from their review of IACR papers?
One could argue that standards simply follow the interests of their proposers and the organisations behind them, who might not care about cryptography implementations on smart cards, TPMs, etc., or about side channel attacks between different containers on the same host. Instead, perhaps they only care about side channel resistance across remote networks with high noise floors for timing signals, where attacks such as [2] (a 300ns timing signal) are not considered feasible. If this is the case, I would argue that the standards should still state their security model more clearly, for example:
* Is the standard assuming the implementation has a noise floor of 300ns for timing signals, 1ms, etc? Are there any particular cryptographic primitives that implementers must use to avoid particular types of side channel attack (particularly timing)?
* Implementation fingerprinting resistance/avoidance: how many choices can an implementation make that may allow a cryptosystem party to be deanonymised by the specific version of a crypto library in use?[3] Does the standard provide any guarantee for fingerprinting resistance/avoidance?
> As I understand it, a big issue is that they are really hard to implement correctly.
Any reference for the "really hard" part? That is a very interesting subject and I can't imagine it's independent of the environment and development stack being used.
I'd welcome any standard that's "really hard to implement correctly" as a testbed for improving our compilers and other tools.
I posted above, but most of the 'really hard' bits come from the unreasonable complexity of actual computing vs the more manageable complexity of computing-with-idealized-software.
That is, using these algorithms as a smoke test for compiler and tool safety, and improving both thereby, is good. But when you're trying to harden these algorithmic implementations, you also need to think hard about what happens when someone induces an RF pulse at specific timings targeted at a certain part of a circuit board. Lots of these are things compiler architects typically say are "not my problem".
It would be wise for people to remember that it's worth doing basic sanity checks before making claims like "no backdoors from the NSA." Strong encryption has historically been restricted, which is how we got things like DES, 3DES, and Crypto AG. In the modern internet age, Juniper had a bad time with this one: https://www.wired.com/2013/09/nsa-backdoor/.
Usually it’s really hard to distinguish intent, and so it’s possible to develop plausible deniability with committees. Their track record isn’t perfect.
With WPA3, cryptographers warned about the known pitfall of standardizing a timing-sensitive PAKE, and Harkin got it through anyway. Since it was a standard, the WiFi committee gladly selected it, which then resulted in dragonbleed among other bugs. The hash2curve techniques have since patched that.
It's "Dragonblood", not "Dragonbleed". I don't like Harkin's PAKE either, but I'm not sure what fundamental attribute of it enables the downgrade attack you're talking about.
When you're talking about the P-curves, I'm curious how you get your "sanity check" argument past things like the Koblitz/Menezes "Riddle Wrapped In An Enigma" paper. What part of their arguments did you not find persuasive?
Yes, Dragonblood. I'm not speaking of the downgrade but of the timing side channels, which were called out very loudly and then ignored during standardization. And then the PAKE showed up in WPA3 of all places; that was the key issue, and it was extended further in a Brainpool-curve-specific attack on the proposed initial mitigation. It's a good example of error by committee. I don't address that article, and I don't know why the NSA advised migration that early.
The Riddle paper I've not read in a long time, if ever, though I don't understand the question. As Scott Aaronson recently blogged, it's difficult to predict human progress with technology, and it's possible we'll see Shor's algorithm running publicly sooner than consensus expects. It could be that in 2035 the NSA's call 20 years prior looks like it was the right one, in that ECC is insecure, but that wouldn't make the replacements secure by default, of course.
Aren't the timing attacks you're talking about specific to oddball parameters for the handshake? If you're doing Dragonfly with Brainpool curves you're specifically not doing what NSA wants you to do. Brainpool curves are literally a rejection of NIST's curves.
If you haven't read the Enigma paper, you should do so before confidently stating that nobody's done "sanity checks" on the P-curves. Its authors are approximately as authoritative on the subject as Aaronson is on his. I am specifically not talking about the question of NSA's recommendation on ECC vs. PQ; I'm talking about the integrity of the P-curve selection, in particular. You need to read the paper to see the argument I'm making; it's not in the abstract.
Ah, now I see what the question was; it seemed like a non sequitur. I misunderstood the comment by foxboron as a concern about backdoors in general, not a claim that P-256 specifically is backdoored. I hold no such view; surely Bitcoin should be good evidence of that.
Instead I was stating that weaknesses in cryptography have been historically put there with some NSA involvement at times.
For Dragonblood: the Brainpool curves do have a worse leak, but as stated in the Dragonblood paper, “we believe that these sidechannels are inherent to Dragonfly”. The first attack submission did hit P-256 setups before the minimum iteration count was increased, and afterward it was more applicable to same-system cache/microarchitectural bugs. These attacks were more generally mitigated correctly once deterministic hash-to-curve algorithms rolled out. There were many bad choices made, of course, that left the PAKE more exploitable: putting the client MAC in the pre-commits, having that downgrade, including the Brainpool curves. But to my point on committees: cryptographers warned strongly during standardization that this could be an attack, and no course correction was taken.
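To sketch the shape of the leak for anyone who hasn't read the paper: Dragonfly's original "hunting and pecking" password-to-curve conversion loops a password-dependent number of times, so its running time carries information about the password. A toy model with made-up parameters (nothing like the real WPA3 code):

    import hashlib

    # Toy parameters for illustration only
    P = 2**127 - 1  # a convenient prime modulus
    A, B = -3, 7    # toy curve coefficients

    def is_square(n: int) -> bool:
        # Euler's criterion: n is a quadratic residue mod P
        # iff n^((P-1)/2) == 1 (mod P)
        return pow(n % P, (P - 1) // 2, P) == 1

    def hunt_and_peck(password: bytes, token: bytes) -> int:
        # Keep hashing until the candidate is a valid x-coordinate.
        # The iteration count, and therefore the running time, depends
        # on the password; that is the signal Dragonblood measured.
        # The durable fix is deterministic hash-to-curve.
        for counter in range(1, 256):
            h = hashlib.sha256(password + token + bytes([counter])).digest()
            x = int.from_bytes(h, "big") % P
            if is_square((x * x * x + A * x + B) % P):
                return x  # success on try `counter` leaks via timing
        raise RuntimeError("no candidate found")

The stopgap of always running a minimum number of iterations narrows the signal; deterministic hash-to-curve removes it.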
The NSA changed the S-boxes in DES, and this made people suspicious that they had planted a backdoor. But when differential cryptanalysis was later discovered publicly, people realized that the NSA's changes to the S-boxes had made them more secure against it.
That was 50 years ago. Since then we've had an NSA employee co-authoring the paper which led to Heartbleed, the backdoor in Dual EC DRBG (which has been successfully exploited by adversaries), and documentation from Snowden confirming NSA compromise of standards-setting committees.
> And since then we have an NSA employee co-authoring the paper which led to Heartbleed
I'm confused as to what "the paper which led to Heartbleed" means. A paper proposing/describing the heartbeat extension? A paper proposing its implementation in OpenSSL? A paper describing the bug/exploit? Something else?
And in addition to that, is there any connection between that author and the people who actually wrote the relevant (buggy) OpenSSL code? If the people who wrote the bug were entirely unrelated to the people authoring the paper then it's not clear to me why any blame should be placed on the paper authors.
The original paper proposing the OpenSSL Heartbeat extension was written by two people: one worked for NSA, and the other was a student at the time who went on to work for BND, the "German NSA". The paper authors also wrote the extension itself.
I know this because when it happened, I wanted to know who was responsible for making me patch all my servers, so I dug through the OpenSSL patch stream to find the authors.
I'm asking what the paper has to do with the vulnerability. Can you answer that? Right now your claim basically comes down to "writing about CMake is evidence you backdoored CMake".
> Right now your claim basically comes down to "writing about CMake is evidence you backdoored CMake".
This statement makes it clear to me that you don't understand a thing I've said, and that you don't have the necessary background knowledge of Heartbleed, the XZ backdoor, or concepts such as plausible deniability to engage in useful conversation about any of them. Else you would not be so confused.
Please do some reading on all three. And if you want to have a conversation afterwards, feel free to make a comment which demonstrates a deeper understanding of the issues at hand.
Sorry, you're not going to be able to bluster your way through this. What part of the paper you're describing instructed implementers of the TLS Heartbeat extension to copy data into an uninitialized buffer and then transmit it on the wire?
> What part of the paper you're describing instructed implementers of the TLS Heartbeat extension to copy data into an uninitialized buffer and then transmit it on the wire?
That's a very easy question to answer: the implementation the authors provided alongside it.
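For anyone following along, the shape of the bug: the Heartbeat handler trusted the length field inside the attacker's own message and echoed back that many bytes, without checking how many bytes were actually received. A toy Python model of the pattern (the real bug is a C memcpy in OpenSSL):

    def handle_heartbeat(memory: bytes, msg_offset: int) -> bytes:
        # The claimed payload length comes from the attacker's own
        # message and is never checked against the bytes actually
        # received, so the response can echo up to 64KB of adjacent
        # process memory back over the wire.
        claimed_len = int.from_bytes(memory[msg_offset:msg_offset + 2], "big")
        payload_start = msg_offset + 2
        # BUG: no check that claimed_len <= the actual payload size
        return memory[payload_start:payload_start + claimed_len]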
If you expect authors of exploits to clearly explain them to you, you are not just ignorant of the details of backdoors like the one in XZ (CMake was never backdoored, a "typo" in a CMake file bootstrapped the exploit in XZ builds), but are naive to an implausible degree about the activities of exploit authors.
If you tell someone you're going to build an exploit and how, the obvious response will be "no, we won't allow you to." So no exploit author does that.
Think the above poster is full of bologna? It's less painful for everyone involved, and the readers, to just say that and get that out of the way rather than trying to surgically draw it out over half a dozen comments. I see you do this often enough that I think you must get some pleasure out of making people squirm. We know you're smart already!
I think their argument is verkakte but I literally don't know what they're talking about or who the NSA stooge they're referring to is, and it's not so much that I want to make them squirm so much as that I want to draw the full argument out.
I think your complaint isn't with me, but with people who hedge when confronted with direct questions. I think if you look at the thread, you'll see I wasn't exactly playing cards close to my chest.
I don't make a habit of googling things for people when they could do it just as quickly themselves. There is only one paper proposing the OpenSSL heartbeat feature. So I have not been unclear, nor can there be any confusion about which it is. Perhaps we'll learn someday what tptacek expects to find or not to find in it, but he'll have to spend 30 seconds with Google. As I did.
Informing one's self is a pretty low bar for having a productive conversation. When one party can't be arsed to take the initiative to do so, that usually signals the end of useful interaction.
A comment like "I googled and found this paper... it says X... that means Y to me." would feel much less like someone just looking for an argument, because it involves effort and stating a position.
If he has a point, he's free to make it. Everything he needs is at his fingertips, and there's nothing I could do to stop him, nor would I want to. I asked for a point first thing. All I've gotten in response is combative rhetoric which is neither interesting nor informative.
Means, motive, and opportunity. Seems to check all the boxes.
There's no conclusive evidence that it wasn't purposeful. And plenty of evidence of past plausibly deniable attempts. So you can believe whatever lets you sleep better at night.
The NSA also wanted a 48-bit key, which was sufficiently weak to brute-force with their resources. Industry and IBM initially wanted 64 bits. IBM compromised and gave us 56 bits.
Yes, NSA made DES stronger. After first making it weaker. IBM had wanted a 128-bit key, then they decided to knock that down to 64-bit (probably for reasons related to cost, this being the 70s), and NSA brought that down to 56-bit because hey! we need parity bits (we didn't).
They're vulnerable to "high-S" malleable signatures (in ECDSA), while Ed25519 isn't. No one is claiming they're backdoored (well, some people somewhere probably are), but they do have failure modes that Ed25519 doesn't, which is the GP's point.
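For reference, the usual mitigation: because ECDSA accepts both (r, s) and (r, n - s), careful consumers (Bitcoin most famously) reject or rewrite "high-S" signatures. A sketch using the published P-256 group order:

    # P-256 group order n; the same trick applies to ECDSA over any
    # curve, since (r, s) and (r, n - s) both verify.
    P256_N = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551

    def normalize_s(r: int, s: int):
        # Rewrite the signature into canonical "low-S" form so a third
        # party can't republish a second valid encoding of it.  RFC 8032
        # Ed25519 avoids the issue by accepting exactly one encoding.
        if s > P256_N // 2:
            s = P256_N - s
        return r, s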
In the NIST curve arena, I think DJB's main concern is engineering implementation. From an online slide deck he published:
> We’re writing a document “Security dangers of the NIST curves”
> Focus on the prime-field NIST curves
> DLP news relevant to these curves? No
> DLP on these curves seems really hard
> So what’s the problem?
> Answer: If you implement the NIST curves, chances are you’re doing it wrong
> Your code produces incorrect results for some rare curve points
> Your code leaks secret data when the input isn’t a curve point
> Your code leaks secret data through branch timing
> Your code leaks secret data through cache timing
> Even more trouble in smart cards: power, EM, etc.
> Theoretically possible to do it right, but very hard
> Can anyone show us software for the NIST curves done right?
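The "incorrect results for some rare curve points" bullet refers to incomplete addition formulas: the textbook affine addition law inverts x2 - x1, which is zero exactly when the two points share an x-coordinate, so code that forgets the special case fails only on rare inputs. A toy illustration:

    def affine_add(p: int, x1: int, y1: int, x2: int, y2: int):
        # Textbook affine point addition.  The slope inverts (x2 - x1)
        # mod p, which doesn't exist when x1 == x2, i.e. when adding a
        # point to itself or to its negation.  Complete formulas (now
        # available for the P-curves too) have no such special case.
        if (x1 - x2) % p == 0:
            raise ZeroDivisionError("incomplete: P == Q or P == -Q")
        slope = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (slope * slope - x1 - x2) % p
        y3 = (slope * (x1 - x3) - y1) % p
        return x3, y3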
As to whether or not the NSA is a strategic adversary to some people using ECC curves, I think that's right in the mandate of the org, no? If a current standard is super hard to implement, and theoretically strong at the same time, that has to make someone happy on a red team. At least, it would make me happy, if I were on such a red team.
He does a motte-and-bailey thing with the P-curves. I don't know if it's intentional or not.
Curve25519 was a materially important engineering advance over the state of the art in P-curve implementations when it was introduced. There was a window of time within which Curve25519 foreclosed on Internet-exploitable vulnerabilities (and probably a somewhat longer period of time where it foreclosed on some embedded vulnerabilities). That window of time has pretty much closed now, but it was real at the time.
But he also does a handwavy thing about how the P-curves could have been backdoored. No practicing cryptography engineer I'm aware of takes these arguments seriously, and to buy them you have to take Bernstein's side over people like Neal Koblitz.
The P-curve backdoor argument is unserious, but the P-curve implementation stuff has enough of a solid kernel to it that he can keep both arguments alive.
See, this gets you into trouble, because Bernstein actually has a pretty batshit take on nothing-up-my-sleeve constructions (see the BADA55 paper) --- and that argument also hurts his position on Kyber, which does NUMS stuff!
At least in terms of the BADA55 paper, I think he writes in a fairly jocular style that sounds unprofessional unless you read his citations as well. You seem to object to his occasional jocularity and take it as prima facie evidence of him being “batshit”. Given that you are well known for a jocular writing style, perhaps you should extend some grace.
The slides seem like a pretty nice summary of the 2015-era SafeCurves work, which you acknowledge elsewhere on this site (this thread? They all blend together) was based on good engineering.
No, what I'm saying has only to do with the substance of his claims, which I now think you don't understand, because I laid them out straightforwardly (I might have been wrong, but I definitely wasn't making a tone argument) and you came back with this. People actually do work in this field. You can't just bluster your way through it.
This is a "challenge" with discussing Bernstein claims on Hacker News and places like it --- the threads are full of people who know two cryptographers in the whole world (Bernstein and Schneier) and axiomatically derive their claims from "whatever those two said is probably true". It's the same way you get these inane claims that Kyber was backdoored by the NSA --- by looking at the list of authors on Kyber and not recognizing a single one of them.
What do you think about Bernstein's arguments for SNTRUP being safe while Kyber isn't? Super curious. I barely follow. Maybe you've got a better grip on the controversy.
I’m not sure why you’re hung up on SNTRUP, since DJB didn’t submit it past round 2 of NISTPQC. In round 3, DJB put his full weight behind Classic McEliece.
You’ve previously argued that “cryptosystems based on ring-LWE hardness have been worked on by giants in the field since the mid-1990s” and suggested this is a point in Kyber’s favor. Well, news flash, McEliece has been worked on by giants in the field for 45 years. It shows up in NSA’s declassified internal history book, though their insights into the crypto system are still classified to this day.
That's a funny claim given NTRU goes back to 1996 and was a PQC finalist. I barely know what I'm talking about here and even I think you're bluffing your way through this. At this point you're making arguments Bernstein would presumably himself reject!
Since you've been very strident throughout this thread I'm wondering if you're going to have a response to this. Similarly, I'm curious, as a scholar of Bernstein's cryptography writing --- did the MOV attack (prominently featured on Safecurves) serve as a lovely harbinger of the failure of elliptic curve cryptography?
I tried a couple searches and I forget which calculator-speak version of "BADASS" Bernstein actually used, but the concept of the paper† is that all the NUMS-style curves are suspect because you can make combinations of mathematical constants say whatever you want them to say (in combination), and so instead you should pick curve constants based purely on engineering excellence, which nobody could ever disagree about or (looks around the room) start huge conspiracy theories over.
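Roughly, the paper's demonstration as I read it: the menu of "natural" constants and derivation recipes is large enough that a designer can search it for a seed whose derived parameters also satisfy some hidden predicate, then present the winning recipe as the one obvious choice. A toy sketch of that search (the ingredient names and predicate are made up for illustration):

    import hashlib
    from itertools import product

    # Hypothetical menu of "nothing up my sleeve" ingredients; each
    # combination on its own looks like an innocent, deterministic recipe.
    CONSTANTS = ["pi", "e", "sqrt2", "ln2", "phi", "gamma"]
    ENCODINGS = ["decimal", "hex", "continued-fraction", "bits-reversed"]
    HASHES = ["sha256", "sha384", "sha512"]

    def search(has_secret_property, max_counter=10_000):
        # Hundreds of thousands of candidate recipes, so the designer
        # can usually find one whose output also satisfies a predicate
        # only they know to test for.
        for c, enc, h, ctr in product(CONSTANTS, ENCODINGS, HASHES,
                                      range(max_counter)):
            seed = hashlib.new(h, f"{c}/{enc}/{ctr}".encode()).digest()
            if has_secret_property(seed):
                return (c, enc, h, ctr), seed
        return None

E.g. search(lambda s: s[:2] == b"\x00\x00") finds a recipe whose hash starts with two zero bytes; a real designer's predicate would be a weakness only they know how to test for.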
Well, DJB also focused on "nothing up my sleeve" design methodology for curves. The implication was that any curves that were not designed in such a way might have something nefarious going on.