Hacker News
Why we don’t generate elliptic curves every day (filippo.io)
287 points by mfrw on Oct 24, 2023 | 51 comments


The author never addresses the issue of trusting standard parameters. I agree with them on the rest, but standard parameters need evidence that they were not specially selected for reasons unknown to everyone else.


You're never going to get a satisfactory answer. The world's foremost proponent of not trusting standardized parameters wrote and repeatedly submitted a paper demonstrating a model attack that targeted curves built from mathematical constants; when you've "backdoored pi", there's really no place to go from there.

That author's subtext, as I understood it, was that curves should be selected by functional optimization; pick the curve parameters that best solve the engineering problems. It was a useful argument given their preferred curve, 25519, which does indeed have a lot of attractive engineering features! But it's not hard to see that engineering excellence is a selection principle with even more degrees of freedom than mathematical constants; moreover, you're only ever considering optimality at a fixed point in time, but the optimum changes over time --- cofactors might be less important when we lack complete addition formulae for Weierstrass curves, which makes them hard to implement in constant time, but much more important once we do have them (as is the case now).

There's really no way around just doing the computer science and cryptological work of working out curve attacks. That's the Menezes and Koblitz argument from the "Enigma" paper: that the reason to trust the NIST P-curves is that we'd know by now if they were weak (not because NSA doesn't have secret attacks, but because the mechanism by which they were generated would result in whole large classes of broken curves that academic cryptography and mathematics would have caught by now).

If you don't find Alfred Menezes and Neal Koblitz persuasive surveyors of elliptic curve security, that's fine, but my response would be that you can't really trust any parameters at all at that point. Certainly, investing trust in the cryptographers best known to the generalist programmer audience seems like a bad alternative strategy.


> cofactors might be less important when we lack complete addition formulae for Weierstrass curves, which makes them hard to implement in constant time, but much more important once we do have them (as is the case now).

If I remember correctly, part of the reason for the cofactor in 25519 is that Montgomery curves have to have a cofactor, and only those curves have the nice x-only Montgomery ladder, which basically rules out invalid curve attacks (so long as the curve is twist-secure — not all the NIST curves are, IIRC). Do the complete addition formulas for short Weierstrass curves also fix this? Invalid curve attacks are a major danger for NIST curves.

I know you can use compressed point representation to get the same benefit, but that seems very rare in NIST curve implementations (because of old, now-expired patents). The landscape of NIST prime-order curve implementations is not great. If all those libraries were actually going to switch to complete addition formulas and compressed public keys, then I might feel better about them enjoying a renaissance.
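As a concrete illustration of the validation in question, here is a minimal sketch using Go's crypto/ecdh (Go 1.20+; the package and calls are real, the wiring is invented for illustration). NewPublicKey rejects encodings that aren't valid points on the curve, which is exactly the check that invalid curve attacks rely on implementations skipping:

  package main

  import (
      "crypto/ecdh"
      "crypto/rand"
      "fmt"
  )

  func main() {
      curve := ecdh.P256()
      priv, _ := curve.GenerateKey(rand.Reader)

      // Stand-in for a peer; in real life peerBytes arrives off the wire.
      peer, _ := curve.GenerateKey(rand.Reader)
      peerBytes := peer.PublicKey().Bytes()

      // NewPublicKey fails if the encoding is not a valid point on P-256
      // (or is the point at infinity), so a tampered or off-curve point
      // never reaches the scalar multiplication below.
      peerPub, err := curve.NewPublicKey(peerBytes)
      if err != nil {
          panic(err)
      }
      shared, err := priv.ECDH(peerPub)
      if err != nil {
          panic(err)
      }
      fmt.Printf("shared secret: %x\n", shared)
  }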


I agree with you, and am reassured by cryptosystems that use 25519 and put on edge by those that use the P-curves (not because of backdoor conspiracy theories, which are silly, but because the P-curves are easy to mess up). But cryptography engineers increasingly disagree with me.


(oh, also: yes, that's my understanding too, that you're still stuck validating incoming curve points in ECDH, but I think the anti-cofactor people have a corresponding argument for signature schemes.)


Yeah, and that’s totally reasonable. I’m one of those people that thinks signatures should be used for software updates and nothing else though.


Yes, my brain sort of shuts off and buckets signature cryptography into cryptocurrency, which is where the cofactor issues crop up anyways.


Hell, even picking seeds like "the text of the NYT headline from [future date]" is suspect, since the FBI or some other shadowy governmental organization could lean on the NYT editor to construct the headline in a manner that is useful to them.


That seems solvable by picking things about the future that are a lot harder to influence, like the hashes of future bitcoin blocks, or the exact value of all stocks on the NYSE at a given moment in time.

... ok. stock one is probably easier to influence by suborning the exchange :) - all world exchanges?


It's an interesting thought experiment, but it feels a little like trying to come up with the best defense against alien invaders. We're already talking about attackers able to craft cryptographic weaknesses decades of peer review can't discover or explain. Trying to figure out how to usefully constrain their algorithm design choices does not seem like a winning strategy.

The bitcoin block hash thing is interesting, but if you're talking about an attacker with that kind of unknown but sophisticated capability, how do you know it actually thwarts their plans? Maybe all they need is any large number with high entropy or maybe the leading zeros characteristic of bitcoin block hashes is actually key to their attack? Once you imbue your adversary with unknown powers, it's by definition difficult to know what helps or hurts them.


heh. well, could drop the leading zeros :)

But, yeah, magic powers, sure, anything goes. I was just thinking there must be something more reliable than NYT headlines.


It's way harder to fake physical measurements (like weather, solar activity, etc.) published by international bodies. You could maybeeee influence the least significant digit, but even this would be hard without cooking up experimental data.

Maybe get physical measurements, stock exchange data, newspaper headlines, and bitcoin blocks, all in 6 months in the future, and XOR all of them.


Hmm. How about asking all members of the UN to produce 8 "random" bytes. Concatenate in ISO alpha order, hash the result.
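A toy sketch of that recipe in Go (the contributions here are made up; a real version would also want a commit phase so the last country to submit can't tune its bytes to everyone else's):

  package main

  import (
      "crypto/sha256"
      "fmt"
      "sort"
  )

  func main() {
      // Made-up contributions, keyed by ISO country code.
      contributions := map[string][]byte{
          "FRA": {0x3b, 0x1a, 0x9d, 0x07, 0x55, 0xe2, 0xc4, 0x10},
          "JPN": {0x88, 0x02, 0xfe, 0x41, 0x6c, 0x9a, 0x23, 0xd5},
          "USA": {0x17, 0xaa, 0x90, 0x3e, 0x0b, 0xc8, 0x64, 0xf1},
      }

      codes := make([]string, 0, len(contributions))
      for c := range contributions {
          codes = append(codes, c)
      }
      sort.Strings(codes) // "concatenate in ISO alpha order"

      h := sha256.New()
      for _, c := range codes {
          h.Write(contributions[c])
      }
      fmt.Printf("seed: %x\n", h.Sum(nil))
  }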


>> The world's foremost proponent of not trusting standardized parameters wrote and repeatedly submitted a paper demonstrating a model attack that targeted curves built from mathematical constants; when you've "backdoored pi", there's really no place to go from there.

That sounds like an argument against standard parameters. Even if we trust an algorithm, we can't trust others to choose parameters for it? The solution seems obvious - find a better way to choose (non-standard) parameters.


> That's the Menezes and Koblitz argument from the "Enigma" paper […]

PDF:

> In August 2015 the U.S. National Security Agency (NSA) released a major policy statement on the need for post-quantum cryptography (PQC). This announcement will be a great stimulus to the development, standardization, and commercialization of new quantum-safe algorithms. However, certain peculiarities in the wording and timing of the statement have puzzled many people and given rise to much speculation concerning the NSA, elliptic curve cryptography (ECC), and quantum-safe cryptography. Our purpose is to attempt to evaluate some of the theories that have been proposed.

* https://eprint.iacr.org/2015/1018.pdf

Then-coverage:

* https://blog.cryptographyengineering.com/2015/10/22/a-riddle...


Could you share the paper(s) about "backdoors" in physical constants? That doesn't make much sense at first sight.



> but because the mechanism by which they were generated would result in whole large classes of broken curves that academic cryptography and mathematics would have caught by now

Might such discoveries by academics be forcibly suppressed by government agencies?


I would argue that suppression of public individuals would be much more difficult than, say, the NSA just scooping up the best mathematicians in the field.


No.


Footnote [1] is about different strategies to select trustworthy standard parameters. I believe they are all good enough to make it so that if someone can still "backdoor" the result, they are so far ahead of the outside world that they might as well have broken the whole scheme, without influencing the parameters.

[1] https://words.filippo.io/dispatches/parameters/#fn1

(Yes, I hide too much stuff in the footnotes.)


Out of curiosity, have people considered rigidly defining requirements, but rather than picking the simplest/lowest value that satisfies the requirements, instead having a public “parameter choice ceremony” using a multiparty random number generation protocol to pick a random value that satisfies the requirements? That seems like it would make it much harder to manipulate the value by changing the requirements.
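A rough sketch of the commit-then-reveal shape such a ceremony usually takes (the names and structure here are mine, not any standardized protocol): everyone commits to a contribution, then reveals, and the seed is a hash over all contributions, so steering the output requires controlling every participant. The remaining weakness is that the last revealer can still abort if they dislike the result, which is why real ceremonies tend to layer VDFs or threshold machinery on top.

  package main

  import (
      "crypto/rand"
      "crypto/sha256"
      "fmt"
  )

  type participant struct {
      contribution [32]byte // kept secret until the reveal phase
      commitment   [32]byte // published up front
  }

  func main() {
      // Phase 1: every party publishes a hash commitment to its contribution.
      parties := make([]participant, 5)
      for i := range parties {
          rand.Read(parties[i].contribution[:])
          parties[i].commitment = sha256.Sum256(parties[i].contribution[:])
      }

      // Phase 2: once all commitments are public, contributions are revealed,
      // checked against the commitments, and hashed into the ceremony seed.
      h := sha256.New()
      for _, p := range parties {
          if sha256.Sum256(p.contribution[:]) != p.commitment {
              panic("reveal does not match commitment")
          }
          h.Write(p.contribution[:])
      }
      seed := h.Sum(nil)

      // The seed then feeds the rigid requirements, e.g. "take the first
      // counter such that H(seed || counter) yields parameters passing every
      // published check".
      fmt.Printf("ceremony seed: %x\n", seed)
  }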


This probably leads to regress (why trust the multiparty scheme?), and would only be convincing to a tiny fraction of skeptics (asymptotically, the ones who are least likely to be cranks and therefore the least noisy ones).

Meanwhile, a big part of current cryptographic design is recognizing that many of our current primitives have far larger margins than we really need, and that we can squeeze performance out of our schemes by taking a more evidence-based approach to parameter selection/round counts/etc.[1].

[1]: https://eprint.iacr.org/2019/1492


That's why the Bitcoin and Ethereum crowd trust Koblitz curves like secp256k1 instead of secp256r1, which is the curve in widespread use in browsers, operating systems, secure enclaves, etc.

The k1 curve has small, explainable parameters (its equation is y² = x³ + 7), whereas the "random" r1 curve's parameters are derived from an unexplained seed for some reason. People believe the NSA may have iterated through candidate seeds until they found a vulnerable curve, and that NIST then recommended it. Take a look:

https://cointelegraph.com/news/this-researcher-says-bitcoins...

“(1) The Koblitz curve is specially designed for faster scalar multiplications. Hence the (signing, verifying and key generation) operations on Secp256k1 are faster than those on Secp256r1. (2) Although the Secp256r1 curve was announced to be randomly selected, there could still exist some suspicion that some backdoor might be secretly set up in the curve parameters. In contrast, the Koblitz curve parameters are mathematically determined, and there is little possibility for setting such a backdoor.”

However, given the prevalence of the r1 curve, Ethereum devs might want to add a precompiled contract so that people can sign transactions with r1 keys without trusting web sites and wallet software:

https://ethereum-magicians.org/t/eip-7212-precompiled-for-se...

https://security.stackexchange.com/questions/256088/is-the-e...


Sounds like one has to prove "a negative" https://youtu.be/n-kXok4tznU?t=26


Whereas parameters generated on the fly don't?


This is very timely for me. Only two weeks ago, I learned that the bouncycastle Java FTP client library refuses to talk to servers that run with custom dh_params, and I was wondering why, because my intuition told me that custom parameters should be better than parameters shared between everyone.

And now, only two weeks later, I get a very good explanation as to why: as often in crypto, intuition was wrong.


There can also sometimes be more specialized reasons than the general considerations given in this article. I forget now whether it was DH or RSA or both, but there was a toy attack where interacting with a peer with super-bad, maliciously chosen parameters could also create an oracle for attacking the same party's simultaneous interactions with a third party, if the same secrets are used in both conversations.

Sorry for the lack of details, I just totally forget them. I know I did something like this in a CTF, though.

A vaguely related example could be that nonce reuse in DSA/ECDSA signatures can leak the signing key, so a DSA signer must not allow the other side to choose the nonce. That isn't conventionally seen as a "parameter", but it's easy to imagine that an implementer wouldn't automatically assume that it's actively dangerous to let the other party choose it.
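To make the nonce-reuse danger concrete, here's the textbook key recovery, sketched with toy numbers rather than a real curve (the algebra is standard; none of this is a particular library's API). With two signatures s1 = k^-1(m1 + r*d) and s2 = k^-1(m2 + r*d) mod n sharing the nonce k, the attacker gets k = (m1 - m2)/(s1 - s2) and then d = (s1*k - m1)/r:

  package main

  import (
      "fmt"
      "math/big"
  )

  // recoverKey returns the private key d given two ECDSA-style signatures
  // (r, s1) over m1 and (r, s2) over m2 that reused the same nonce, working
  // modulo the group order n.
  func recoverKey(n, r, s1, s2, m1, m2 *big.Int) *big.Int {
      num := new(big.Int).Mod(new(big.Int).Sub(m1, m2), n)
      den := new(big.Int).Mod(new(big.Int).Sub(s1, s2), n)
      k := new(big.Int).Mod(new(big.Int).Mul(num, new(big.Int).ModInverse(den, n)), n)

      d := new(big.Int).Mod(new(big.Int).Sub(new(big.Int).Mul(s1, k), m1), n)
      return d.Mod(d.Mul(d, new(big.Int).ModInverse(r, n)), n)
  }

  func main() {
      // Toy parameters: a small prime standing in for the group order, and an
      // arbitrary r (which would normally be the x-coordinate of k*G).
      n := big.NewInt(7919)
      d := big.NewInt(1234) // the "private key" we will recover
      k := big.NewInt(555)  // the reused nonce
      r := big.NewInt(4321)
      m1, m2 := big.NewInt(42), big.NewInt(1001)

      kInv := new(big.Int).ModInverse(k, n)
      sig := func(m *big.Int) *big.Int { // s = k^-1 * (m + r*d) mod n
          s := new(big.Int).Mul(r, d)
          s.Add(s, m)
          s.Mul(s, kInv)
          return s.Mod(s, n)
      }

      fmt.Println("recovered d:", recoverKey(n, r, sig(m1), sig(m2), m1, m2)) // 1234
  }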


Famously, small subgroup attacks in DH; I can't think of non-silly RSA parameter negotiation bugs (I mean, E=1 with SaltStack, but that doesn't really count). Invalid point bugs in ECDH and zero-mod-p in PAKEs are similar in spirit, but don't really derive from parameter negotiation.
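For the DH case, the standard countermeasure is to check that the peer's public value lands in the prime-order subgroup before using it. A sketch for a safe-prime group p = 2q + 1 (the toy group below is for illustration only; a real deployment would use something like an RFC 3526 group):

  package main

  import (
      "fmt"
      "math/big"
  )

  // checkSubgroup rejects DH public values outside the prime-order subgroup
  // of a safe-prime group p = 2q + 1; this is the classic defense against
  // small subgroup confinement attacks.
  func checkSubgroup(y, p, q *big.Int) error {
      one := big.NewInt(1)
      pMinus1 := new(big.Int).Sub(p, one)
      if y.Cmp(one) <= 0 || y.Cmp(pMinus1) >= 0 {
          return fmt.Errorf("public value out of range")
      }
      if new(big.Int).Exp(y, q, p).Cmp(one) != 0 {
          return fmt.Errorf("public value not in the order-q subgroup")
      }
      return nil
  }

  func main() {
      // Toy safe-prime group for illustration only: p = 23 = 2*11 + 1, q = 11.
      p, q := big.NewInt(23), big.NewInt(11)
      fmt.Println(checkSubgroup(big.NewInt(4), p, q))  // in the subgroup: ok (nil)
      fmt.Println(checkSubgroup(big.NewInt(22), p, q)) // p-1 has order 2: rejected
  }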


Curious to hear someone with more expertise than me opine on the validity of brute forcing standardized parameters, which doesn't seem to be discussed in the article.

Are standardized elliptic curves still susceptible to being "individually" broken in ECDH in practice? Or are there other subsequent randomized mechanisms to accomplish forward secrecy / per-session resistance?

I understand the article's point against diversity, but there seems to be a gap of the "G20 nation states have orders of magnitude more resources than everyone else" variety. Even if something is ridiculously computationally intensive for everyone, it can be feasible given Manhattan Project level resources, if the payoff is worth it.


Brute force? I don't think so. Orders of magnitude in brute force are sometimes easier to look at as raw numbers. If we take the sibling comment's estimate of the global Bitcoin network's total computation of 2^90 per year and multiply it arbitrarily by three orders of magnitude, we get:

  2^90        =            1237940039285380000000000000
  2^90 * 1000 =         1237940039285380000000000000000
  2^128       = 340282366920938000000000000000000000000
Even G20 nation states would still be many orders of magnitude short. Much (much, much, much) cheaper to spend your money and energy on an actual Manhattan Project. Then just show up with a nuclear warhead and ask politely for the key.


… or another stuxnet even.



Every P-256 or X25519 ECDH operation uses a new ephemeral single-use key, and breaking one is about as hard as brute-forcing AES-128. The global Bitcoin hash rate is something like 2^90 hashes per year, and people think the NSA doesn't have more hardware than all the Bitcoin miners put together. Even if they could break 128-bit security, they couldn't break it for every round trip in a Signal chat. I don't know what they would even use the capability for. I don't think they need brute force to forge a Microsoft or Apple signature - easier to steal the key. But it would be something high-value that would be broadly useful, not a single message.


and more importantly, any operation that needs that much compute power can be more efficiently solved by passing around a couple billion dollars of bribes


Or blackmailing the right person.


You could have one side generate new keys for every session. That's how WebPush encryption works, if you're looking for an implementation. The client has a static public key stored on the server; the server generates a per-message ECDH key and embeds the public key in the message.
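A bare-bones sketch of that shape with Go's crypto/ecdh (the package is real; the WebPush specifics - HKDF info strings, content encoding, auth secret - are all omitted here): the client keeps a long-lived key, and the sender makes a fresh key per message and ships its public half alongside the ciphertext.

  package main

  import (
      "crypto/ecdh"
      "crypto/rand"
      "fmt"
  )

  func main() {
      curve := ecdh.P256()

      // Long-lived client key; its public half is registered with the server.
      clientPriv, _ := curve.GenerateKey(rand.Reader)
      clientPub := clientPriv.PublicKey()

      // Per-message ephemeral key on the sending side.
      ephPriv, _ := curve.GenerateKey(rand.Reader)
      serverShared, _ := ephPriv.ECDH(clientPub)

      // The ephemeral public key travels with the message; the client
      // recomputes the same shared secret from it.
      clientShared, _ := clientPriv.ECDH(ephPriv.PublicKey())

      fmt.Printf("server: %x\nclient: %x\n", serverShared, clientShared)
      // Both secrets match; WebPush then runs them through HKDF to derive
      // the actual content-encryption key, which is omitted here.
  }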


TIL that there are known attacks on primality testing routines, where the test will accept as prime a composite number. Vulnerable libraries at the time of the paper (2018) included OpenSSL, GMP, Apple, JSBN, Cryptlib, LibTomMath, LibTomCrypt, WolfSSL, Bouncy Castle and Botan.


It's a very well-made paper, absolutely recommended if you're into cryptography and number theory.

Prime and Prejudice: Primality Testing Under Adversarial Conditions by Martin R. Albrecht, Jake Massimo, Kenneth G. Paterson, and Juraj Somorovsky.

https://eprint.iacr.org/2018/749
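One practical takeaway, if I read the paper right, is that fixed-base Miller-Rabin is what the adversarial composites target; tests that pick random bases and add a Lucas/Baillie-PSW step are much harder to fool. In Go, for example, math/big's ProbablyPrime already does both (since Go 1.8), so a sketch of checking an untrusted candidate is just:

  package main

  import (
      "fmt"
      "math/big"
  )

  func main() {
      // An untrusted candidate; here the (genuinely prime) Mersenne prime 2^127 - 1.
      candidate, _ := new(big.Int).SetString("170141183460469231731687303715884105727", 10)

      // 64 Miller-Rabin rounds with pseudorandomly chosen bases, plus a
      // Baillie-PSW test; see the math/big docs for the exact guarantees.
      fmt.Println(candidate.ProbablyPrime(64)) // true
  }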


The author proposes a corollary to Kerckhoffs's principle:

> A cryptosystem should be secure even if all the parameters, except the key, are shared across every user.

It seems that if the key counts as a parameter, then surely nonces also count, and we certainly do not want nonces to be reused. This may be why this principle is not best practice. I am quite confident that the author is well aware of this, and I agree with the general thrust of what they're saying... Perhaps another way of saying it is that where parameters can be held constant, they should be. Or to quote Daniel J. Bernstein[1] quoting Mark Twain:

> Behold, the fool saith, "Put not all thine eggs in the one basket"—which is but a manner of saying, "Scatter your money and your attention;" but the wise man saith, "Put all your eggs in the one basket and—WATCH THAT BASKET."

[1]: https://blog.cr.yp.to/20140205-entropy.html


[removed for irrelevance]


I actually think he means everything is fixed except the key, salt, nonce, etc. Not just shared as in publicly known, shared as in the same value is used by everyone.


You're right of course, I read TFA too quickly.


Are there cryptographers with dissenting opinions to this article, or is what the author writes about generally recognised best practice?

I ask because I remember an article a while back where this same author had very confidently got some details wrong about CloudFlare and I think the CEO or CTO or someone like that had to step in and correct him.


Are you sure you're thinking of the right author? Filippo used to work at Cloudflare so I can't imagine he got the technical details about it too wrong.


Yes, definitely. It was this comment chain on an HN post: https://news.ycombinator.com/item?id=33573477


It's easier to backdoor a known key. Tell people they have encryption, and they will trust you. Very few will be able to verify, even if they care to. When they do, the algorithm checks out, but it doesn't matter if you already have the answer.


yay! another filippo post! I always appreciate their writing <3

now the silly part!

[wildly gesticulating at a particle board filled with photos and strings]

"that's just what the nice golang cryptographers want you to think!"


I've wondered about this, but I don't trust myself to figure out the answer. Even though I have a background in math and I've done some cryptography related work, I know enough about the crypto space to know that I don't know enough to make my own decisions. The featured article by Filippo Valsorda makes a lot of sense.

On the other hand, if the NSA really could find backdoors into elliptic curves (perhaps with a great deal of work), they would be motivated to gaslight the rest of us about elliptic curves by creating articles like TFA.


If that's really a concern of yours, the only logical responses are:

(1) Don't do any cryptography

(2) Seriously study cryptography so that you can attempt to analyze conclusions axiomatically.

Any other answer requires you to trust people who might be agents of the NSA.


I guess there is also the approach, not favored by cryptography engineers, of deciding that for some projects you don't care about efficiency and securely combining "too many" primitives. Like stream chunks for bulk encryption, but where each chunk uses both AES-CTR-256 and ChaCha20, both with extended nonces, and HMAC-SHA256 for the MAC. The asymmetric part combines X448, P-521, Kyber-1024 and NTRUPrime-whatever, the way normal people combine X25519 with Kyber-512. If people I trust would build "tinfoilhatcrypto" this way, I would use it for some things. The environmental impact of using this for small amounts of data would probably be negligible.
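For what it's worth, the usual way to "securely combine" the asymmetric half is to feed every shared secret through one KDF, so that, informally, the session key falls only if every component falls. A toy sketch of that combiner shape, with two ECDH groups standing in for the longer list (golang.org/x/crypto/hkdf is a real package; the label string and everything else here is illustrative):

  package main

  import (
      "crypto/ecdh"
      "crypto/rand"
      "crypto/sha512"
      "fmt"
      "io"

      "golang.org/x/crypto/hkdf"
  )

  // combine derives one session key from several independently computed
  // shared secrets; informally, breaking the result should require breaking
  // every input.
  func combine(secrets ...[]byte) []byte {
      var ikm []byte
      for _, s := range secrets {
          ikm = append(ikm, s...)
      }
      key := make([]byte, 64)
      kdf := hkdf.New(sha512.New, ikm, nil, []byte("tinfoilhatcrypto v0 session key"))
      if _, err := io.ReadFull(kdf, key); err != nil {
          panic(err)
      }
      return key
  }

  func main() {
      // Two groups standing in for X448 + P-521 + Kyber-1024 + ...
      aX, _ := ecdh.X25519().GenerateKey(rand.Reader)
      bX, _ := ecdh.X25519().GenerateKey(rand.Reader)
      aP, _ := ecdh.P521().GenerateKey(rand.Reader)
      bP, _ := ecdh.P521().GenerateKey(rand.Reader)

      s1, _ := aX.ECDH(bX.PublicKey())
      s2, _ := aP.ECDH(bP.PublicKey())
      fmt.Printf("%x\n", combine(s1, s2))
  }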



