
The thesis of this paper seems to be that there aren't many alternatives to 25519. But anybody who watched the fiasco of the IRTF trying to standardize a recommendation on 25519 knows that's not true. Browser vendors were sold on DJB's curve and asked IETF to add it to TLS; given that simple charter, people came out of the freaking woodwork with alternatives.

The "randomness derived from lotteries" property seems like a gimmick. Generating plausible random seeds isn't the hard problem with generating new curves. Rather, the problem is coming up with curves that are performant on common hardware without any patent-encumbered techniques.

It's also not clear to me how the $1MM curve is a meaningful alternative to 25519. Almost the entire paper is given over to considering how to generate random parameters. Having accomplished that task, the paper simply generates a random Edwards curve, using its lottery-based generator to find a random prime, a random Edwards coefficient d, and a base point, filtered against the SafeCurves requirements. So they end up with (in effect) a randomized version of Bernstein's curve. If 25519 somehow falls, why wouldn't this curve fall with it?
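To make that concrete, the generation procedure is roughly: hash public lottery draws into a PRNG seed, then walk random candidates until one passes the SafeCurves-style filters. A rough Python sketch of that shape (my own, not the paper's actual algorithm; passes_safecurves_checks is a stand-in for the real criteria, which require point counting):

    import hashlib
    import random

    def seed_from_lottery(draws):
        # Derive a deterministic PRNG seed from published lottery results.
        material = "|".join(str(d) for d in draws).encode()
        return int.from_bytes(hashlib.sha512(material).digest(), "big")

    def passes_safecurves_checks(p, d):
        # Placeholder: prime group order, twist security, embedding degree,
        # CM discriminant, etc.
        raise NotImplementedError

    def random_edwards_d(draws, p):
        rng = random.Random(seed_from_lottery(draws))
        while True:
            d = rng.randrange(2, p)      # candidate Edwards coefficient
            if passes_safecurves_checks(p, d):
                return d

The elaborate part is the randomness source; the filtering step is the same kind of check a 25519-style curve already passes.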

Compare this paper to the Curve25519 paper:

https://cr.yp.to/ecdh/curve25519-20060209.pdf

... or to FourQ:

https://eprint.iacr.org/2015/565.pdf

Also: the swipe at Brainpool seems disingenuous. Zero is the number of people who believe that Brainpool's mathematical constants were arranged specifically to create a backdoor. The point of BADA55 is that Brainpool's use of mathematical constants as seeds isn't dispositive, and that the performance-targeting parameters of 25519 are equally credible.



I don't think that's the thesis. I think the thesis is that there could be a "BADA55 construction" even in Curve25519. Each individual piece is justifiable on its own: 2^255-19, for instance, is justifiable because 19 is the smallest offset that makes the number prime. But the designers knew the entire curve before trying to make it popular. They could equally have used 2^255+95, justified as the smallest prime larger than 2^255, with some story about why it needed to be larger than 2^255. With enough arbitrary decisions, a "one-in-a-million" vulnerability is possible. The premise of this curve is that there are no arbitrary decisions, since you commit to supporting the curve before even knowing what it is.
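Those "smallest offset" claims are easy to check for yourself. A small sketch (assumes sympy; per the claims above it should print 19 and 95):

    from sympy import isprime

    def smallest_offset(base, sign):
        # Smallest c > 0 such that base + sign*c is prime.
        c = 1
        while not isprime(base + sign * c):
            c += 1
        return c

    print(smallest_offset(2**255, -1))  # 19 -> 2^255 - 19, Curve25519's prime
    print(smallest_offset(2**255, +1))  # 95 -> 2^255 + 95, the hypothetical alternative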

I think the analogy is someone shuffling a deck of cards and taking a peek at the top card, then betting someone $20 that it's the ace of spades. The process is justifiable, because you don't have any control over what the top card happens to be. It's just that you know what card it is, and wouldn't make the bet if you didn't know it was already the ace of spades. Similarly, Curve25519 might have been constructed and happened to have the "one-in-a-million" vulnerability. How many other "Curve25519s" are there that we didn't hear about? Imagine instead that the procedure was to shuffle the cards only after you commit to the bet.

I really doubt there's anything wrong with 25519. If anything, I feel that if djb et al. had known about some vulnerability that 25519 happened to have before releasing it, they would have added that weakness as a disqualifying criterion and picked a new set of parameters.


I'm pretty sure the authors aren't suggesting that Curve25519 has been tampered with, because the FAQ for this paper suggests (a) that they continue to recommend Curve25519, and (b) that they are proposing this curve because they're concerned that there aren't any good alternatives to Curve25519 with trustworthy seeds.

Curve25519 follows roughly the same generation procedure as Microsoft's NUMS: it uses minimal parameters that satisfy a performance/security goal. NUMS starts from a security level, selects the smallest prime that satisfies that level, and then selects the smallest Edwards 'd' that passes security criteria. Curve25519 selects a prime of sufficient security as close as possible to a power of 2 (making it sparse and thus fast to do math on in software), and then selects a minimal Montgomery A given security criteria.
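For the Montgomery-A half of that procedure, the shape of the search is roughly the sketch below. The security predicate is a stand-in; the real check needs point counting for the curve and its twist, which is far too heavy to inline here. The A ≡ 2 (mod 4) condition and the final value 486662 are as given in djb's Curve25519 paper.

    p = 2**255 - 19

    def curve_and_twist_secure(A, p):
        # Placeholder: y^2 = x^3 + A*x^2 + x over GF(p) must have order
        # 8*prime and its quadratic twist order 4*prime, with both primes
        # large enough (at least 2^252 in djb's criteria).
        raise NotImplementedError

    def smallest_secure_A(p):
        A = 6                   # smallest admissible A with A = 2 (mod 4), A != 2
        while not curve_and_twist_secure(A, p):
            A += 4
        return A                # for p = 2^255 - 19 this search ends at A = 486662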

$1MM and 25519 represent two different schools of thought about how to generate curves. $1MM says, generate random unstructured parameters, and come up with a randomness procedure that is difficult to impeach. 25519 says generate minimal parameters that achieve particular performance goals. Both schools of thought address the concern that the curve generation procedure might be untrustworthy, but in different ways: the former by somehow proving randomness, the latter by removing degrees of freedom.

25519's school of thought has pretty much won the Internet.


I really appreciate the level-headed discussion in this thread so far, especially the comment I'm replying to.

It's a stark contrast to the CFRG mailing list. (At least, so far, no one has tried to derail discussion here with "hey check out my custom cipher it's soooo secure but you need to compress the data before encrypting it or else you can observe a repeated structure out of it".)

I like 25519's school of thought. If you use the smallest possible value for a given performance/security goal, there's less room for conspiracy theory (provided the person making the theory understands what's even going on).


Indeed. One might be biased, given one was quite... um... vocal in the whole process! I personally feel Curve25519 and Ed448-Goldilocks were very thoroughly discussed and are absolutely fine, and they are the final CFRG recommendations. I am quite certain of that, and I do believe we have proved it just about as well as anyone possibly can.

(I'd like to thank everyone who suffered through the whole thing. I am just a participant there, I do not speak for anyone at all except myself, and hold no special anything.)

Both of the CFRG curves are rigid 'safe' curves, with (unlike the NIST curves) solid criteria given for each and every aspect of their definition, all of which were argued in the open, and fully-reproducible parameter generation algorithms are included in RFC7748 - https://datatracker.ietf.org/doc/rfc7748/

CFRG made the decision to select rigid 'safe' curves pretty early on, so that everyone could see the exact reasons why we picked what we did at every stage, in the open, and that therefore absolutely no shenanigans were afoot. Specifically, we ended up picking one curve over a near-Mersenne prime (2^255-19), and one curve over a trinomial golden-ratio Solinas prime (2^448-2^224-1) - because they're about the right sizes, and rigidly structured in a way that makes them almost uniquely fast in software. (We voted on specific sizes, and had supporting reasons even for that.) Both curves are also, very handily - thanks djb & Mike Hamburg! - already present in the existing literature, and indeed quite a few examples of X25519 (DH) and Ed25519 (sig) deployment in particular already exist.
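The "almost uniquely fast in software" part comes largely from modular reduction: for a prime like 2^255 - 19, reducing a double-width product needs no division, because 2^255 ≡ 19 (mod p), so the high half just folds back in multiplied by 19. A sketch of the integer-level idea (real implementations do this on packed limbs, in constant time):

    P = 2**255 - 19

    def reduce_mod_p(x):
        # x may be up to ~510 bits, e.g. the result of a field multiplication.
        while x >= 2**255:
            hi, lo = x >> 255, x & (2**255 - 1)
            x = lo + 19 * hi            # valid since 2^255 = 19 (mod P)
        return x if x < P else x - P

    assert reduce_mod_p((P - 1) * (P - 2)) == ((P - 1) * (P - 2)) % P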

During the middle of the process, some people (some of whom were hardware developers involved in the selection of the "Brainpool" curves, a similar set of random prime field curves) showed up pushing for a new set of curves with random, unstructured prime fields.

CFRG didn't go that way, and we discussed that decision and the reasons for it thoroughly. Mostly because the performance of curves over random, unstructured primes absolutely sucks in software (I'm paraphrasing here, but we are talking an order of magnitude difference, at least), but also because, given we were even arguing about endianness for God's sake, we'd probably never have been able to agree on a sound method for selecting random parameters that would hold up as unbiased in the face of future scrutiny, let alone one that we were positive we could convince sceptical third parties about. It would appear this curve comes from the hardware developers who really wanted new random curves in any case, and who have tried to do just that anyway. Good luck to them there.

The only possible advantage really with a random prime field is if some big problem exists with using elliptic curves over prime fields with a sparse bit pattern that doesn't also apply to random prime fields. We argued about that for a while (you may be detecting a pattern here...!), and the conclusion was that we simply don't know of any good underlying reason that would happen that wouldn't also pose a huge threat to random prime fields, or elliptic curve-based cryptography in general (e.g. big practical quantum computers - in which case, arguing about which elliptic curve to use would be pointless and maybe we just ought to burn our worldly possessions and go live in caves. I'm paraphrasing a bit here! <g>).

The only thing that popped up was that if you have old crypto hardware that you are trying to reheat and present as new crypto hardware with go-faster stripes, and you are trying to get better performance from it by abusing its pre-existing RSA multiplier to do your ECC maths as well (like, for example, a certain hardware developer, ahem, might do), the side-channel blinding technique there would not work properly, because it's only designed to work with unstructured prime fields of the sort you usually see in RSA, rather than a field whose prime has long runs of consecutive 0s or 1s. So, um, friendly reminder, as I said at the time: if you have an RSA multiplier, please don't abuse it to implement your elliptic curves. (Note: this would also present an issue if you did it with the NIST curves, whose primes are also structured that way.)

All the software out there that I've seen that implements Curve25519 appears to be at least timing side-channel resistant (this also being a goal, and why we went with Montgomery/Edwards forms specifically to simplify this, because short Weierstrass form is a bloody nightmare by comparison).
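For the curious, the ladder structure that makes this tractable looks roughly like the sketch below: every scalar bit triggers the same swap/step/swap sequence, so the pattern of field operations is independent of the secret. (This only shows the shape: Python integers aren't constant-time, identity and ladder_step are placeholders, and the actual x-only formulas live inside ladder_step.)

    def cswap(bit, a, b):
        # In real code this is a branch-free, constant-time conditional swap.
        return (b, a) if bit else (a, b)

    def montgomery_ladder(scalar, base, identity, ladder_step, bits=255):
        # Invariant: r1 - r0 == base, which is what allows x-only
        # differential addition.
        r0, r1 = identity, base
        for i in reversed(range(bits)):
            bit = (scalar >> i) & 1
            r0, r1 = cswap(bit, r0, r1)
            r0, r1 = ladder_step(r0, r1)   # -> (2*r0, r0 + r1), placeholder
            r0, r1 = cswap(bit, r0, r1)
        return r0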

If you are tasked with designing new crypto hardware today, I do feel you'd be better placed doing what a friend of mine suggested: instead of bolting new bits on decades-old designs like most people seem to, actually go and design new open hardware (RISC-V?) and try to restore some verifiability in that. Put all the crypto in open-source, verifiable software which the user loads and verifies (a friend's prototype design just uploads the whole executable EEPROM to the host for comparison with a known-good image whenever you connect it, and zeroises the data EEPROM in hardware whenever the executable EEPROM is written by the host) - and then ship the hardware absolutely blank.

Then you hopefully have one less problem, as you don't really have dedicated crypto hardware as much as a solid general-purpose microcontroller with side-channel protection. (I wonder what impact that might have on export controls, if any?) In any case, you probably also have enough problems to worry about in your supply chain already without risking becoming the next Gemalto... or, God forbid should things not go well, Apple. Just a thought there. I know several people are working on open hardware now; I've heard of a couple of dedicated hardware implementations of 25519 floating around, too.

I don't know of anyone who actually wants to use this so-called "million dollar curve". CFRG aren't recommending it. I'm certainly not. Not because it's bad; just because - sorry, but as I said at the time - it's kind of... pointless? If you're using the Brainpool curves already, there's not really any reason you couldn't use this. But unless a) you believe the Brainpool curves are deeply flawed somehow, enough to avoid, but these somehow aren't; and b) you believe there could be a huge breakthrough in number theory that affects structured primes but not unstructured primes or elliptic curves in general; and also c) you believe nobody's going to show up with a quantum computer in the time period you're worried about... I just don't - personally! - see why you would bother switching. Clearly these people do. I suppose we simply disagree there, which wouldn't be the first time.


> we were even arguing about endianness for God's sake

Why is that? Were you scared that developers might have made a mistake while typing constants in their implementations?


Because there is a little-known IETF/IRTF bylaw that no important standards debate can take place without relitigating endianness. You might think I am joking, but I am not. It's Postel's Second Law.


Quite! Some like big endian because of tradition ('network byte order'); some like little endian because of simplicity (it's what current CPUs do); mostly it's actually just classic bikeshedding [1].

Since X25519 and Ed25519 had already been deployed by a fair few people, all of whom used little-endian, recommending the opposite without a compelling reason would have been a bit too silly. (Upstream working groups are encouraged to consider the field elements as opaque octet strings, to try to avoid arguing the exact same point again.)

[1] http://bikeshed.com/



