
Is there a good random library that's not the giant ball of death that's OpenSSL?


Depends on why you want to use a random number generating library. What are your requirements, and specifically which of them aren't satisfied by the system's CSPRNG? Once you have that defined, you can start looking at which algorithms you want, and from there which libraries implement them.

Edit: or to put it another way, first define what it is that Linux's getrandom() and /dev/urandom don't provide you.
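
As a baseline, the system interface needs no library at all. In Python, for instance, os.urandom and the secrets module are thin wrappers over the kernel CSPRNG (getrandom()//dev/urandom on modern Linux); a minimal sketch:

```python
import os
import secrets

key = os.urandom(32)           # 32 bytes straight from the kernel CSPRNG
token = secrets.token_hex(16)  # convenience wrapper over the same source

print(len(key), len(token))    # prints: 32 32
```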


This talk about C++ and rand being harmful is quite educational: https://channel9.msdn.com/Events/GoingNative/2013/rand-Consi...

What it points out is that even a good random number generator can be used incorrectly, and without the right tools your efforts to produce truly random numbers are doomed from the start.

C++ has an embarrassing wealth of random number generators. Ruby's core library has almost nothing that measures up, which seems like a huge oversight.

I like that C++ has a generator for many different use cases; they all have a reason for being there. Ruby, by contrast, has a single one with unknown properties. Porting over what C++ has and building a proper Random library for Ruby would make a lot of sense.


None of those C++ RNGs are suitable for cryptography. The answer for C++ CSPRNGs is, like in every other language, to use /dev/urandom. Multiple CSPRNGs just mean multiple single points of failure.


Not every random number generator has to be cryptographically secure. Sometimes they just need to be properly random.


ISAAC (http://burtleburtle.net/bob/rand/isaacafa.html) for a CSPRNG. But what is your use case for something non-standard? Just use /dev/urandom.


ISAAC is a DRBG. It's like 1/3rd of a complete CSPRNG, and it's the easiest third to build. To use ISAAC securely on Linux, if for some reason you actually wanted to do that, you'd swap out the hash DRBG inside the kernel LRNG with ISAAC, and then continue using /dev/urandom.

Simply replacing the OpenSSL RNG or /dev/urandom in userland code with the ISAAC routines is likely to blow your app up.


How about backing the equivalent of /dev/urandom for a unikernel OS?


No, no, no, don't use ISAAC.


Do you have a citation you can provide? I don't know of any research that has shown a significant weakness in ISAAC.


Why would you use a PRNG with unknown cryptographic properties, not designed by a cryptographer, as opposed to one of NIST's DRBGs or a good stream cipher such as ChaCha?

Weakness: https://eprint.iacr.org/2006/438 — "huge subsets of internal states which induce a strongly non-uniform distribution in the 8192 first bits produced"

Finally, why is a deterministic PRNG suggested as a replacement for OpenSSL's random number generator? In general, advising people to write their own userspace PRNG replacement for OpenSSL is not good advice, because many people are not competent enough to do it.


> as opposed to one of the NIST's DRBG

I certainly wouldn't go anywhere near another NIST DRBG..

> https://eprint.iacr.org/2006/438

From my brief reading, Aumasson's paper uses a different seeding routine from the one in the reference C implementation, which is what allows the weaker states to be produced; indeed, this is mentioned on the author's website.

> Finally, why is deterministic PRNG suggested as a replacement for OpenSSL's random number generator? In general, the advice to write your own userspace PRNG replacement for OpenSSL is not a good advice, because many people are not competent enough to do it.

If you read my above post I clearly do not suggest this.


> I certainly wouldn't go anywhere near another NIST DRBG..

OK, this is an instant red-flag to me to get out of the conversation.


> OK, this is an instant red-flag to me to get out of the conversation.

Perhaps you are unfamiliar with this? https://en.wikipedia.org/wiki/Dual_EC_DRBG


I'm familiar with it. CTR DRBG, Hash DRBG, HMAC DRBG are all fairly solid designs.


I've used Mersenne Twister[0] in the past.

[0] http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/emt.html

If you're talking about an RNG library for cryptographic purposes, then it depends on your use case.


It does not depend on your use case. The answer, in all cases, is just to use /dev/urandom.


I had a rather specialized case where it was the pragmatic choice (note, not technically required): running a lottery with potentially litigious losers.

If you used a CSPRNG with a seed space smaller than the set of possible lottery outcomes, losers could argue (misleadingly, since we still couldn't feasibly bias the result) that not all outcomes were equally probable and try to get the results thrown out. That is, the fact that there are widespread misconceptions about /dev/random can very rarely be a reason to use it :P
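
To put rough numbers on that objection (the 52-card deck here is my illustration, not the actual lottery format):

```python
import math

# If possible outcomes outnumber possible seeds, some outcomes can never
# be produced, which is exactly the argument a litigious loser could raise.
orderings = math.factorial(52)      # possible shuffles of a deck
bits_needed = math.log2(orderings)  # ≈ 225.6 bits to index every shuffle
seed_space = 2 ** 128               # outcomes reachable from a 128-bit seed

print(orderings > seed_space)       # prints: True
```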

However, I agree that the rule is that you should just use /dev/urandom.


Why not use a hardware rng in that case? Seems a lot safer if you have to deal with litigious people.


We did, in a way. One of the sources used was random.org (uses radio receivers tuned to static from atmospheric noise: hardware RNG as a service). I also had less than 3 weeks to take it from proposal to production.

Combining two independent sources obtained by different people and using a cryptographic commitment scheme ensured that 1) no one person could fix the results or make it nonrandom (protection against Eddie Tipton-style attacks), 2) if at least one of the independent sources was random, the result would be.
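
A minimal sketch of that combine-and-commit construction (the hash choice and all names here are my assumptions, not the actual system):

```python
import hashlib
import os

def commit(value: bytes, nonce: bytes) -> bytes:
    """Publish this hash before the draw; reveal value and nonce after."""
    return hashlib.sha256(nonce + value).digest()

def combine(source_a: bytes, source_b: bytes) -> bytes:
    """Hash the concatenation; the result is random if either input is."""
    return hashlib.sha256(source_a + source_b).digest()

# Each party commits before seeing the other's contribution, so neither
# can choose its value after the fact to steer the combined result.
a, b = os.urandom(32), os.urandom(32)
nonce_a, nonce_b = os.urandom(16), os.urandom(16)
c_a, c_b = commit(a, nonce_a), commit(b, nonce_b)

# After both reveal, anyone can check the commitments and recompute the draw.
assert c_a == commit(a, nonce_a) and c_b == commit(b, nonce_b)
seed = combine(a, b)
```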


Then what CSPRNG do you use? Are there any that take seeds larger than 256 bits?


Anything that reseeds during operation can qualify. In fact, if the CSPRNG's internal state isn't large enough, you need to periodically reseed or face the same objection.

But a CSPRNG which you need to explicitly seed with as many random bits as you will output isn't providing much value (it's simply whitening), since generating the seed is the same problem you had before adding the CSPRNG. So you end up looking at a TRNG.
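
A toy sketch of the reseed-during-operation idea (SHAKE-256 as the output function and the reseed interval are arbitrary choices of mine, not a vetted design):

```python
import hashlib
import os

class ReseedingDRBG:
    """Toy sketch only: pulls fresh seed material from the OS after a
    fixed byte budget, as described above."""

    RESEED_INTERVAL = 1 << 16  # reseed after ~64 KiB of output (arbitrary)

    def __init__(self):
        self._reseed()

    def _reseed(self):
        self._seed = os.urandom(48)  # fresh entropy from the OS
        self._counter = 0
        self._budget = self.RESEED_INTERVAL

    def generate(self, n: int) -> bytes:
        if self._budget < n:
            self._reseed()           # periodic reseed during operation
        self._counter += 1
        self._budget -= n
        material = self._seed + self._counter.to_bytes(8, "big")
        return hashlib.shake_256(material).digest(n)

drbg = ReseedingDRBG()
out = drbg.generate(32)
```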


Are there not cases where blocking if there is insufficient entropy is the correct thing to do? Particularly if we're talking about important random numbers like long-lived private keys.



It must be possible for a computer to simply not have an adequate supply of randomness though, no? Sure, urandom will use hardware sources of randomness if they're available - but what if they're not?


No, this is a misconception. Once the LRNG is seeded, it can generate a more or less unbounded amount of high-quality random bytes from that seed. Entropy can't be "depleted".


Then how can a 4096-bit key be meaningfully safer than a 1024-bit key? Couldn't you just use the 1024-bit key to seed an RNG, generate a 4096-bit key from that, and have 4096-bit safety?

(Also what if the linux RNG isn't seeded e.g. in a VM scenario? Or is the answer just "don't do that"?)


mt19937 (or preferably SFMT¹) is a great library, but it is not fine for cryptographic purposes no matter how well you seed it: 624 consecutive outputs are enough to reconstruct its internal state. And you still have to seed it somehow, which is where /dev/(u)random comes in. So it really doesn't solve anything.

--

¹ http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.h...


Have you seen the php mt_rand cracker?

http://www.openwall.com/php_mt_seed/


That one needs manual seeding too, which may lead to more issues.


As others have said it depends on your needs[6] and whether or not it has to be a CSPRNG (cryptographically secure). Since you mentioned OpenSSL I'll assume that in this context we are talking about a CSPRNG.

The short answer is to just use the OS-provided one if available. Linux has /dev/urandom[1] and the getrandom[2] syscall, and Windows has RtlGenRandom[3][4] (there's also CryptGenRandom, which is the "official" version but is more awkward to use since it requires a CSP context). On Windows this uses AES-256 in CTR mode as specified in NIST SP 800-90A.

Outside of those (if for whatever reason[6] you're not satisfied with what the OS provides) you can also look at how BoringSSL does things[5] (and LibreSSL, although I'm less familiar with it). BoringSSL uses a ChaCha20 instance to filter rdrand output (if rdrand is supported; otherwise it just uses the system CSPRNG directly). The system CSPRNG keys the ChaCha20 instance, and then every call to RAND_bytes pulls however many bytes you requested from rdrand and runs them through that ChaCha20 instance. It's fast and pretty simple; I found the code easy to understand[5]. I believe LibreSSL does something similar with ChaCha20, although I'm not sure it uses rdrand.
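
The pattern can be sketched very loosely like this (not BoringSSL's actual code: SHAKE-256 stands in for ChaCha20, which the Python stdlib lacks, and os.urandom stands in for rdrand, which Python can't reach directly):

```python
import hashlib
import itertools
import os

# The system CSPRNG keys a fast userspace stream once; each call then
# filters a second source through that stream by XOR.
_key = os.urandom(32)
_counter = itertools.count(1)

def rand_bytes(n: int) -> bytes:
    hw = os.urandom(n)                         # placeholder for rdrand output
    block = next(_counter).to_bytes(8, "big")  # never reuse a stream position
    stream = hashlib.shake_256(_key + block).digest(n)
    return bytes(a ^ b for a, b in zip(hw, stream))
```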

I think there should be a pretty good reason if you're choosing not to use what the OS provides for you.

tl;dr Just use /dev/urandom (or the getrandom syscall) on Linux and RtlGenRandom on Windows.

[1] http://sockpuppet.org/blog/2014/02/25/safely-generate-random...

[2] http://man7.org/linux/man-pages/man2/getrandom.2.html

[3] https://msdn.microsoft.com/en-us/library/windows/desktop/aa3...

[4] https://boringssl.googlesource.com/boringssl/+/master/crypto... (RtlGenRandom requires slightly special definitions to work).

[5] https://boringssl.googlesource.com/boringssl/+/master/crypto...

[6] There really isn't a "depending on your use case" decision to make. /dev/urandom (or RtlGenRandom) is the correct choice for all cases.

With that being said, there may be two small caveats: whether it has to be a CSPRNG at all, and whether /dev/urandom (or RtlGenRandom) can provide the throughput you require. In certain cases you may have to expand /dev/urandom output with something like ChaCha20 if /dev/urandom is too slow (the BoringSSL devs mention that AES-CBC IV generation on busy servers can sometimes be such a case; see [5] for the BoringSSL implementation).


/dev/urandom is a shared resource across all processes, and that implies locking/synchronization that can run you into scalability issues if you are trying to generate a large volume of random numbers in parallel on multiple cores.


> trying to generate a large volume of random numbers in parallel on multiple cores

You may be better off just using rdrand directly in this case depending on the throughput required. AFAIK you should be able to saturate all logical threads generating random numbers with rdrand and the DRNG (digital RNG) still won't run out of entropy.

Or as I also suggested look into expanding /dev/urandom output using a ChaCha20 instance like what BoringSSL does (which also combines it with rdrand since it's fast).


I'm surprised there's no talk of making reads from /dev/urandom go through a vDSO that grabs a seed for each process from the common pool and then runs a CSPRNG in the process's address space from then on.


You really want a separate instance of the CSPRNG per thread, not per process.
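
The per-thread pattern can be sketched like this (random.Random here is only a stand-in for a real per-thread CSPRNG, and the names are mine):

```python
import os
import random
import threading

# Each thread lazily builds its own generator, independently seeded from
# the OS, so the hot path touches no shared, locked state.
_tls = threading.local()

def thread_rng() -> random.Random:
    rng = getattr(_tls, "rng", None)
    if rng is None:
        seed = int.from_bytes(os.urandom(32), "big")
        rng = _tls.rng = random.Random(seed)
    return rng

results = []
threads = [threading.Thread(target=lambda: results.append(thread_rng().random()))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```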


You can also use the hw rng if supported (e.g. rdrand) for fork/VM duplication safety. /dev/urandom should be fork safe though so long as you're not buffering data from it.



