drsnyder's comments

Most (all?) of these companies have not been through a complete credit cycle since they were founded after 2008. I would be cautious about putting money into them that you cannot afford to lose.

Only after they go through a full credit cycle from boom to bust will you know how well they have managed risk and leverage. If they are transparent enough for you to be able to do the due diligence on their assets and leverage (and you have the expertise) you should probably wait and see how they do during the next recession.

I would also suggest you ask yourself why you would invest in them as opposed to using a more traditional REIT or investing in real estate yourself, if you are so inclined. Is it because you are expecting a greater long-term return and you think they might be able to provide it? Again, I would wait and see how they do in the next recession (or credit crisis) to get a more realistic view of their long-term return profile.


The curl handle should be re-used if possible, so that's also part of the problem.


Good question. The only reference to it that I could find was here http://lkml.iu.edu//hypermail/linux/kernel/0412.1/0181.html but he doesn't explain why it's necessary.


From the mail:

>This patch solves a problem where simultaneous reads to /dev/urandom can cause two processes on different processors to get the same value. We're not using a spinlock around the random generation loop because this will be a huge hit to preempt latency. So instead we just use a mutex around random_read and urandom_read. Yeah, it's not as efficient in the case of contention, but if an application is calling /dev/urandom a huge amount, there's something really misdesigned with it, and we don't want to optimize for stupid applications.


I guess that means all go applications that use crypto/rand are considered misdesigned then[1].

[1]: http://golang.org/src/pkg/crypto/rand/rand_unix.go#L30


If you're using crypto/rand to yank a whole bunch of random numbers out for the purpose of deciding which DNS record to use when multiple DNS records were returned, yes, the Go application is misdesigned. Such applications should be using math/rand. Seeding your math/rand from crypto/rand isn't a bad idea, but you don't need to be hammering on /dev/urandom in such code.


This "some random data is more important than other random data" musical-chairs dance going on with /dev/random vs /dev/urandom vs userland [CS]PRNGs (often gathering from extremely poor sources, or using broken algos) has been nothing short of an unmitigated security and usability disaster.

We have the ability to make the /dev/urandom CSPRNG secure enough and fast enough for (almost) any randomness purpose. We need to cut out all the rest of this insane crap.

People choose the wrong RNGs and get burned, or won't use the right ones because of speed or imaginary entropy-exhaustion issues. This matters.


It's impossible (or just not worth the trade-off) to make one piece of software (this time, the kernel) fast for every use case imaginable. In this case, the kernel behaved correctly but with speed degradation in extreme cases. If the author's Rube Goldberg machine then runs slowly, I don't consider the kernel to be at fault.

The guy uses PHP, and instead of the built-in HTTPRequest he uses curl to make a request to "a bucketed key-value store built on PostgreSQL that speaks HTTP which uses Clojure and the Compojure web framework to provide a REST interface over HTTP." A bit of shooting flies with cannons on every side?

On the other hand, if it can be shown that urandom has serious problems in reasonable use cases, then what can be changed, and how, should be investigated.


Slightly off topic, but I wanted to clear this up:

> The guy uses PHP and instead of built-in HTTPRequest he uses curl to make a request

HTTPRequest is not built-in to PHP. It is a PECL extension that is usually installed separately from PHP.

Curl is closer to built-in for PHP: it's enabled by a PHP compile-time flag, and it is distributed with the PHP source.


So the best approach for the given problem (just send the request, fetch the response, without too much overhead) seems to be using something a bit lower level:

http://at2.php.net/stream_socket_client


> It's impossible (or just not worth the trade-off) to make one piece of software (this time, kernel) fast for any use case imaginable.

We're not nearly at the theoretical limit of what /dev/urandom can provide.


Exactly. Most developers are about as good at picking RNGs as end-users are at picking passwords. We need to stop asking them to make that choice.


Yes. Exactly this.


You don't need to be, but why not? It should be plenty fast and work well. If it's turning out to be too slow due to too much locking, that should be fixed.


1 - Rand is faster

2 - You don't need the crypto qualities of it and you're emptying the entropy pool for nothing

3 - You're doing much more work, especially if you're reading one byte at a time from /dev/urandom (doing a syscall, etc), while rand is just a calculation


There is in practice no such thing as "entropy depletion". The retail side of a CSPRNG is very similar to a stream cipher. The idea behind "entropy depletion" is structurally the same as the idea of a stream cipher "depleting its key". You can run AES-CTR as a stream cipher for several exbibytes before the output starts becoming distinguishable (which is not the same thing as "reveals the key").


True, unfortunately /dev/random blocking "soon" in Linux helps to propagate this myth. I stand corrected.


1 and 3 are the same thing. I think the best way to address these, if performance is a problem (don't optimize what doesn't need it), is buffering to reduce syscalls, and optimizing the kernel implementation to fix the sort of internal performance problems the link describes.

For 2, entropy pool depletion is a fictitious problem if you're worried about security. Some discussion here:

https://news.ycombinator.com/item?id=7361694

If you're worried about blocking apps that use /dev/random, the answer there is to fix them to use /dev/urandom so they don't block.


Yes, it should be fixed. Yes, it's still a "misdesign" to use the cryptographic random number generator when you just want "a" pseudo-random number, right now. For choosing which of the several DNS answers you use, you could pretty much get away with keeping a counter and returning that counter modulo the number of choices. It's technically wrong for several reasons, but you could get away with it. That's how low-impact this random number usage is. Using a cryptographically secure random number generator for that is always going to be overkill for such a task.
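That counter-modulo scheme is a few lines of Go (a sketch; the type and names are invented, and real resolvers would care about the caveats above):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// roundRobin hands out indices 0..n-1 in rotation; no randomness at all,
// which is often good enough for spreading load across DNS answers.
type roundRobin struct {
	counter uint64
}

func (r *roundRobin) next(n int) int {
	// Atomic increment so concurrent lookups don't race on the counter.
	return int((atomic.AddUint64(&r.counter, 1) - 1) % uint64(n))
}

func main() {
	var rr roundRobin
	answers := []string{"192.0.2.1", "192.0.2.2", "192.0.2.3"} // hypothetical answers
	for i := 0; i < 4; i++ {
		fmt.Println(answers[rr.next(len(answers))])
	}
}
```

It cycles 0, 1, 2, 0, ... with no syscalls and no entropy involved at all.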


There are lots of applications that require a cryptographically secure random number generator, which math/rand doesn't claim to be.


This is not one of them.


And while that's interesting from the perspective of fetching web content via curl (and kudos to the author tracing it down to that), that doesn't mean the fundamental issue shouldn't be fixed.

In the current security environment, from heartbleed to the NSA, it's becoming clear that security issues need to be systematically dealt with from an industry perspective or people will start to lose faith in secure Internet communication, which would undermine too much of what's valuable about the Internet.

What we need is great APIs/frameworks/design patterns to simplify cryptography so that a newbie ruby on rails programmer CAN create actually secure applications and not even realize that it was complicated in the first place.

In cryptography, one misstep and the entire chain is broken. It's thus important for things like the Linux kernel to provide great implementations so that people don't think twice about using them and never want to roll their own PRNG.


Why does the kernel have to be "fixed" if something in the current "APIs/frameworks/design patterns" is not doing its own homework?

It seems the author admits in the comments: "All the application needs to do is open a socket and generate a GET request." So why complain about the kernel?

If there's a problem with urandom, demonstrate it with a reasonable use case; don't try to impress anybody by showing how many different libraries, modules and programs you can combine for one key-value query.


Are you also suggesting that the kernel provide a "great implementation" of SHA? Of SSL? Of https? Where do you draw the line?

There's no reason for the kernel to provide standard library functions. In fact, I'd argue that syscalls should be reserved for only actions that cannot be done wholly in userspace (futex is a good example of this). The current model of "hardware randomness to seed a PRNG" makes sense. It is up to the userspace libraries to provide good implementations.


I would draw the line somewhere between SHA and SSL. Next question? ;)


Exactly. I thought it was pretty clear that I was talking about the kernel providing great crypto (read: random) by default for the things it already provides.

Similarly there's a need for great "APIs/frameworks/design patterns" for what the kernel doesn't provide. I predict over the next 5 years this will become a far bigger priority in how people develop software and thus use libraries.


So, would a possible solution be to check how many people are using the random generator at once? If only one process is currently accessing /dev/urandom, then avoid the spinlock and the problem is solved. Or am I completely wrong?


If there is only one process, then no spinlock contention, so no problem.

The problem comes with multiple processes competing for the lock.


Would a possible solution be to check how many people are using the random generator...?

One person may have multiple processes reading from /dev/urandom.


From my experience (as a programmer), I have seen what I would consider both mediocre and talented developers who fit your description of frothing to themselves, thinking they could do better on their own. Neither performed up to their abilities, mainly because of the distraction the frothing caused.

In the case of one talented developer, I was tempted to ask the following question but held back: what makes you think you could do this kind of work at a high level as an entrepreneur when you are struggling to do it for someone else?


I also think Racket is a good introduction to Lisp. The documentation is great. Take a look at the vector documentation as an example http://docs.racket-lang.org/reference/vectors.html?q=vector&.... If you are not familiar with the language, there are plenty of examples to help you get going.

I recently started working through The Practice of Programming and I wanted to complement it with a functional language. I thought it would make the on-ramp to functional languages a little smoother to first code something in a language I'm familiar with and then port it over (producing more idiomatic Scheme is the next iteration). I chose Scheme as the complement and PLT/Racket as the environment. So far, it has been working well.


The Trappist in downtown Oakland is a great spot. It's about 4 blocks from the 12th Street BART.

http://www.thetrappist.com/


I've heard some suggest that one play would be to buy gold in euros.


Huddler is hiring in San Francisco. http://www.huddler.com/careers.html


My suspicion is that their struggle is more about the perceived cost and time. My wife and I are able to eat most of our meals at $2-3 per person. We have found that to get the cost low, you need to seek out cheaper produce (usually not Safeway) and cut way back on meat consumption. Making your own cereal also helps, as does only buying things on sale. You also need a little more time to prepare the raw ingredients.


Ask yourself: what would you rather have, a) a fulfilling relationship and perhaps sexual intimacy, or b) the possibility of wealth and riches?

You can't buy a and there is no guarantee of b.

Edit: After thinking about it a little more, chances are that if you are reading this, you are already filthy rich, meaning you are probably in the top 1%. We often evaluate our "richness" by comparing ourselves with those who have more than us. Stated differently, we evaluate ourselves "up" instead of "down". The reality is that 99% of the rest of the world has less access to opportunity, education, resources, even food.

