I would argue that heavily CPU-bound stuff shouldn't be run in a web app. It's much better to offload the task to a worker system via redis/zeromq/kestrel/etc. The activity in every web app I've ever designed has been almost exclusively I/O-bound.
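To make that concrete, here's a minimal sketch of what I mean, assuming the classic callback-style node `redis` client; the `resize-jobs` queue name, `handleUpload`, and the Express-style `req.params` are made up for illustration:

```js
var redis = require('redis');
var client = redis.createClient();

// On upload, enqueue the heavy work and return immediately.
// A separate worker process pops 'resize-jobs' and does the CPU lifting,
// so the web process stays free to serve I/O-bound requests.
function handleUpload(req, res) {
  var job = JSON.stringify({ imageId: req.params.id, width: 800 });
  client.rpush('resize-jobs', job, function (err) {
    if (err) { res.writeHead(500); return res.end(); }
    res.writeHead(202); // accepted; processed out of band
    res.end('queued');
  });
}
```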
Don't you think there's at least some validity to Ted's argument that the statement "Because nothing blocks, less-than-expert programmers are able to develop fast systems." from the node.js homepage is somewhat misleading?
As Ted points out, there are things like Fugue and Nginx, built by people who are not "less-than-expert programmers"; "experts" will be fine whether or not they've got magical behind-the-scenes async stuff going on. The question as I see it is: are the node.js docs/homepage misleading about how easy it is to "develop fast systems"?
Wait, I think I've missed something because that response seems overly dismissive. How is offloading CPU-intensive stuff to a worker system not getting it done?
Because you don't fix everything with another layer of indirection. Adding more queuing (which, as we know, never has problems) just to get around "no threads" seems stupid.
In a classic thread-based system, it's okay if a single page takes a bit longer (e.g. >1 sec) to render, as long as that's within your users' expectations.
You can't do that here, because you'll block all the other requests.
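A toy sketch of why (the `/slow` route and the one-second spin are made up, but the behavior is real on any single event loop):

```js
var http = require('http');

http.createServer(function (req, res) {
  if (req.url === '/slow') {
    var end = Date.now() + 1000;
    while (Date.now() < end) {}  // ~1s of synchronous CPU work
    // For that entire second, every other request is stalled,
    // because one event loop is serving them all.
  }
  res.end('done');
}).listen(8000);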
But any decent developer knows that, so he takes the advantages Node.js offers and fixes the disadvantages that come along with it. Big deal.
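For example, one standard fix is to slice long-running work into chunks so the loop can keep servicing I/O in between. A sketch, where `hashBlock` and the 10,000-item batch size are hypothetical placeholders:

```js
// Process a large array cooperatively instead of in one blocking pass.
function processAll(items, done) {
  var i = 0;
  function step() {
    var stop = Math.min(i + 10000, items.length);
    for (; i < stop; i++) hashBlock(items[i]); // bounded slice of CPU work
    if (i < items.length) setImmediate(step);  // yield to pending I/O events
    else done();
  }
  step();
}
```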
Is this discussion really only about the scalability tagline? Some taglines are misleading; is that really a surprise?
That comment sort of destroyed all your credibility. Would you care to elaborate on where in the "real world" there is ever a situation where a tight, asynchronous event-processing loop is required to do heavy CPU lifting?