The only difference in this implementation is that it uses in-memory caching (memoization) to compute a given value once and then serve the cached copy.
Of course this is fast for 1000 iterations, since the effective cost is zero from the second request onward.
People really are misunderstanding the critique; Fibonacci is just a stand-in for any CPU-intensive task.
TL;DR:
All the author did here was remove the CPU intensity by caching the calculation.
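To make that concrete, the caching amounts to something like this (a minimal sketch in plain Node.js, not the article's actual code; names are illustrative):

    // Memoized Fibonacci: once a value for n is in the cache,
    // every later call for that n is a constant-time lookup.
    var cache = {};

    function fib(n) {
      if (n < 2) return n;
      if (cache[n] !== undefined) return cache[n];
      return cache[n] = fib(n - 1) + fib(n - 2);
    }

    fib(40); // first request: computes (and caches) the value
    fib(40); // every request after that: effectively a free cache hit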
I'm actually very confused by his criticism. You can write a CPU-intensive task in any language, and you'll have the same problem.
Or is that the point? Do some people believe Node.js will magically make the cost of all computation zero? I'm all for discouraging the rumor that Node.js will solve every problem, but don't call it Cancer.
The issue is what happens while you run that CPU-intensive task. If you write it in idiomatic Go, the same server keeps responding to other requests in the meantime. With Node.js, that is not the case: you have to do things like, well, what this article does, to get it to work.
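To illustrate the failure mode (a hypothetical sketch, not code from the article): a naive, uncached fib inside a Node request handler blocks the event loop, so every concurrent request waits behind it.

    var http = require('http');

    function slowFib(n) {
      return n < 2 ? n : slowFib(n - 1) + slowFib(n - 2);
    }

    http.createServer(function (req, res) {
      // While slowFib runs, the single event-loop thread is busy:
      // no other connection to this server gets a byte of service.
      res.end(String(slowFib(40)));
    }).listen(8000);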
I'm still not getting it. Go might be able to handle an extra request or two during highly CPU-intensive tasks, but given enough requests it will inevitably suffer the same consequence of locking up all its processes.
Still, doing anything CPU-intensive at that layer is stupid in the first place.
A so-called 'goroutine' locking up will not lock up the rest of that server; other pages will still be served. This has nothing to do with the efficiency of the underlying language; it has to do with Go doing cooperative multitasking transparently. It might as well be a separate process (in fact, for all you really know, it could be).
The code seen here does cooperative multitasking explicitly: it essentially marks the places where rescheduling can take place. The result is the same, but with this Node code you get your hands far dirtier.
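In Node terms, "explicitly specifying places where rescheduling can take place" looks roughly like this (a sketch; the chunk size and callback shape are my own, and older Node would use process.nextTick where I use setImmediate):

    // Iterative fib computed in chunks. Between chunks we return to the
    // event loop so other requests can be serviced. Placing these yield
    // points is entirely the programmer's responsibility.
    function fibAsync(n, callback) {
      var a = 0, b = 1, i = 0;
      var CHUNK = 10; // deliberately tiny so the yielding is easy to observe

      function step() {
        for (var done = 0; i < n && done < CHUNK; i++, done++) {
          var next = a + b;
          a = b;
          b = next;
        }
        if (i < n) {
          setImmediate(step); // yield: "a good place to interrupt me"
        } else {
          callback(a);
        }
      }
      step();
    }

    fibAsync(30, function (result) { console.log(result); }); // prints 832040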
The point of the Cancer article is that since it is (allegedly) not made explicitly clear what is going on, programmers can do far more damage to themselves than they would under other schemes.
I really don't know how to explain it any better than that.
PS: "CPU-intensive" is relative. The fib example is a deliberate exaggeration of what you'd see in reality, to make the point trivial to observe.
Well, that just makes me more confused, because most languages I know handle multitasking explicitly, and they can all be dangerous when programmers overdo "CPU-intensive" work. I appreciate you trying to explain this guy's expectations, but I can't help but see this all as trolling FUD.
Many languages offer threads, which either the operating system or the runtime will preëmpt to ensure they all make progress in some vaguely fair way, no matter how they behave. I haven't been required to write "I have more work to do, but this would be a good place to interrupt me" in fifteen years (not since Win95 displaced Win3).
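For contrast, here is a sketch of that preemptive style in Node itself (using the worker_threads module, a modern addition that did not exist when this thread was written): the worker runs a blocking fib with no explicit yield points, yet the main thread keeps making progress because the OS schedules the two threads independently.

    const { Worker, isMainThread, parentPort } = require('worker_threads');

    if (isMainThread) {
      // Spawn a worker running this same file; keep printing to show
      // the main thread is never blocked by the worker's computation.
      new Worker(__filename).on('message', function (msg) {
        console.log('worker finished:', msg);
        process.exit(0);
      });
      setInterval(function () { console.log('main thread still responsive'); }, 250);
    } else {
      function fib(n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }
      parentPort.postMessage(fib(40)); // blocks this worker thread only
    }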