As far as space leaks go, I think the professional story is divided. Everyone runs into them, some people learn to fix them, and those who do tend to think of them as a little annoying but not significantly more so than profiling and tuning strict code.
Fortunately, outside of space leaks the Haskell runtime is actually really quite fast. At the end of the day most of your code which isn't leaky (buggy) will be really fast almost for free.
But that second step can be painful right now. The tools exist to profile and debug space leaks, but they're not cohesive and require a fair amount of black-art knowledge. It's also fairly murky how to escalate: if you've identified your problem area, strictified it a bunch, and it's still broken... well, what then?
The answer seems to be growing that internal expertise. There are enough people out there writing enough stuff that a dedicated student of the runtime can learn all of the tricks needed to really attack failures like that: profiling, unboxing, reading Core, figuring out the inliner, and a variety of style-guide hints for making things work smoothly (e.g. avoid the Writer monad).
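To make the Writer point concrete, here's a minimal sketch (the names are mine, not from the thread) of why a lazily accumulated Writer log leaks, next to the strict fold you'd use instead:

    import Control.Monad.Writer (execWriter, tell)
    import Data.List (foldl')
    import Data.Monoid (Sum (..))

    -- The Writer accumulator is built as a long chain of unevaluated (<>)
    -- thunks; nothing forces it until the very end, so the whole chain is
    -- retained on the heap.
    leakySum :: [Int] -> Int
    leakySum xs = getSum (execWriter (mapM_ (tell . Sum) xs))

    -- Forcing the accumulator at each step keeps the heap flat.
    strictSum :: [Int] -> Int
    strictSum = foldl' (+) 0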
Speaking as a part-time GHC contributor (and alleged "core Haskeller"), there's a LOT of exciting work going on right now for improving debuggability with respect to both correctness bugs and performance tuning.
GHC 7.10 is slated to (hopefully) have:
1. Exceptions will come with stack traces!
a. Some of this same work will also allow for sampling-based performance tools.
b. Mind you, a lot is still in flux.
2. There's a non-zero chance you'll have in-language facilities to create threads that get killed automatically if they exceed certain programmatically set CPU/memory/other resource limits!
3. Other things that are a combination of hard work and clever research-grade engineering.
I myself (as Tel knows) have plans for making GHC Haskell AMAZING for numerical computing, and my wee ways of contributing to GHC are guided by that goal.
Sure, or any strict language. There are a lot of benefits to laziness, though, even before you start to consider other Haskell/OCaml deltas. Laziness provides a much better platform for separation of concerns, which has enabled Haskell programs to achieve an almost unheard-of degree of decomposition and reuse.
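A small sketch of what I mean by separation of concerns (the example is mine): the producer and the consumer are written independently, and laziness means only as much of the infinite structure is computed as the consumer actually demands.

    -- An infinite producer, written with no knowledge of how much of it
    -- anyone will want.
    primes :: [Int]
    primes = sieve [2 ..]
      where
        sieve []       = []
        sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    -- A consumer that decides, separately, how much to demand.
    fivePrimesAbove :: Int -> [Int]
    fivePrimesAbove n = take 5 (dropWhile (<= n) primes)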
As always, it's a tradeoff. At this point, I don't personally feel afraid of debugging core or profiling space leaks. I don't feel like I'm much more than 50% up to date on techniques to fix them... but when they arise I have little trouble eliminating them.
Lazy resource deallocation is a problem of lazy IO. The solution is to not use lazy IO and that's tremendously tenable today due to libraries like pipes and conduit.
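As a minimal streaming sketch with pipes (this is the canonical stdin/stdout echo, not anything from this thread): each line is read, transformed, and written in constant space, and nothing hangs on to a lazily read string whose underlying handle might close out from under it.

    import Data.Char (toUpper)
    import Pipes
    import qualified Pipes.Prelude as P

    -- Read a line, upper-case it, print it; repeat until EOF. Resource use
    -- is explicit and bounded, unlike getContents-style lazy IO.
    main :: IO ()
    main = runEffect $ P.stdinLn >-> P.map (map toUpper) >-> P.stdoutLn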
On a quick glance I'd suggest (in addition to the responses already given): (a) use a more efficient structure for holding events (Data.Sequence, or maybe Pipes), since (++) has bad performance when the left argument is large (as is definitely your case); (b) for all of your "static" types, add a strictness annotation and possibly an {-# UNPACK #-}; (c) consider replacing System.Random with mwc-random, which is more efficient and has better randomness.
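Here's a sketch of (a) and (b), with hypothetical Event/EventLog names standing in for your types: Seq gives O(1) snoc instead of the O(n) left-biased (++), and strict, unpacked fields keep the numeric data from piling up as thunks.

    import Data.Sequence (Seq, (|>))
    import qualified Data.Sequence as Seq

    -- Strict, unpacked fields: the Doubles are stored flat in the
    -- constructor rather than as pointers to (possibly unevaluated) boxes.
    data Event = Event
      { evTime  :: {-# UNPACK #-} !Double
      , evPrice :: {-# UNPACK #-} !Double
      }

    type EventLog = Seq Event

    -- O(1) append at the right end, vs. O(length log) for log ++ [ev].
    record :: EventLog -> Event -> EventLog
    record log' ev = log' |> ev

    emptyLog :: EventLog
    emptyLog = Seq.empty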
Diving into processOrder, since it's the big time/space hog, I think strictness annotations might help uniformly, since there's a lot of packing/unpacking going on. To really dig in you can add some manual SCC annotations, or dump Core and see what's doing all the indirection.
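For the SCC route, a minimal sketch (step, hotLoop, and expensive are made-up stand-ins for whatever processOrder is doing):

    -- Wrap the suspect expression in a named cost centre.
    step :: Int -> Int
    step x = {-# SCC "hotLoop" #-} expensive x
      where
        expensive n = sum [1 .. n]

Then build with `ghc -prof -fprof-auto` and run with `+RTS -p` for a time/allocation report, or `-hc` for a heap profile broken down by cost centre; `-ddump-simpl` is the flag for dumping Core.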
These are all pretty general, high-level Haskell performance notes, by which I mean if you learn them once you'll be able to apply them much more quickly in the future whenever you find bottlenecks.
Yeah, a good starting point is that if something is running slow and there are lists involved, look at how you're using them. Also, as a special case, replace String with (strict or lazy) Text or ByteString, as appropriate.
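A tiny sketch of the String-to-Text swap (the filter itself is just an example): strict Text is a packed array under the hood, so traversals and concatenation don't pay for a cons cell per character the way String does.

    import qualified Data.Text as T
    import qualified Data.Text.IO as TIO

    -- Read stdin, upper-case it, write it back out, all as packed Text.
    main :: IO ()
    main = TIO.interact T.toUpper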
Unfortunately OCaml leaves a lot to be desired after working with Haskell for a significant amount of time. As far as I know, GHC's extensions make the type system much more powerful in non-trivial ways. For example, things like type families, data kinds, kind polymorphism, type-level naturals, RankNTypes, and so on. That being said, OCaml is still orders of magnitude better than most other options. As well, at least for me, it is a lot more work to introduce explicit laziness all over the place into an eager language than it is to go the other way and apply selective strictness.
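What I mean by selective strictness, as a sketch: a bang pattern (or seq) in exactly the spot that needs it, which is a much more local change than threading explicit lazy thunks through an eager codebase.

    {-# LANGUAGE BangPatterns #-}

    -- The bang on the accumulator is the entire "strictness migration":
    -- one annotation where the thunk build-up would otherwise happen.
    sumStrict :: [Int] -> Int
    sumStrict = go 0
      where
        go !acc []       = acc
        go !acc (x : xs) = go (acc + x) xs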
Well, most important is probably availability and stability of domain relevant libraries in either language. There are areas Haskell has well covered and areas where it's thinner...
It's also been the case historically that Haskell had much better concurrency support - I'm not sure whether that's still true - in which case a more parallel workload might push Haskell-wards (if it's not so parallel that simply spinning up N entirely separate processes makes sense, in which case concurrency support doesn't matter).
I'm not familiar enough with the current state of OCaml, or with Haskell libraries outside the domains I've been playing in, to really give a thorough run-down of specific domains, sadly. I'd be interested to see it if someone else took a swing.