Interesting question -- in Haskell that means something else (some would say "nothing"). The language is designed so that you specify only the result the code is supposed to compute, and all implementation decisions are left to the compiler -- just as they would be for the RHS of a statement like `x = 3 + 4*(5 + 1);` in C.
In this case, as in all others, the Haskell compiler would decide what is the optimal way to implement it, given whatever other constraints you've placed on the program.
You can add additional constraints that ensure the compiled code turns out to be in-place, at a cost of verbosity.
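To make that verbosity concrete, here is a minimal sketch of what an in-place quicksort looks like in Haskell, using an `STUArray` so the mutation stays local to `runST` (the names `sortInPlace` and `mkArr` are mine, for illustration):

```haskell
import Control.Monad (when)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, newListArray, readArray, writeArray, getElems)

-- Helper pinned to STUArray so the array type is unambiguous.
mkArr :: Int -> [Int] -> ST s (STUArray s Int Int)
mkArr n = newListArray (0, n - 1)

-- In-place quicksort (Lomuto partition). The mutation happens in the
-- ST monad and is invisible from outside runST.
sortInPlace :: [Int] -> [Int]
sortInPlace xs = runST $ do
  let n = length xs
  arr <- mkArr n xs
  let swap i j = do
        a <- readArray arr i
        b <- readArray arr j
        writeArray arr i b
        writeArray arr j a
      partition lo hi = do
        pivot <- readArray arr hi
        let go i j
              | j >= hi = swap i hi >> return i
              | otherwise = do
                  x <- readArray arr j
                  if x <= pivot
                    then swap i j >> go (i + 1) (j + 1)
                    else go i (j + 1)
        go lo lo
      qsort lo hi = when (lo < hi) $ do
        p <- partition lo hi
        qsort lo (p - 1)
        qsort (p + 1) hi
  qsort 0 (n - 1)
  getElems arr
```

The point isn't that this is impossible, just that the imperative structure has to be spelled out explicitly, much as it would be in C.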
If there is an example where such an optimization has actually been implemented and correctly chosen, that is news to me.
I mean, to a very large extent, this is the reason why classical imperative languages are so easy to understand and reason about. The programmer's directions are executed far more straightforwardly than in hypotheticals like these.
Oh, I just got to your last sentence: basically, that this would be a lot more verbose if written in such a way that it would be in place. Which is kind of the point of this criticism, isn't it?
Don't get me wrong, the power of Haskell is not really in question. Just that example.
Your criticisms are all from the perspective of "it keeps me from telling it how to implement the functionality", which is true, but not what Haskell's design (or pure FP language design in general) optimizes for.
Sometimes, you really do want to make commands to the bare metal regarding exactly how you want the program implemented. For general use, however, you don't: compilers are (at least these days) a lot better at finding optimizations than the typical user.
It comes down to abstraction levels: pick the one that matches the problem you're working on. The FP philosophy is that speaking about specific memory cells is far too low a level for most cases; instead, you tell the compiler you want a sorted list.
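For concreteness, the oft-cited example in this discussion is the two-clause list quicksort, which reads more like a specification than an in-place algorithm:

```haskell
-- The classic succinct Haskell "quicksort": correct, but it builds
-- new lists rather than partitioning in place.
qsort :: Ord a => [a] -> [a]
qsort [] = []
qsort (p:xs) =
  qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]
```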
In fact, it would probably be even better (from that selfsame philosophy) to define a sort in terms of the conditions the final list would meet (as opposed to nested, recursing conditions like in the example). Something like, "return a list where every element is <= the one after it, and where every element is also in the original list, as many times as it occurs there".
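That specification could be sketched as a pair of predicates (the function names here are illustrative, not any standard API):

```haskell
import Data.List (delete)

-- Every element is <= the one after it.
isSorted :: Ord a => [a] -> Bool
isSorted xs = and (zipWith (<=) xs (drop 1 xs))

-- Same elements with the same multiplicities.
isPermutationOf :: Eq a => [a] -> [a] -> Bool
isPermutationOf []     ys = null ys
isPermutationOf (x:xs) ys = x `elem` ys && isPermutationOf xs (delete x ys)

-- `out` is a valid sort of `inp` iff it is sorted and a permutation of it.
validSortOf :: Ord a => [a] -> [a] -> Bool
validSortOf out inp = isSorted out && out `isPermutationOf` inp
```

A property like `validSortOf` is exactly the kind of condition one would hand to a property-testing tool, or (in principle) to a synthesizer, instead of writing the recursion by hand.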
If you want to see examples of quicksort in Haskell that enforce in-place execution and other requirements, see this discussion here, with links:
I understand what you are saying. I'm arguing that when quicksort was first devised, the speed you can achieve with the in-place variant was not just a lucky by-product. It was the bloody point. To such an extent that if you are lucky enough -- which most of us are -- to be in a place where that is not a consideration, then sure, go for the higher-level abstractions.
That said, it is highly telling that none of the actual implementations of quicksort in any of these languages are that succinct. To the point that it is still a truism to "use a library" for something like sorting, because that isn't easy. Do not be fooled by such a quaint example.
It's interesting to see how similar Haskell and C sources fundamentally become once they really solve the same problem and not different ones.
The "direct compare" in Haskell isn't. The Ord constraint effectively passes a dictionary, and even if that were inlined in a particular case, you could always apply a newtype to get a different ordering.
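A small sketch of the newtype point, using the standard `Data.Ord.Down` wrapper: the same `sort` picks up a different Ord dictionary through the newtype, so no new sorting function is needed.

```haskell
import Data.List (sort)
import Data.Ord (Down (..))

-- Down reverses the comparison, so sorting Down-wrapped values
-- and unwrapping yields a descending sort.
sortDescending :: Ord a => [a] -> [a]
sortDescending = map getDown . sort . map Down
```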
I'd really like to learn: can you please explain where the Ord constraint is, and where the dictionary is passed, in the link I've given (with the STUArray -- isn't that really an array in the C sense)? To give you an idea how little I know: I understand the basic ideas of Haskell (that is, what I as a C programmer who has written some compilers would consider basic: type inference, and the illusion of infinite memory achieved by not doing things in place but lazily, with GC, unless unsafe code is used or the laziness can be optimized away), but I don't really know the language. Thanks in advance.
Ah, my bad, the STUArray example is specialized and probably does not pass an Ord around (though there's implicitly one present because of the less-than). Certainly, the function does not meaningfully allow the user to pick it.
Has anyone ever bothered to ask Tony Hoare what he thinks on this topic? I know SPJ and him both work for MS Research, maybe he can get on that and settle this for us.
I would be curious to know the answer. Though, I should have added in my original post that I am far from the originator of this view. I see a sibling post elaborated. I think the crux is simply that without the "in place" nature, it loses a lot of its speed advantage. Cache locality and such.