Packages are supposed to specify upper bounds because, without them, you can end up with spontaneous breakage when a transitive dependency several steps away changes in a backwards-incompatible way. Haskell has the PVP (Package Versioning Policy), which exists to prevent exactly this; it is quite similar to Semantic Versioning (http://semver.org/). Upper bounds are not chosen willy-nilly: they are set so that backwards-compatible revisions are allowed while backwards-incompatible ones are excluded.
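Concretely, PVP-style bounds in a .cabal file look roughly like this (the package names and version numbers here are only illustrative):

    library
      -- Under the PVP the major version is the first two components (A.B),
      -- so each bound admits compatible point releases but excludes the
      -- next major version.
      build-depends:
        base       >= 4.6  && < 4.7,
        text       >= 0.11 && < 0.12,
        containers >= 0.5  && < 0.6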
I've been using Haskell for more than 6 years, and in my experience the worst build problems happen in the parts of the ecosystem that don't specify upper bounds. There's been a lot of discussion about this in the community; we're aware of the issues and we're working on solving them. One attractive possibility being discussed is adding a flag to cabal that lets you ignore upper bounds. In theory this could preserve the benefit of the extra information an upper bound carries while eliminating the pain. An extension of this idea is to have two operators (say < and <!) distinguishing a "soft" upper bound (simply the highest version you've been able to test against) from a "hard" upper bound (a version at which your package is known to break). That extra information would make the solver much more effective when told to ignore upper bounds, because it could relax only the bounds that are actually safe to relax.
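To sketch what that might look like (the <! operator is purely hypothetical; nothing like it exists in cabal today):

    build-depends:
      -- soft bound: 0.12 is just the highest major version I've tested against
      text   >= 0.11 && <  0.12,
      -- hard bound: this package is known to break against binary 0.6
      binary >= 0.5  && <! 0.6

A solver told to relax upper bounds could then lift the soft bound on text while still refusing to pick binary 0.6.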
I suspect that this problem is more visible in Haskell than in other languages. Haskell's type system gives you a way to bound the number of things that you must understand about an API in order to use it. The confidence that we get from having these guarantees makes code reuse orders of magnitude easier in Haskell than it is in pretty much any other language in mainstream use today. Others have made this same observation. Here (https://www.youtube.com/watch?v=BveDrw9CwEg#t=903) is a presentation given at CUFP last year by a company using Haskell in production that gives the actual reuse numbers that they've seen for several different languages.
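As a small, made-up illustration of what I mean by bounding what you must understand: the type below already tells you, before you read a line of documentation, that the function is pure and can do nothing but compare and rearrange the elements it is given:

    import Data.List (sortBy)

    -- Pure by construction: no IO, no hidden state, no surprises beyond
    -- what the type advertises.
    rankBy :: (a -> a -> Ordering) -> [a] -> [a]
    rankBy = sortBy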