This is true of any dynamic memory management scheme unless very careful constraints are put in place. I would assume anyone writing an OS kernel, regardless of language, would be careful to avoid unbounded consing. (Of course, memory allocation in kernels is tricky even in C: http://lwn.net/Articles/627419/)
The "garbage velocity" is not the most important parameter. Remember that garbage is all the space which remains after we identify what is reachable. The traversal of the graph of what is reachable is mainly where the performance pitfalls lie. When garbage is generated at a high rate, it just means that collection has to be more frequent. However, real-time techniques avoid scanning the entire graph of everything that is reachable.
Under ephemeral garbage collection, if the software generates a lot of garbage fast, it means that it's rapidly making large numbers of "baby" objects (objects in the "nursery" or "generation 0") and immediately losing them. Whenever a generational pass comes along, there are hardly any nursery objects to visit (they have almost all been lost due to the "garbage velocity"), and the tenured objects aren't traversed either, so ... it's quick. Quick isn't "free", but it's not "beyond the ability to cope".
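A hedged sketch of that effect, assuming a simple copying nursery (the names and sizes are made up, not from any particular runtime): the minor pass copies only what the roots still reach, so a nursery full of already-dead objects is reclaimed by resetting a single bump pointer.

```c
/* Hypothetical copying-nursery collection (Cheney-style idea): survivors
 * are evacuated to the tenured space; dead nursery objects are never even
 * looked at, and the nursery is reused by resetting one bump pointer. */
#include <stdio.h>

typedef struct { int payload; } Obj;

#define NURSERY_SLOTS 100000
#define MAX_ROOTS     128

static Obj  nursery[NURSERY_SLOTS];
static int  nursery_top = 0;              /* bump-pointer allocation */
static Obj  tenured[MAX_ROOTS];
static Obj *roots[MAX_ROOTS];             /* the only live references */
static int  root_count = 0;

static Obj *alloc_obj(int payload) {
    Obj *o = &nursery[nursery_top++];
    o->payload = payload;
    return o;
}

/* Minor GC: copy only what the roots reach. Work is proportional to the
 * survivors, however many dead objects the nursery also contains. */
static int minor_gc(void) {
    int copied = 0;
    for (int i = 0; i < root_count; i++) {
        tenured[copied] = *roots[i];      /* evacuate the survivor */
        roots[i] = &tenured[copied++];    /* update the reference */
    }
    nursery_top = 0;                      /* everything else reclaimed in O(1) */
    return copied;
}

int main(void) {
    /* Churn out short-lived objects at high "velocity"; keep only a few. */
    for (int i = 0; i < NURSERY_SLOTS; i++) {
        Obj *o = alloc_obj(i);
        if (i % 10000 == 0 && root_count < MAX_ROOTS)
            roots[root_count++] = o;      /* rare survivor */
    }
    /* prints: allocated 100000, copied 10 survivors */
    printf("allocated %d, copied %d survivors\n", NURSERY_SLOTS, minor_gc());
    return 0;
}
```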
There is always some garbage velocity beyond which any given system cannot keep up.
Usually that limit is kinda small compared to what you'd actually like your program to be able to do.