I don't think the idea of GC is to save you from thinking about memory allocation. Actually, not thinking about memory allocation is a great way to create very slow programs, independent of whether you use a GC or free memory manually, though modern GCs keep the performance penalty reasonably low. Still, you want to think about memory allocation, so that you don't allocate inefficiently.
What GC delivers is first of all a guarantee of correctness: a pointer is either nil or points to validly allocated memory. It also removes most of the need for bookkeeping. That simplifies programs and especially allows you to write clean APIs, where functions are free to allocate reasonable amounts of memory.
But wherever large amounts of memory are required, especially with a clear lifetime, I think a lot about memory allocation, and even in a GC language I try to reuse memory wherever that makes sense.
So for a game engine, there should not be much need for allocation during the runtime of a "level", and thus GC pauses should only happen between levels.
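Roughly the kind of reuse I mean, sketched in C++ for concreteness (the Particle/ParticlePool names are made up); the same preallocate-and-recycle pattern is what keeps a GC quiet in C# or Java while a level is running: one allocation at level load, none while the level runs.

```cpp
#include <cstddef>
#include <vector>

struct Particle {
    float x = 0, y = 0, dx = 0, dy = 0;
    bool alive = false;
};

class ParticlePool {
public:
    explicit ParticlePool(std::size_t capacity) : particles_(capacity) {}

    // No allocation here: hand out a dead slot, or report exhaustion.
    Particle* spawn() {
        for (auto& p : particles_) {
            if (!p.alive) { p.alive = true; return &p; }
        }
        return nullptr;  // pool exhausted; size it for the level's worst case
    }

    // Between levels: reset everything, still without any churn.
    void reset() {
        for (auto& p : particles_) p = Particle{};
    }

private:
    std::vector<Particle> particles_;  // one allocation at level load
};
```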
Not thinking about ownership over references to heap allocations is very very much the big productivity win from GC. Memory safety is usually an additional bonus, but it's a distinct thing; it's possible to have (conservative or refcounting) GC without memory safety and vice versa.
We work on a dual C++/C# codebase, something like 3/4 C++ and 1/4 C#. Basically all of the memory lifetime errors happen in C# land. I do not recall a memory lifetime error _ever_ hitting master in C++, but in C# we have one bug against _prod_ and two against master right now, as we speak.
Dealing with lifetimes in C++ is easy; dealing with them in C# is a nightmare. Maybe it's easier in Java or Go, I don't know: I've only dealt with Java in school and never coded in anger in Go.
Async factory methods for stuff the API requires me to keep around, while the same API requires my objects to be constructed synchronously. (This includes stuff in .NET.) C++ doesn't have async as a language construct. (Yet. I dread the day.)
IDisposable. We have an ecosystem where a non-negligible number of in-flight objects need to be disposed of manually. In C++, RAII takes care of this for us.
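To illustrate the RAII side (just a generic sketch, not our actual code): the destructor releases the resource on every exit path, so there's no Dispose()-style call to forget.

```cpp
#include <cstdio>
#include <stdexcept>

// A file handle whose cleanup is tied to scope rather than to a manual call.
class File {
public:
    explicit File(const char* path) : f_(std::fopen(path, "rb")) {
        if (!f_) throw std::runtime_error("open failed");
    }
    ~File() { std::fclose(f_); }            // always runs, on every exit path
    File(const File&) = delete;             // exactly one owner
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }

private:
    std::FILE* f_;
};

void readHeader(const char* path) {
    File f(path);                           // acquired here
    char buf[16];
    if (std::fread(buf, 1, sizeof buf, f.get()) != sizeof buf)
        throw std::runtime_error("short read");   // file is still closed
}                                           // ...and released here regardless
```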
Unsubscribing from events. This has to happen manually with the standard .NET listeners. C++ solves this with weak_ptr. C# could solve it with a better standard library, but we have .NET.
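Roughly what the weak_ptr approach looks like (the Event class here is made up, not a standard facility): the source only holds weak references, so a listener that has been destroyed is silently pruned instead of needing a manual unsubscribe.

```cpp
#include <functional>
#include <memory>
#include <vector>

class Event {
public:
    // Subscribers are owned elsewhere via shared_ptr; we keep only a weak_ptr.
    template <class T>
    void subscribe(const std::shared_ptr<T>& listener, void (T::*method)()) {
        std::weak_ptr<T> weak = listener;
        callbacks_.push_back([weak, method]() -> bool {
            if (auto strong = weak.lock()) {   // listener still alive?
                ((*strong).*method)();
                return true;
            }
            return false;                      // expired: tell fire() to prune
        });
    }

    void fire() {
        auto it = callbacks_.begin();
        while (it != callbacks_.end()) {
            if ((*it)()) ++it;                 // delivered
            else it = callbacks_.erase(it);    // listener gone, drop callback
        }
    }

private:
    std::vector<std::function<bool()>> callbacks_;
};
```

A listener that goes out of scope (or whose owner drops the shared_ptr) is effectively unsubscribed without anyone having to remember to do it.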
Honorable mention (not a lifetime issue, but part of the deadlock quagmire that is C# and its half-async, half-synchronous standard library): WriteableBitmap, which is impossible to use correctly and has not been deprecated or had a safe replacement offered.
C++ surprises me in ways I expect to be surprised. C# surprises me in ways that leave me confused and perplexed.
> Not thinking about ownership over references to heap allocations is very very much the big productivity win from GC.
I very much doubt that this is a big _productivity_ win. Languages in which it is idiomatic to "think about ownership over heap allocations" (C++, Rust) aren't obviously less productive than comparable languages where such thinking is not so idiomatic (C, Java, .NET, ObjC, Swift etc.).
It's somewhat common to use refcounting (shared_ptr<>, etc.) in the more exploratory style of programming where such "thinking" is entirely incidental, but refactoring the code to introduce proper tracking of ownership is quite straightforward, and not a serious drain on productivity.
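A hypothetical before/after of that refactor (names invented for illustration): exploratory code where everything is a shared_ptr, versus the same structure once the owner is made explicit.

```cpp
#include <memory>
#include <vector>

struct Texture { /* pixel data, etc. */ };

// Exploratory version: ownership is "whoever happens to hold a reference".
struct SpriteDraft {
    std::shared_ptr<Texture> texture;      // refcounted, no thought required
};

// After introducing explicit ownership: the cache owns, sprites observe.
class TextureCache {
public:
    Texture* load() {
        textures_.push_back(std::make_unique<Texture>());
        return textures_.back().get();
    }

private:
    std::vector<std::unique_ptr<Texture>> textures_;  // the single owner
};

struct Sprite {
    Texture* texture = nullptr;  // non-owning; valid as long as the cache lives
};
```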
GC might not be a productivity win for you, but for many people it definitely is.
I'm pretty sure that's true for the great majority of software developers, but of course they don't even use a non-GC language!
Part of the reason they don't is that productivity. Not that they chose it personally for that reason, but e.g. historically enterprise code moved to Java and C# for related reasons.
(I also agree there are people that are equally productive in non-GC languages, or even more - different people have different programming styles.)
Enterprise code moved to Java (and later C#) for memory safety, period. The level of constant bugginess in the C++ codebases just made them way too messy and outright unmanageable.
The enterprise world moved to Java and C# because:
- It was a corporate language with corporate support, and that matters a lot in many environments.
- It had, at the time, one of the best ecosystems of tools available.
- It was the mainstream fashion of the time, and nobody gets fired for buying Sun/IBM/Microsoft, right?
Most companies (and managers) could not care less whether your program crashes with a segfault (unsafe) or a null pointer exception (safe). It's the same result for them.
Not in a security-related situation, it's not! And to a lesser extent, lack of memory safety also poses a danger of silent memory corruption. (Yes, usually the program will crash outright, but not always.) And it can be a lot harder to debug a crash when it doesn't happen until thousands of cycles after the erroneous access.
Sun and Microsoft wouldn't have built and pushed Java and C# in the first place if there hadn't been a real need for safer languages.
> Sun and Microsoft wouldn't have built and pushed Java and C# in the first place if there hadn't been a real need for safer languages.
Except there were safer languages before Java and C#: Ada, Lisp, the whole ML family... and none of them ever took off.
Java and C# have been successful because they were accessible and easy to learn (partially due to their memory model), not because they were safe.
As an aside, a beautiful side effect of that has been an entire generation of programmers with no clue about the memory model their language uses underneath, because "it's managed", because "it's GC"... without even realising that their graph of 50 million nested, mutually referencing objects will bring the GC to its knees in production. With the results we all know today.
Maybe, but remember that computers were very, very slow and had very little memory, so the GC's overhead used to be unacceptable (Emacs == Eight Megabytes And Constantly Swapping? I've seen it).
I think that Java came at the right time: when computers became fast enough that the GC overhead didn't matter (except where low latency matters).