
Yeah, but that code still sits in memory, occupies space, and takes non-zero time to load. It also takes memory away from the code that actually is on the critical path. Sometimes that doesn't matter (most desktop apps), sometimes it does (embedded software or system-level programming). Also, the RAM usage of quite a few kinds of software is determined mostly by code size, not data size (e.g. word processors, spreadsheets, CAD software, IDEs, compilers), and startup time is typically determined by loading and dynamic-linking time.


For most of the common template instantiations, the standard runtime's shared library provides you with deduped versions that are likely already in memory, and most linkers are smart enough to make that dynamic linking very efficient (see the sketch below).

True, for embedded systems this can still be a painful waste... But I'd argue if that is the case, you probably should be very explicit about using/not using STL libraries.
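
For reference, a minimal sketch of the dedup mechanism from the first paragraph, using a hypothetical widget template (libstdc++ uses the same trick for e.g. std::basic_string<char>; exact coverage varies by implementation):

    // widget.h -- the extern template declaration tells client code
    // not to emit its own copy of widget<int>:
    template <typename T> struct widget { void frob(T) { /* ... */ } };
    extern template struct widget<int>;

    // widget_lib.cpp -- compiled into the shared library; this explicit
    // instantiation is the single deduped copy everyone links against:
    template struct widget<int>;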


Don't most operating systems memory-map executables anyway? As long as the code is never paged in, it shouldn't make any difference.


Agreed, but if the code is used even once, it gets loaded into memory. So if you've got a complex_generic_container<T1, T2>, its code gets loaded for every combination of types used.
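
A minimal sketch of what that looks like (using the hypothetical complex_generic_container from above, with a stub body): each distinct <T1, T2> combination that gets used emits its own compiled copy of the member functions.

    template <typename T1, typename T2>
    struct complex_generic_container {
        void insert(T1 key, T2 value) { /* imagine non-trivial code */ }
    };

    int main() {
        // Three distinct type combinations: three separate copies of
        // insert() end up in the object code (visible with nm).
        complex_generic_container<int, float>  a; a.insert(1, 1.0f);
        complex_generic_container<int, double> b; b.insert(1, 1.0);
        complex_generic_container<long, float> c; c.insert(1L, 1.0f);
    }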


It gets loaded and then most likely paged out, particularly if your linker is profile guided (but even if not... rarely used code tends to get linked with other rarely used code).

It's fair that if you want to optimize startup times, that first page-in hurts, although generally with C++ it's more about the symbol-linking overhead than anything else (and there are a lot of strategies for minimizing that, but it sure isn't what happens by default).
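
One such strategy, sketched under the assumption of a GCC/Clang-style toolchain (the PLUGIN_API macro and names are made up for illustration): build with -fvisibility=hidden and export only what you intend to, which shrinks the dynamic symbol table and with it the loader's binding work. Linker version scripts and -Wl,-Bsymbolic are other common levers.

    // api.h -- compiled with -fvisibility=hidden, so only symbols
    // explicitly marked visible land in the dynamic symbol table.
    #if defined(__GNUC__)
    #  define PLUGIN_API __attribute__((visibility("default")))
    #else
    #  define PLUGIN_API
    #endif

    PLUGIN_API int plugin_entry(int argc, char** argv);  // exported
    int internal_helper(int x);                          // stays hidden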


There is still link time if the types are shared across shared libraries (which happens a lot).


There is also cache to consider.


There is only cache to consider! ;-)

Other than cache, there are plenty of tricks that can cure a lot of other ills, but cache is precious.


If you are paging, fixing that is a win on the order of fitting things in cache.


Sure, but that's really just another form of "cache". Before you hit that, you generally get TLB stress on top of the cache-line stress, so you're already hosed.

I have to say though, I've not seen too many cases where paging was caused by bloated object code size (lots of cases where it was caused by brutal bloat due to just outright bad code).

There was a time a couple of decades ago when it was a common issue with C++, but since then RAM has gotten a lot cheaper and C++ compilers/linkers have become much smarter (not to mention all compilers/linkers getting much smarter about code size). Sure, you can find pain points, but with a modicum of effort it's really hard to imagine a project thrashing outside its working set purely due to the compiler/linker refusing to drink from the water it has been led to...


Well, the original context seemed to more or less be someone saying "You're not really going to be paging anymore..." I suppose a more precise response on my part would have been "there are also other caches to consider", but meaning "processor caches" when saying "cache" with no other modifiers is fairly typical in my experience.

Upvoted anyway for precision.


I will concede that my "there is _only_ cache to consider" statement was largely tongue in cheek.

In all seriousness though, swapping due to code bloat (as opposed to actual runtime bloat)... I haven't seen that in ages. Caches though... like a samurai with an absurdly sharp blade... they kill you almost every time, and usually you don't even know you're dead.



