A RAM Edition of Dirty Coding Tricks (gamasutra.com)
303 points by edgarvm on Dec 17, 2017 | hide | past | favorite | 74 comments


A pretty common trick that was part of game programmer lore back in the PS2 / Xbox era was to have a large static array hidden away in some code file somewhere. When, days before shipping, you couldn't quite fit the release build into memory, this allowed a heroic programmer to miraculously 'find' a few hundred extra kilobytes by reducing the size of the array just enough to fit.

There was a less common variant of this for finding some extra performance by reducing the iterations of a loop doing no-ops somewhere in the main game loop.
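A minimal sketch of how both variants might look, with entirely made-up names (the innocuous comment is the real camouflage):

  /* Hypothetical sketch of the trick described above. The array looks like
     a legitimate working buffer, so nobody touches it during development;
     shrinking it right before ship "finds" a few hundred KB. */

  /* Reserved for streaming overflow -- talk to the lead before resizing. */
  static unsigned char s_overflow_buffer[192 * 1024];

  void *GetOverflowBuffer(void) { return s_overflow_buffer; }

  /* The perf variant: a tunable busy-wait buried in the main loop.
     Lowering SPIN_COUNT near ship "finds" a little frame time. */
  enum { SPIN_COUNT = 4000 };

  void BurnSomeTime(void)
  {
      volatile int sink = 0;
      int i;
      for (i = 0; i < SPIN_COUNT; ++i)
          sink += i;   /* volatile keeps the compiler from removing the loop */
  }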


Sadly this usually works only once, especially if it's one person doing it.

I used to work on the performance team at a Bay Area company. One of the things we did to keep our JavaScript bundle sizes under control was introduce a "ratchet". There was a threshold, enforced by CI, that you couldn't let the bundle size exceed without getting in touch with us first and figuring something out. [0]

This worked wonders for a while, until a few different teams were starting new feature development. At that point, the ratchet was forcing teams to pause their feature development to do cleanup work, which made the PMs very unhappy. Engineers got salty because they would remove dead code, only to find that another engineer had gobbled up the space they'd freed before they could land their own commit.

Engineers started working around the ratchet by hoarding dead code and disguising it to look "not dead" so that it could be easily removed later when a few dozen kilobytes were needed.

There were many things that weren't good at this company, and the culture around ownership of the shared codebase was definitely one of them. I'd like to think that there are plenty of teams that don't have this problem, but I'm inclined to think it's human nature to subvert these sorts of things by default.

[0] This was necessary because of the volume of tech debt. Teams/engineers had a bad habit of building new things, then not cleaning up the stuff the new things replaced. At one point, we estimated that over a third of the JavaScript was dead code. Some teams had gotten to the point where the codebase contained >2 versions of their product, while only one of them was physically accessible to users.


Back in the era where this was somewhat common on games it only had to work once. In those days you burned a master for a console game and once it shipped that game was done. There were no zero day patches (or any other kind of patches) and no updates to the game once it shipped.

Often it would be the lead programmer on the game who put this array in and it wouldn't necessarily be known about by everyone. It wouldn't have worked well if people were constantly grabbing bits of memory from it during development. It worked because it was a 'secret' and its use was reserved for shipping.


> Engineers started working around the ratchet by hoarding dead code and disguising it to look "not dead" so that it could be easily removed later when a few dozen kilobytes were needed.

Sounds like an example of the cobra effect[1]

[1] https://en.wikipedia.org/wiki/Cobra_effect


Not only in game programming.

Chet Haase & Romain Guy also mentioned in their Android history keynote that the kernel team did the same thing: they had hidden 20 MB.

https://www.youtube.com/watch?v=rimXGaUdaLg&t=773s


I used to be a huge gamasutra reader so I've read the postmortem that described this.

I was under the assumption that it was more of an apocryphal story that didn't actually happen. I mean, how can someone hide a static array or no-op loop from a team of 20+ programmers in a game codebase?

As far as I'm aware, it was never confirmed and seems a bit too far-fetched to be real, but then again.... :)


I've seen it done. In those days programming teams were usually smaller and this would be something put in place by a lead (usually a grizzled veteran of shipping multiple titles) and not advertised to the entire team. It might be hidden away somewhere like the core memory manager which would be owned by that lead and not generally touched by anyone else without consulting them. If someone happened to find it when running a memory profile they'd probably go and ask the lead about it and be let in on the 'secret'.

I actually found one of these when I was pulled in to help a title ship: it had been put in for a previous title in the franchise and forgotten about when the lead moved on. I found it when memory profiling, and after talking with a few people we figured out what it was there for, and I got to be the 'hero' who found some extra memory to ship.


> If someone happened to find it when running a memory profile they'd probably go and ask the lead about it and be let in on the 'secret'.

Perhaps the obvious question here is "hidden from whom?" Blinding the entire dev team to the trick might be possible for the lead, but it sounds like a pain.

But squirreling away a bit of memory that won't be announced to PMs/artists/designers/producers? I can certainly picture a couple of programmers realizing that the person calling the shots intended to push them to the absolute limits of "what will fit", and deciding to fudge those limits. Hell, hedging is common practice for programmers and freelancers today when they anticipate bad requirements, and everything I know of game development's history says the problem used to be much worse.


It's pretty normal for game assets to follow their own version of Parkinson's Law https://en.wikipedia.org/wiki/Parkinson%27s_law and expand to fill the memory / performance budget available (usually at least +5-10%). That can be just a consequence of artists and designers trying to make the best game they can within the budget available to them and so isn't particularly a bad thing. That's why an experienced programmer building in a 'secret' extra buffer for use around ship became a thing.


Yeah, I've seen something similar done to the art budgets, but hiding memory from programmers is impossible if they had any clue, especially on PS2/Xbox (32M/64M RAM). You could read the .map file in 5 minutes and see anything suspicious. Same with delay loops - good luck with that. Though I've observed some "miraculous" recoveries due to utter idiocy. E.g. a PS2 game using double floats (because your time calculations will break after a few hours with a 32-bit float, duh!). The CPU had zero double support, so they were all done in software, and the library for them was rolled into the standard math lib. Or some incredibly misguided GPU programming done to a "non-lead SKU". Nobody notices that one system runs 5 times slower than another until it's close to shipping and you try to bring it to QA.
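The float part is easy to demonstrate, by the way: accumulating a per-frame delta in a 32-bit float visibly drifts after a few hours, because the rounding step of the running total keeps growing. A small illustration of the accumulation problem (not code from any actual game):

  #include <stdio.h>

  /* Accumulate a 60 Hz frame time for 8 hours in float vs. double.
     The float total drifts because 1/60 s is rounded ever more coarsely
     as the running total grows. */
  int main(void)
  {
      const double dt = 1.0 / 60.0;
      const long frames = 8L * 3600L * 60L;   /* 8 hours of frames */
      float  tf = 0.0f;
      double td = 0.0;
      long i;

      for (i = 0; i < frames; ++i) {
          tf += (float)dt;
          td += dt;
      }
      printf("float:  %f s\n", tf);
      printf("double: %f s\n", td);
      printf("drift:  %f s\n", td - tf);
      return 0;
  }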


The point was not to hide memory from programmers who "had a clue" if they went looking. During production, most programmers on the team would be focused on delivering features they were responsible for and fixing bugs. If they were doing perf or memory profiling it would mostly be focused on whatever feature they were implementing. Only a few people on the team, perhaps only the lead, would be looking at global memory usage / perf on a regular basis and even then they'd mostly be looking for regressions. If anyone did happen across this they'd probably go talk to the lead (there might even be a comment directing them to do so) and be let in on the 'secret'.

At the end of production coming up to release is when there'd be a wider focus on general perf / memory usage to get everything to fit for the final release build. In my anecdote above, the reason I happened to find an example of reserved memory was just that I happened to be the first person to go looking in the right place with the right tools, not due to any particular skill or experience on my part (I was pretty junior at the time, this would probably have been around 2004 towards the end of the PS2 / Xbox console generation).


I don't buy it. If you need to shave some bytes the first thing you do is look at the map file or run a binary analysis tool to figure out what the best candidates for optimisation are.

And if your performance is poor the first thing you do is add some debug code to measure what is taking too long and where the biggest gains can be made.

The whole point of these tools is to make this stuff easy to find.


Noel Llopis is quoted here as saying it did happen: https://www.gamasutra.com/view/feature/132500/dirty_coding_t...

I'd assume that in an era where lots of stuff was global and code was messy and convoluted to fit onto a console, no one would take issue with yet another global variable if it was given a clever enough name suggesting some buffer, temporary, cache, etc., and compilers didn't yet warn about unused variables or remove them (which is an easy fix in C anyway, with a single totally portable line or a compiler-specific directive).
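For the curious, that "single totally portable line" can be as simple as referencing the buffer once so it counts as used; a purely hypothetical sketch:

  /* Hypothetical sketch: keep an "unused" reserve buffer from triggering
     unused-variable warnings or getting stripped, using only portable C. */
  static char s_reserved[64 * 1024];

  /* The single portable line: taking the address makes the array "used".
     A compiler-specific alternative would be an attribute like
     __attribute__((used)) on toolchains that have it. */
  void *g_reserved_ref = s_reserved;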


I have seen it mentioned here on page 4: https://www.gamasutra.com/view/feature/132500/dirty_coding_t...

There's yet another article on dirty game hacks that I recall due to EULA self-hack: https://www.gamasutra.com/view/feature/194772/dirty_game_dev...

I have no idea why this article links only to the 2017 one and not to these two, because it's not like these are current-generation tricks; it's just that Brandon Sheffield took to writing about them again (one of the articles above is actually his, and the other one is by GDM, which he is editor-in-chief of).

Also, the Crash Bandicoot story is the most impressive one and the first in this article, but it has been widely circulated on several sites for a while now (including on HN once), despite the "nobody was the wiser until now" at the start of the article.


The one on the last page about using a DexDrive was pretty cool. I used a similar technique to create PS1 save files that would trigger an exploit on the PS2 that would load unsigned code. I miss being able to do stuff like that. Modern systems are so locked down.


Hah yup, we kept 15M tucked away on the last game I shipped. Came in handy for demos more than anything else.


"When finishing a level, the game would reboot the console and restart itself with a command-line argument(the name of the level to start) ... into the next level. Voila, the perfect(?) way to clear all memory between levels."

Well, that's basically how most websites work, lol


Serverless architecture right there. That, or PHP and similar tools - at least back in the good ol' days. But functionally it's the same: a PHP script runs the whole application for the duration of one request, then everything is wiped and the whole thing is restarted for the next request.

I mean it can work and it's a viable tactic if startup time is fast enough.


As long as the process is acceptably fast, nuking & restarting (with an appropriate amount of isolation) is a fine approach to many things that would be harder or less efficient if done the "right" way. Memory pools and "let it crash" architecture in Erlang come to mind; and there's a practice sometimes seen in Forth, where you ensure that the codebase itself stays small enough that it can easily be rewritten, for example if requirements change enough that incremental development would be harder.


I do it like that on mobile too.

New screen? Okay, lets throw everything away.

As long as performance is okay, I don't keep stuff around in memory.


Whenever I think, "Thank goodness the limited memory days are behind us", it pops up again and again. Sure, you can buy a new iMac Pro with 128GB of RAM(!!) and smartphones regularly have 8GB available, but the increasingly popular IOT devices and smart consumer hardware (like streaming media boxes, etc.) try to limit the BOM cost and thus limit memory as much as possible. Tiny memory leaks become an issue, or random crashes from wonky media codec implementations, etc.

I think the skills (and hacks) that used to be useful only to game developers and OEMs are now going to be needed by a much wider audience of devs.


> Whenever I think, "Thank goodness the limited memory days are behind us", it pops up again and again. Sure, you can buy a new iMac Pro with 128GB of RAM(!!) and smartphones regularly have 8GB available

I think you're underestimating the accretion of software bloat. You remember the days when you did exactly the same things, exactly as fast, with 1GB machines? 512MB machines?

As long as devs don't give a fuck (er, make the "professional decision" of optimizing dev time over product quality), the days of limited memory are not going to be behind us.


I always think about Windows Vista when I hear "unlimited memory". Somebody looked at expanding memory and processing power, and decided "let's have an animated 3D background for the OS, that'll be good". Memory is always going to be pushed to its limits unless devs and users alike go out of their way to keep usage in check.


What's the point of having more memory if not for new features?


In general, nothing.

For example, Chrome eats pretty much all free memory keeping tabs loaded, but surrenders it gracefully when needed. That's a great use of memory.

But for an OS, and specifically for a feature that doesn't back off unless the user goes and manually changes it, it's obnoxious. The OS is inherently a support layer for the things the user opens by choice; I'd argue any expansion of its resource footprint ought to have a clear justification.


In an Operating System, that ‘more memory’ could be used for applications instead of bloat.


Sorry, I don't trust myself or you to exercise restraint. The ONLY thing that keeps memory usage in check is physics.


Yes, man, and kids don't understand or perceive software bloat at all ;(

I remember coding while listening to music and reading docs somewhere using a P166 MMX with 32 MB of RAM, and that ran quite snappily using Debian and Fluxbox.

I now code while listening to music and reading docs somewhere using an i7 with 6 GB of RAM, and sometimes I'm swapping because I left Google Chrome open for too long :(


It feels like that dev time cost is instead being crowdfunded with the energy, disk, CPU and RAM that every user has to pay with, and eventually that cost is paid by the Earth itself. An app might be cheap or free, but in the long run an app costing a dollar or two more per install would be cheaper for the planet and for each user. I was using a 2011 laptop with only an HDD, 2 gigs of RAM and no discrete GPU until it finally went and broke last year, so it irks me especially when I see this handwaving of "computers are cheap", especially from Western millennials or developers from SF. The fact that well-off people who change machines every few years can even dare to call less well-off people with older and shittier hardware "entitled" for wanting performant, snappy software just like they got 5-10 years ago when they bought their machine is baffling. Not everyone needs a crazy new machine: writers, reviewers, sales people, admins, etc. Case in point - G.R.R. Martin uses a DOS machine to write - https://www.youtube.com/watch?v=X5REM-3nWHg .

E.g. Slack and Atom got absolutely lambasted for performance, sluggishness and resource use (while VS Code was applauded, so it's clearly not an Electron-specific thing) despite being made by companies valued in the billions and based in the most expensive region of the world, one of them even being a paid product.

Or a game with pixel art graphics (I do like it, and I understand that particular indie dev optimizing for time with such a niche product, so I don't want to name names here) and gameplay only as deep as some of the better Flash games from the mid 2000s lists several GBs of RAM and disk space as its minimum system requirements (for comparison, Doom 3's recommended spec, not even its minimum, was 512 MB in 2003).

Or when a graphically simple 2D game requires a 64-bit OS (despite seemingly using no 64-bit features), a non-integrated GPU (and not because of some lack of OpenGL features but due to poor optimization) and runs at 30 FPS on an integrated Intel that has no problem with Minecraft at a really far draw distance. And it attempts to load hundreds of files (all of the game assets for an entire 4-10 hour long VN) at boot, taking 30 seconds on an HDD. They could be loaded incrementally (loading only what is needed right now and everything else in the background, even dumbly and fully into RAM as it does now) or packed into SQLite or a ZIP to avoid so much FS access, but no - hundreds of files are opened at game boot and there are tons of XML assets with zero compression or minification. But instead the solution to performance woes (in gaming especially, but through things like Electron it's seeping into the mainstream) is apparently to "git gud", "stop being a poor pleb" and get a new GPU (apparently a GTX 950M is a potato-level GPU now and only an idiot would play games on it in 2017) or an SSD, so that the developer doesn't have to bother with the tiniest of optimizations.

That 2D game loading all assets at once, wanting a 64-bit CPU and a non-integrated GPU, all for no good reason, was Tokyo Dark by the way, and given the way the developers carry themselves I have zero problem name-dropping them. I made an entire video about that game; the disk and GPU part is at 15:15: https://www.youtube.com/watch?v=sCXwgPJGLIE

It feels like what was done with Crash Bandicoot is interstellar, Death Star level technology compared to what some developers do - not even bothering to pack files to reduce FS chatter, load smartly or compress textual assets. They probably developed it on an SSD, it loaded fast enough for them, so it's done and primed for shipping, duh! Just gotta write some hype text about how extensively we tested it and how much effort we put into making it!

I realize I sound like an ass that's ranting and I'm writing at too great a length (I did think about writing articles instead of lengthy HN comments like this one, so if someone is interested feel free to hit me up), but some of this stuff just blows my mind in ways I didn't know existed.

It's not even optimizing for dev time, like Python could feasibly be, but sometimes outright waste or lack of basic care. E.g. Slack was apparently launching a full-blown browser instance per organization until recently (or something like that), completely needlessly - that part has since been fixed. At the same time they had this crazy involved (and cute, because it's 2017 and things must be cute) error page: https://slack.com/asdsad , or there's that semi-notorious reply article from a guy using the unix CLI instead of hip BigData(tm) tools to analyze a relatively small amount of data (yes, the guy rubs it in a bit too hard when he brings out mawk): https://aadrake.com/command-line-tools-can-be-235x-faster-th...

That lack of care is evident in other areas too; e.g. in security it manifests as these SQL injections, IoT botnets, outdated-software pwns and plaintext/unsalted+sha1 password debacles. Afterwards it gets justified by "state attack, China or Russia probably" or handwaved with "we store passwords in plaintext to send them to the user via email when they forget them" (an actual explanation I read once..) or "we innovated so fast to deliver a SUPERB customer experience that we didn't focus on security" (while 'security' in that case would amount to closing an admin port on an IoT appliance, for example..). In general software we also get stuff like that TP-Link repeater (recently on HN) that needlessly queries NTP every 5 seconds, squandering hundreds of megs of transfer per month and basically DDoSing the NTP servers.

It's like this entire mentality that good stuff is too hard or too complicated or too expensive to do (like that chess guy and his "clever multi-threaded application"), while Pareto is very much in effect and even something as small as not opening a hundred files at once at game boot, reading the dense man/info pages, thinking for 20 minutes about the problem at hand or doing back-of-the-napkin math could make a big difference. 10 or 20 minutes or hours of dev time per year is not a big enough reason to squander resources so badly. There is an expression in Polish that seems really apt for developers who "optimize" their time to that degree: korona ci z głowy nie spadnie (the crown won't fall off your head - basically, exerting a little effort towards something isn't too much to reasonably ask or expect of you).

I recall a similar event when someone wanted to stress test something on a webserver and had a file with a few million URLs in it. He did a "while read line; curl $line" loop in bash, and it brought his local machine to its knees, probably due to the rapid process creation and destruction. I gave him an xargs invocation with -P and -n to launch a single curl per 100 URLs instead, and it ran no problem; this time the webserver we were testing was brought to its knees from my much weaker laptop (the weakest in the company actually, since I wasn't a programmer and didn't need a strong one), as intended. I'm actually guilty of overengineering myself, since my first try was a Python 3 + requests + grequests script, and only weeks later, when I forgot where I put the script and didn't want to rewrite it, did I run that xargs version (a very Taco Bell-esque solution actually - https://news.ycombinator.com/item?id=10829512 ). And that's an anecdote, but it feels like people (actual 'professionals' making a paid product and working in $billion+ corps) ship stuff as bad as the original one-curl-per-URL script as if it's no big deal, and then it gets justified with some handwaving: "focus on features and not performance and security", "no one is gonna hack a toaster for anything", "computers are fast and cheap", "optimizing for dev time", etc.

It's a typical high-volume, low-margin situation - like Steve Jobs once argued during the original Mac's development: improving a load time by even a few seconds saves lives, because so many people will use the Mac so often that the savings add up to a few lifetimes.


Overall I mostly agree with you. However, I doubt efficient programming will only add $1-2 per app in development costs. For better code, you need better and more programmers and more time and money. And excellent programmers don't grow on trees. There is a limited number of them, so they're really hard to get (even if you have money).

If you're a company owner, which path will you go down? 1. Adding features less frequently, costly development, more people needed, but highly efficient code. 2. Frequent feature updates, cheaper development, fewer people needed, but a shitty code base.

Even if you're brave enough to go for 1, there will always be a competitor with the option-2 attitude that will crush you into oblivion.

In the case of game development, there is the Duke Nukem Forever example. They tried to perfect it, changed the game engine twice, but the release took them so long the game looked dated anyway.


How much time and cost do most of the things I listed add? I mean really.

Building a 32-bit exe of a game that uses no 64-bit features, packing assets up to avoid FS chatter, loading lazily, closing ports on an IoT appliance, not abusing NTP like TP-Link does, not pasting raw user input into an SQL query, having a dedicated security team that monitors all tech deployed in the company 24/7 for outdated versions of software?

These things are absolutely basic; most are one-time efforts and the others are completely achievable. None of them require any degree of excellence. This is not about excellent code, this is technology 101. There are trade-offs to be made, like IDEs in Java vs. native ones on look and feel, features, startup speed, snappiness, etc., but there is no trade-off in a situation where a program does less stuff, does it in a worse way, and does it slower while taking more resources.

Look at the amounts of money Equifax operates with and how sensitive the information they handle is, and try to tell me again with a straight face that skimping on security and running outdated software was all okay because if they did better they'd be crushed into oblivion by costs and competition. And now there are already articles pointing at China with evidence as flimsy as "a Chinese security blog reported the vulnerability the day after it was patched by Apache, and a week later Equifax got hacked".

Or explain to me what TP-Link is doing, and why, with its repeaters querying NTP every 5 seconds (which actually takes more development effort than doing nothing would).

Or the recent failures at Apple, like the password being stored in the hint field, which shipped despite the (supposedly) stellar QA and polish that justify the high price of their products.

This fail talk reminds me of yet another crazily negligent story. There was (and still is) an online shop in Poland that was once doing some "adjustments" on a world-facing machine (which was supposedly not available from the internet because high traffic had caused the hosting provider to take it offline... I don't get it either, the language and concept described are murky). Someone accidentally removed index.php (by renaming it to inedx.php), the web server had file listing enabled, so what was shown was the webroot file listing - and in it was a textual backup of the entire DB containing real names, phone numbers, delivery addresses, plaintext passwords and email addresses. It was of course accessible to the web server, so all that separated you from the data of 65 thousand people was a single click... The company of course bullshitted and gave a 20% discount to everyone affected after lying for 4 days and saying they had "experts working on it"... They are also quoted as saying that "users agree that all their data is public when they sign up" (about real names, phone numbers, addresses, etc., despite the fact that their terms and conditions said all data is used only for order processing and never made available to anyone..), but that part is murky and might have been a hoax. I'm not aware of anyone going to jail over this and the shop is evidently still open for business. Here's an article (I do not have an English one) if you're interested: https://niebezpiecznik.pl/post/kupiles-papierosa-przez-inter...

Tell me that stories like these are not absolutely surreal and that you'd never do as badly personally (I mean really - all it would take is visiting the website you just edited to see if it's okay, and noticing the file listing, the missing index.php, etc.). I wouldn't believe such a multi-layered fail story (file listing on, removing index.php, plaintext passwords, DB dump in the web root and accessible, the way they didn't do responsible disclosure, etc.) if someone told me - it's too outlandish - but it's also, evidently, true.

A university teacher would have crushed me into oblivion if for homework I had submitted a web app vulnerable to SQL injection because "no one will guess to do that and it's illegal anyway", and that stored plaintext passwords as a "reminder feature". But I would just not submit something that bad in the first place, and as you can see I am not coy and can stand my ground if I think something is right. But in the real world both happen, and then people scream China.

Even just recently someone had a laugh here in the comments under the Mirai story about how it was assumed (as always..) to have been China, Russia, North Korea, etc., and then it turned out to just be a few really smart Minecraft kids plus millions of devices with Swiss-cheese security out in the world.

Duke Nukem Forever is a very special case of development hell; it doesn't exonerate games that don't even care. I have played games on my old laptop with no real GPU, including Unity3D ones - it's not the tool, it's how it's used. Today I can't play a 2D VN I paid for on an integrated Intel GPU, and that's somehow okay.

I've already spent too much time replying to you and the "hurr durr we cna't all use cppluspluz!" gentleman/madam below. I won't be reading any more replies here; if I didn't convince you then nothing will (short of getting burned yourself by some company leaking your data in a dumb way - hopefully not).


That's a trade off. Have you ever estimated a game development budget? Sure, we can write our own engine in C++ and be running smoothly on 7-year old machines - which, theoretically, can bring some additional sales. But we can also use the money that C++ engine developers cost to hire much cheaper mid-level devs with a typical industry engine (Unity/Unreal), invest that time in additional polish/iterations and get a much better ROI.

And quite more often, it's a choice between doing a game using a modern engine or not doing the project at all.


No. There is no trade-off. This game is a VN, a very simple genre; I could easily reimplement the engine in C++ and Lua, and I do not consider myself a superstar. And I do not need to be Gordon Ramsay to be able to tell I've been given a plate of shit.

The C++ part is a complete strawman; nowhere did I say that everyone must now use C++ only. Unity, Unreal, Python, Electron, et al. are not the problem here; the problem is bad practices and laziness. I ran simple and free 3D Unity games like The Very Organized Thief on that years-old integrated GPU; today I can't run a 2D VN I actually paid for, and that does nothing graphically stunning, on a much newer and more powerful integrated GPU. Because reasons. Loading the assets of the entire game all at once in a blocking way and storing them in loose files instead of packing and compressing them is not generating ROI or saving time - it's plain dumb and lazy. This is not okay in my book, and if it is in yours then you are part of the problem. If you opened a hundred files one at a time in C++ the result would be exactly the same, and the fix is to actually test it and improve it, not decide it's good enough on your SSD and slap a '1.0 Gold' label onto that build.

It's absolutely not polished either. If you'd watched the video you'd know how the "voice acting" amounts to ca. 70 seconds of gasps and hellos, how translation support and controller support have been cut with zero ETA on them, how the developer ignored my questions about these issues, how new game plus is semi-broken, how lacking the VN features are (not even a dialogue history), and how clicking too fast can break the dialogue system (in a VN, for crying out loud - a genre where you do nothing but click through dialogue). There are also a few writing mistakes that slipped through.

The game went through Square Enix QA (via their indie program) for months like that and took 2 years and about 200 000 euros to develop. There is still no promised Mac build despite them using a "portable, no code, HTML5" engine. I will not buy any excuses at this point, especially after being radio silenced while the developers take time to make cute replies to positive Steam reviews, and I will advise everyone to stay away from that borderline dysfunctional developer.

I have made plenty of sacrifices in the past, running really heavy IDEs even on my old laptop because the features they provided justified the cost and the lag compared to vim, Notepad++, etc. When I pay for a product (or even get one for free) and it does less, does it worse and also uses more resources - that's not justifiable, that's bullshit that needs to be called out and stopped. Clearly not everyone is cut out for tech-related work, and with all the open and free tools the doors to the tech world are open wider than they ever were - wide enough to require at least a bit of decorum from everyone who gets in, not to require users with 3-7 year old machines in completely working order to throw them out.

I can also forgive actual indie developers (who do not have a pile of cash and a corporate backer taking care of promotion and "QA") a lot, but there is absolutely zero excuse for these guys, or for Sony getting SQL injected, Equifax handling people's data the way they did, TP-Link misusing public NTP servers, IoT devices and drones having their ports wide open, etc. Zero (greed, sloth and the other cardinal sins are not valid reasons).


I think the problem is twofold. First, there's this ideal that you should be working on "hard problems", which for developers means working on things one step above their current competency. Well, that is great if you're doing research, but if you're shipping something or doing anything in production, you want to hire someone for whom the problem is easy, not hard. You don't want the wild-eyed fresh graduate with 'crazy' ideas; you want the grizzled old veteran for whom this sort of stuff is old hat and boring because they've done it a million times. The first solution you come up with to any problem is never going to be the best solution. It's only when you've solved the same problem a few times that you get better at solving it.

The second problem I see is that of 'free' speedups. If you get a free speedup from hardware tech (like SSDs), your thought is never going to be "how can I match this speedup with my own code optimization"; it just means your production time is now cut in half, or you can go focus on other things. It's only when you're forced to come in under a certain performance budget that people bother to optimize. As it is, this only seems to happen in fixed-hardware situations like console games, embedded systems, etc.


My point is about game development in general, not one specific game. The fact that, in general, you make a trade-off between performance and features doesn't change the fact that there are actually bad games made by bad developers out there - in fact, I usually work in legacy codebases and have seen many examples first-hand.


Thanks for so many good examples and generally a decent writeup. You say you've considered writing articles; I'd say it's a good idea.

Favouriting this comment for reference the next time I have to bring this topic up here.


I might. I'll inform you if I do but I generally try to avoid controversial topics like these.


> Case in point - G.R.R. Martin uses a DOS machine to write

George RR Martin used a DOS machine to write. I'm pretty sure GRRM does not actually write anymore.


To add on to this, I am currently working on a project where the only source of persistent storage is [0], which offers 64B of general purpose SRAM. It is a $0.70 low power clock with built in support for battery backup.

Our main processor has more memory, but that goes away when we lose main power. Still, 64B should be enough for everyone.

[0] http://www.mouser.com/ds/2/268/20005010F-737592.pdf


I love tiny things like that, there's already so much you can do with it.

I wrote a thing way back when, probably for a TI chip or something, which had like 300-ish bytes of memory. It got commands over serial (from a Bluetooth module; the commands were sent from a Java application on a laptop); the protocol was a simple self-made thing consisting of a command (two numbers) and an argument (like "set speed to 10"). That then controlled a PWM for the engine and things like that. An RC boat with a few hundred bytes of memory.

I'm reinventing all that now using Raspberry Pis and Arduinos and such. I just got an ESP32, which is a $5 device that should have wifi and bluetooth and such built in already.


You'll still get an increase in the number of things you can have loaded into RAM at once that's proportional to the reduction. That'll give you far more leeway when trying to load resources on the fly, which most games still struggle with. So reducing memory usage is definitely something you want to do regardless of how much you have.

Also, if it's a PC game, you're looking at memory limits far below what the HW may offer - at least if you want it to run without annoyances. Windows limits the amount of memory it'll allow to be committed to the DRAM size + (current) pagefile size (may only be 4-8GB on SSDs); note how that doesn't include GPU memory - and allocating that counts as commit due to how residency works. Then you're looking at the OS eating 2-3GB of that, plus 1+GB if the user has a browser opened, plus anything slowly leaking memory. And a LOT of recent Windows applications leak memory or actually use insane amounts (including, amazingly, a built-in service on Win10) - if they never access it again it'll eventually get evicted from the working set, which means it only counts towards committed memory; task manager doesn't display per-process committed memory by default. That's only a fraction of all the problems.
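If anyone wants to see the commit limit described above (roughly RAM plus the current pagefile), GlobalMemoryStatusEx reports it; a quick sketch:

  #include <windows.h>
  #include <stdio.h>

  /* Print the commit limit (roughly RAM + current pagefile) and how much
     of it is still available - the number that bites a game long before
     physical RAM runs out. */
  int main(void)
  {
      MEMORYSTATUSEX ms;
      ms.dwLength = sizeof(ms);
      if (!GlobalMemoryStatusEx(&ms))
          return 1;

      printf("physical RAM:     %llu MB\n", ms.ullTotalPhys / (1024 * 1024));
      printf("commit limit:     %llu MB\n", ms.ullTotalPageFile / (1024 * 1024));
      printf("commit available: %llu MB\n", ms.ullAvailPageFile / (1024 * 1024));
      return 0;
  }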


I still use things like struct packing on a regular basis to reduce bloat. Just because systems have absurd amounts of ram doesn't mean that Chrome won't gobble all of it. If I can get my application's memory down to just a few pages (4k chunks on x86) then I'm much easier to page in after they switch back from Chrome. I don't have hard targets but I do still make an effort here because it's worth it not to get bugs about my applications being slow because of other bad actors.
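For anyone who hasn't played with this: simply ordering members by size often shaves off the padding without any pragmas. A trivial sketch (sizes assume a typical 64-bit target):

  #include <stdio.h>

  /* Same fields, different order: the compiler inserts padding to keep
     members aligned, so mixed small/large ordering wastes bytes. */
  struct Wasteful {        /* typically 24 bytes */
      char   flag;         /* 1 byte + 7 bytes padding */
      double value;        /* 8 bytes */
      char   tag;          /* 1 byte + 7 bytes tail padding */
  };

  struct Packed {          /* typically 16 bytes */
      double value;        /* 8 bytes */
      char   flag;         /* 1 byte */
      char   tag;          /* 1 byte + 6 bytes tail padding */
  };

  int main(void)
  {
      printf("wasteful: %zu bytes\n", sizeof(struct Wasteful));
      printf("packed:   %zu bytes\n", sizeof(struct Packed));
      return 0;
  }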


  When finishing a level, the game would reboot the console
  and restart itself with a command-line argument
  (the name of the level to start) ... into the next level. 
  Voila, the perfect(?) way to clear all memory between 
  levels.
OH MY

At 22 years old, this is one of those moments where I'm in awe of the strange issues and workarounds that existed merely ~5-10 years ago which I'll probably never have to deal with. Very funny!


Maybe you never had to physically blow air into a video game cartridge to make it work either, ha.


I bought a nintendo switch with zelda, and the cartridge wasn't detected the first time I put it in. Took it out, blew a bit on it, back in, working perfectly! Some things never change.


It's Nintendo. I wouldn't be surprised if they designed cartridges this way on purpose now.


You'll run into today's equivalent soon enough :)


Now I'm looking forward to the first website that decides it needs to restart Chrome between page changes for me...


As a web dev, I've often at least considered refreshing the page in an SPA due to some bug with client-side state—I guess that's the modern-day equivalent.


Some web sites make Chrome freeze my LG, forcing me to do a hard reboot to get my phone back - does that count?


Amazon's Lambda is probably similar enough.


Problem solving within a constrained system. It's fun stuff, and I always find it amazing how adversity, and particularly time pressure, can bring out the ingenuity in people.

I guess wartime inventions are possibly an extreme example (google Hobart’s Funnies some time for some good clean ingenuity), but shipping console games to a deadline seems to have a similar effect.


RAM usage still matters a lot. Not only because of consoles, but also because RAM usage often translates to storage bandwidth and CPU-GPU bandwidth.

Some of the tricks still apply today. For example, modern GPUs support all kinds of weird texture formats. Here’s a link for PC: https://msdn.microsoft.com/en-us/library/windows/desktop/hh3... Other platforms have conceptually similar stuff.


> Ultimately Crash fit into the PS1's memory with 4 bytes to spare. Yes, 4 bytes out of 2097152. Good times.

Wow. I am absolutely blown away by this.


I've worked on a couple of projects that only had a few bits of free ROM left. Not really all that surprising, you stop optimizing for space once it fits.

One of the projects was heavily squeezed to make it fit, the other didn't need much squeezing, but you couldn't tell the difference by looking at the free space.


If you like that, you have to read this (in 13 parts): http://all-things-andy-gavin.com/2011/02/02/making-crash-ban...


Here's another interesting story from the DOS days:

https://blogs.msdn.microsoft.com/larryosterman/2004/11/08/ho...


Makes me wonder why Rust isn't popular with game devs. Wouldn't Rust's borrow checker and resource management at compilation time make most of the described memory reclaiming issues obsolete?


> Makes me wonder why Rust isn't popular with game devs.

Too new to have a mature gamedev ecosystem. I only finally started tooling around with rust when someone tweeted they'd gotten it working on the PS4, but that still puts me in the early adopter boat.

> Wouldn't Rust's borrow checker and resource management at compilation time make most of the described memory reclaiming issues obsolete?

No. It might help you verify that the systems you're building to tackle these problems are correctly implemented, but it's not going to help you tackle the fundamental problem of e.g. "this game has more 'active' data than we actually have the RAM for".

In theory, this is the problem that virtual memory and page files solve. In practice, occasional small 50-100ms stalls which might be acceptable for a text editor are really nasty for a real time twitch and flow based game. 60fps gives you a frame budget of 16ms - if you want your animation to remain at a fluid 60fps, your absolute max frame time is 16ms.

So, games will end up writing their own memory management systems to e.g. partially load and unload textures at runtime. They'll build in knowledge of your level layout to try and prefetch textures before they're actually necessary, based on where the player is moving, so they're available by the time the game needs them - and they might choose to render using a lower resolution version of a texture that was kept in memory if the high resolution version wasn't loaded in time, instead of stalling the game.
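The fallback in that last sentence tends to look something like this in spirit - a hypothetical sketch, not any particular engine's API:

  /* Hypothetical sketch of the mip fallback described above: the renderer
     binds the highest-resolution mip the streamer has finished loading,
     instead of stalling the frame waiting for the full-resolution data. */
  typedef struct {
      void *mips[12];          /* mips[0] = tiny mip kept resident at all times */
      int   highest_resident;  /* updated by the streaming system */
  } StreamedTexture;

  void *PickMipForRendering(const StreamedTexture *tex)
  {
      /* Never blocks: worst case we draw with the low-res mip loaded up
         front, and better data gets swapped in on a later frame. */
      return tex->mips[tex->highest_resident];
  }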


Rust is definitely being looked at. The reason there are no big projects is that it's still so new. For example, the ability to write a custom allocator was still going through the RFC process last I looked. Further, it takes many man-years of effort to write an AAA-level game engine, so there is quite a lot of inertia preventing moving to something new. We're at the phase where people are experimenting within a relatively immature ecosystem. We're probably a year or two away from a big project using Rust for part of their development process, and at least four or five from a large project written from scratch. There are quite a few hobbyist and open source game projects though.



Awesome, it'll be good for there to be a commercial game project out there written in Rust when this gets released.


One reason is SIMD. Videogames contain quite a lot of math that benefits a lot from SIMD, be it MMX, SSE, AltiVec/VMX or NEON.

Another one is the borrow checker. Games have been heavily multithreaded since the X360. Game state + game assets = a huge pile of data shared across threads. The borrow checker in Rust makes it harder to write parallel code that operates on that large shared state. With small state and/or plenty of RAM you can afford to make copies, but games can't afford that. Sure, Rust brings value here by eliminating a class of bugs, but for game clients (servers are a different story) the optimal safety/resource tradeoff is in a different place than e.g. for a web browser.

Finally, games often use many third-party libraries and frameworks (sometimes called middleware), and due to historic reasons majority of those are C or C++ libraries.


Rust has no issues sharing state across threads without jumping through too many hoops, as long as you can convince the borrow checker that only a single thread at a time gets to mutate state, and that when a piece of memory is accessible from multiple threads it's not mutable.


> and when a piece of memory is accessible from multiple threads it's not mutable.

In game engines, these pieces of memory often need to be mutable by multiple threads. E.g. see this old Intel article for a high-level overview of how it might work: https://software.intel.com/en-us/articles/designing-the-fram... Now with DX12 / Vulkan it became even worse, because with them we no longer have _the_ render thread.

Sure, doing that the dangerous C++ way can introduce bugs. But the risks are different.

If you're working on a web browser (that's what Rust was created for), the worst thing that may happen is that a compromised web site you're viewing (maybe even reached by accidentally clicking a link) infects your PC, steals all your data and turns your PC into a botnet node.

Most games can’t technically fail that bad, due to the following reasons (1) Most games don’t allow users to create content, only the developers do (2) Modern console games work in a hardware-assisted hypervisor (3) Some platforms even verify digital signatures of all the content they load.


I've only written a bit of Rust, but from what I've seen Rust makes the first 90% of optimizing for space-efficiency easy, but the next 90% (stuff like allocating from static buffers instead of on the heap, reusing/type-punning memory, tagged pointers, clever poking at the hardware/OS internals, etc) very difficult.


You can't do resource management at compile time for games that have 50 GB of assets. Also, Rust may help with accidental memory leaks, but you still have to deal with things like heap fragmentation.


Interesting point. Normally, your programming language can do little about fragmentation other than requesting a better allocator, if available (Windows?). Makes me think, maybe Rust's model would allow it to plan for better allocation requests? (Not saying Rust does that, just came as an idea.)


The main advantage I see Rust having here is making it a little easier to (ab)use the stack for temporary stuff in a way that can be verified to be safe. Nothing you can't already do in C++ at the expense of the occasional heisenbug.

Games often create their own allocators for a variety of reasons (specialized allocation strategies for speed or fragmentation reasons, adding debug statistics, enforcing memory budgets for (sub)systems, etc.) - although that's not terribly OS or language specific.


Game devs only change programming languages when console and OS vendors force them to do so.

That is how they moved from Assembly into C, C into C++, adopted Objective-C, adopted C# in their tools, accepted external middleware like Unreal or Unity.

So until one of the big desktop or console owners releases an SDK where Rust is the language, they will hardly adopt it.

Sure a few indies might do it, but it will be a blip on the radar of the usual Game Developer Conference attendees.


Rust's type system is intended to provide memory safety guarantees, which they define as explicitly not including memory leaks. You're not really any better off than just using C++ shared_ptr/unique_ptr.


These stories are from games made in the late 90s to early 2000s. Rust didn't exist then.


Yes, obviously. Yet I'd assume space/memory problems persist today too - games are always pushing the PC hardware envelope.


This is giving me fond memories of things I had to do or had heard about from colleagues:-

- To improve game loading speed from CD - load the level on PC from hard disk, log all the filenames loaded to a txt file, and then use that to order the files when writing the final CD.

- Load all files into the PS1 devkit's memory, write all of memory out as a binary blob to the hard disk, and burn that memory image to CD for fast level loading (just load it with a single fread()).

- have separate executables for different levels which had different features, to save memory.

- Write a small block allocator to make <256-byte allocations quicker and more efficient (see the sketch after this list).

- Find a tiny piece of memory in the PS2 IOP chip which doesn't get wiped on a devkit reboot (for some reason) and use that as 'scratch' space to write log messages to track down a hard to repro crash that rebooted the kit.

- Change the colour of the TV's border to different colours to track down a race condition that only existed on burnt disks and where we had no debugger. The border-colour-setting code was quick enough not to affect the race condition, so we'd choose some places in the code to arbitrarily set certain colours, burn the disk, test it, see what colour the border was when it crashed, then put some more colours in likely areas, re-burn the disk, etc. (so basically binary-searching the code using border colours).

- Use compiler optimisations settings for 'size' instead of 'speed' as the smaller executable code size meant you stayed in the DCache more which actually made the code quicker than compiling for 'speed' which resulted in generally larger code.

- Burn a master CD image for publisher, get the game ID code wrong, open up the disk image file in a hex editor and manually edit it rather than go through the whole build process again.

- Have no build machine (the gold master got made off whatever code the lead's machine had).

- Use sourcesafe (no atomic checkins....)

- Use a few batch files and a directory share for 'source control' of art assets.

- Have values in config files we gave to game designers which did nothing (this was accidental, but they swore changing them made a difference to the game).

- Have an advertising deal with a company to put a special cheat code in the game to unlock some stuff; the code that does this ships with a bug, which means you have to enter the cheat code incorrectly to get it to work....so tell the company that 'Your code was too easy so we made it harder'.

- Have a developer write code like this, as he swore that passing an extra parameter would have slowed the game down (pseudo code, but the original was in C):

  void DoStuff(int val)
  {
      foo *bar;

      if (val < 10)
      {
          bar = gStuff[val];
      }
      else
      {
          bar = (foo *)val;
      }
  }
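On the small block allocator item above: the usual shape is a free list carved out of one fixed slab. A hypothetical sketch (single-threaded, alignment and error handling glossed over; none of this is from an actual shipped codebase):

  #include <stddef.h>

  /* Tiny fixed-size block allocator: one slab, one free list, O(1)
     alloc/free, no per-block heap overhead -- good for the flood of
     <256-byte allocations mentioned above. */
  #define BLOCK_SIZE  256
  #define BLOCK_COUNT 4096

  static unsigned char s_slab[BLOCK_SIZE * BLOCK_COUNT];
  static void *s_free_list = NULL;
  static int   s_initialized = 0;

  static void PoolInit(void)
  {
      int i;
      for (i = 0; i < BLOCK_COUNT; ++i) {
          void **block = (void **)(s_slab + (size_t)i * BLOCK_SIZE);
          *block = s_free_list;       /* thread each block onto the free list */
          s_free_list = block;
      }
      s_initialized = 1;
  }

  void *PoolAlloc(void)
  {
      void *block;
      if (!s_initialized) PoolInit();
      if (!s_free_list) return NULL;  /* slab exhausted: caller falls back to malloc */
      block = s_free_list;
      s_free_list = *(void **)block;  /* pop the head of the free list */
      return block;
  }

  void PoolFree(void *block)
  {
      *(void **)block = s_free_list;  /* push back onto the free list */
      s_free_list = block;
  }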



