
Yeah, but as a programmer that's not really my domain, and anyway throwing more servers at the problem fixes it, and doesn't require much change in the architecture of most apps.

The maximum speed of the internet, the maximum speed of your database, the maximum speed of your programming language are all basically constants that you can't improve by throwing money at the problem.



That is just plain appalling.

As a programmer, available memory is absolutely your domain, or else you're needlessly increasing hardware/VM costs for yourself or your employer. Worse, you end up passing the cost of the extra hardware needed to host your app/site/whatever on to users.

I can see that bit pinching isn't as fashionable today as it was long ago when memory wasn't cheap (it's not really "cheap" today either, since you're paying proportionally more per VM than the hardware actually costs), but when a basic tenet like efficiency is ignored because "the DB is the bottleneck anyway", all you're doing is compounding the problem.

BTW, most people already deal with the DB issue via aggressive caching and/or a reverse proxy, so that still leaves the core application.
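
(For concreteness, here's a minimal sketch of what "aggressive caching" can mean in practice, assuming a Python app. fetch_user_from_db is a hypothetical stand-in for a real query; a production setup would more likely use memcached, Redis, or a caching reverse proxy such as Varnish.)

    import time

    # Minimal read-through cache with a TTL, standing in for memcached/Varnish.
    _cache = {}
    TTL_SECONDS = 60

    def fetch_user_from_db(user_id):
        # Imagine an expensive SQL query here.
        return {"id": user_id, "name": "user-%d" % user_id}

    def get_user(user_id):
        entry = _cache.get(user_id)
        if entry and time.time() - entry[0] < TTL_SECONDS:
            return entry[1]                      # served from RAM, no DB hit
        value = fetch_user_from_db(user_id)      # cache miss: hit the DB once
        _cache[user_id] = (time.time(), value)
        return value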


At an hourly rate of $120, thinking for 10 minutes about saving RAM costs my client $20. With that money he could've bought 1GB of extra RAM. I'm sure that 99.999% of the time the RAM saved would be less than 1GB. It's a simple cost/benefit equation. If you can save more than 1GB by thinking about it for 10 minutes, you're writing shitty code to begin with.

There are times when the benefits are greater, for example when your software is running a million instances, or you're working on a hardware-intensive game, but those are certainly not the common case.


The cost of 1GB of extra RAM is $20 in a certain range, namely where your system fits fairly comfortably on a single machine.

Once you get past a certain level, though, the cost of the next 1GB isn't $20, it's $20 plus the cost of another computer plus the cost of exploiting multiple machines rather than just running on a single one.

Then it's $20/GB for a while again, then $20 plus the cost of adding another machine, and at a certain point you need to add the cost of dealing with the fact that performance no longer scales linearly with the amount of hardware.

That last bit might be a concern only in fairly rare cases. But the first, where you make the transition from needing one machine to get the job done to needing more than one, isn't so rare. And that can be a big, big cost.

(Very similar considerations apply to CPU time, of course. Typically more so.)
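
(A back-of-the-envelope sketch of that step function, in Python; every figure below is an illustrative assumption, not a real price.)

    # Toy model of the marginal cost of the "next GB" described above.
    RAM_PER_GB = 20          # dollars per GB
    RAM_PER_MACHINE = 64     # GB a single box can hold
    NEW_MACHINE = 3000       # dollars for another server
    DISTRIBUTION_TAX = 5000  # one-off cost of making the app run on >1 machine

    def marginal_cost(current_gb):
        cost = RAM_PER_GB
        if current_gb and current_gb % RAM_PER_MACHINE == 0:
            cost += NEW_MACHINE                  # the next GB needs a new box
            if current_gb == RAM_PER_MACHINE:
                cost += DISTRIBUTION_TAX         # first crossing from 1 to 2 machines
        return cost

    print(marginal_cost(10))    # 20: still fits on the current machine
    print(marginal_cost(64))    # 8020: new box plus going distributed
    print(marginal_cost(128))   # 3020: just another box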


We can't afford hourly rates of $120, and RAM most definitely doesn't just cost $20 once you factor in the inevitable new VM/machine. RAM is critical for maintaining concurrent connections, and there are times you need to keep large datasets in memory to avoid hitting the database.

This casual disregard for resources would explain why a lot of startups run into infrastructure issues so quickly and settle on terrible solutions (or outright snake oil) to mitigate problems that shouldn't exist in the first place.

People need to start realizing that The Cloud is only a metaphor. Hardware isn't intangible, and neither is its cost.

Servers don't grow on trees; they live on a rack and grow on electricity and NAND.


Yes, $20 for the memory, plus $200 in time spent getting approval, another $200 to physically install it (because you "obviously" can't just open up that server; there are procedures for that kind of stuff), and then $2000 in time wasted by the users while they spend six months waiting for that one RAM stick to get installed.

It's easy to think everyone has their act together like Facebook or Google, but most companies I've dealt with have hardware upgrade processes measured in months or years, not hours or days. You absolutely have to take responsibility for your work as a programmer and make stuff run fast instead of labeling it somebody else's problem.


> It's easy to think everyone has their act together like Facebook or Google

Well... For those companies, obviously, dynamic languages are not an option. ;-)


Most garbage collected languages can't handle all the memory available on modern systems. Hardware costs pale in comparison to revenue and the cost of programmers. (Unless your service is free)

In general if your server is making you money, you'll make more money improving the service than reducing the amount of RAM the service takes to run.

In summary: test the conversion rate of a page/feature/etc., not how much RAM it uses. If you're going to optimize for performance, attach a profiler and take the easy wins; don't spend too much time on it unless it's eating a ridiculous amount of resources or your margins are razor thin.
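
(For what it's worth, "attach a profiler and take the easy wins" can be as simple as this Python sketch using the standard-library tracemalloc; handle_request is a made-up stand-in for whatever you actually run.)

    import tracemalloc

    tracemalloc.start()

    # Stand-in for the request handler / job you actually care about.
    def handle_request():
        return [str(i) * 100 for i in range(100_000)]

    data = handle_request()

    # Show the ten lines of code allocating the most memory: the "easy wins".
    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.statistics("lineno")[:10]:
        print(stat)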


"Hardware costs pale in comparison to revenue and the cost of programmers."

That's not a law of physics. The number of servers you need depends on the number of users you have. The functionality you have to build usually doesn't. So the more users you have, the less true your statement becomes.


"Most garbage collected languages can't handle all the memory available on modern systems."

I don't think that has anything to do with the languages. I think that has everything to do with the quality of the memory manager implementation, and there is at least one memory manager that does deliver in this respect, namely C4.


Most apps just don't have computation patterns where RAM usage could even be a problem; most apps are IO-bound in some way. The companies I've worked for have deployed new servers because of high load averages (in the unix-load sense), not because of RAM shortages.


That's only true because most companies admit defeat before trying: they hit the disk when serving. The big Internet companies (Google, Facebook, LiveJournal, hell, even Hacker News and Plenty of Fish) all serve out of RAM: they keep everything a user is likely to hit in main memory so that a request never needs to perform I/O other than network. In this situation you're absolutely RAM-constrained.

I remember trying to optimize some financial software a couple of jobs ago and hitting a brick wall: that's simply the speed the disk rotates at. We ended up buying an iRAM (battery-backed RAM disk) and sticking the DB on it. You can get this a lot cheaper by avoiding the DB and using a RAM-based architecture, if you're willing to sacrifice fault tolerance under power outages (or if you have some other architectural solution for fault tolerance, like writing to multiple computers).
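
(A minimal sketch of that "serve out of RAM" approach, in Python: preload the working set at startup and fall back to the DB only on a miss. load_all_rows and load_row are hypothetical stand-ins for a real DB layer, and fault tolerance, replication, write-ahead logging and the like are deliberately out of scope.)

    # Preload the hot working set into plain dicts so steady-state requests
    # never touch the disk.
    working_set = {}

    def load_all_rows():
        # Imagine a full table scan here, run once at startup.
        return {i: {"id": i, "payload": "..."} for i in range(1000)}

    def load_row(key):
        # Imagine a single-row DB query here.
        return {"id": key, "payload": "..."}

    def warm_up():
        working_set.update(load_all_rows())

    def get(key):
        row = working_set.get(key)
        if row is None:                 # rare miss: only then do we pay for I/O
            row = load_row(key)
            working_set[key] = row
        return row

    warm_up()
    print(get(42)["id"])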


It's not that they admit defeat, it's that they admit success. Yes, there are ten companies that are pushing hardware so hard that every bit counts again because if it didn't they'd be in danger of exhausting the earth's supply of elemental silicon. But everyone else can make a good living without going there.


This is terrible. It's your job to know how much memory you're using and not to use too much if you can help it. In many environments the available memory is very limited - desktop, mobile, client-side web apps, embedded. Even in an environment where RAM is cheap and you can scale it as you want, using too much RAM can cause paging issues, cache-miss issues, increased server startup time, etc. All of these are real-world issues that you, as a developer, are responsible for.



