
I talked to them (the people I knew) some, but the problems were mostly characteristics of the app.

So the various incidents looked like this:

* Perf problem w/ the code (e.g. didn't handle this kind of spike)

* Perf problem with the service (e.g. had a $200 DB instance instead of a $400 one)

* Couldn't max CPU due to lack of memory

* Couldn't max CPU due to IO issues (DB)

* Couldn't maintain a reasonable queue (had to use RedisToGo, which is far from cheap)

The biggest one I couldn't get around was that my queue workers required too much memory to operate (likely because they were dealing with larger JSON loads). Too much was around 600MB (or something along those lines) total on the Dyno (not just from the process). I routinely saw "using 200% of memory" and the like in the Heroku logs, and that's when things would start going downhill.
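The comment doesn't say what the workers were written in, but the usual fix for this failure mode is to stop materializing the whole payload at once. A minimal Python sketch (the newline-delimited payload format and the per-record "amount" field are hypothetical) of processing records one at a time so peak memory tracks a single record, not the full load:

```python
import json

def process_payload_streaming(lines):
    """Handle one record at a time so peak memory stays near the size
    of a single record instead of the whole JSON payload."""
    total = 0
    for line in lines:
        record = json.loads(line)          # only one record decoded at a time
        total += record.get("amount", 0)   # hypothetical per-record work
    return total

# Hypothetical newline-delimited JSON payload
payload = '{"amount": 1}\n{"amount": 2}\n{"amount": 3}\n'
result = process_payload_streaming(payload.splitlines())
```

The same idea applies whatever the language: iterate over records from the wire or disk rather than calling the equivalent of `json.loads` on a multi-hundred-megabyte string.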

Things could have been a lot better if I'd had more insight into capacity/usage on the Dynos (without something like New Relic, which doesn't surface it well enough).
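Absent a platform dashboard, a worker can at least report its own memory high-water mark. A sketch using Python's stdlib `resource` module (Unix-only; the `tag` label is just illustrative):

```python
import resource

def log_worker_memory(tag):
    """Log this process's peak resident set size.
    Note: ru_maxrss is in kilobytes on Linux (e.g. a Dyno) but bytes on macOS."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"[{tag}] peak RSS: {rss}")
    return rss

peak = log_worker_memory("after-job")
```

Emitting a line like this after each job at least lets you correlate "using 200% of memory" warnings with specific workloads in the logs.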

A great analogy for me is this:

If SQL isn't scaling, there are several options:

1. Stop using it (switch to another DB)

2. Shard it

3. Buy better hardware

Guess which one we always go to first? :)
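For what it's worth, option 2 is less scary than it sounds at its core. A minimal sketch of deterministic hash-based shard routing (the shard count and key format are made up for illustration):

```python
import hashlib

NUM_SHARDS = 4  # hypothetical shard count

def shard_for(key: str) -> int:
    """Map a key to a shard deterministically. md5 keeps the mapping
    stable across processes and restarts, unlike Python's salted hash()."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

shard = shard_for("user:42")  # always routes to the same shard
```

The hard parts of sharding are resharding and cross-shard queries, not the routing function, which is why option 3 usually wins first.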



Thanks, this makes more sense to me.

Looks like I misread your purpose.



