Hacker News

Use case: I want to store some JSON values against some string keys. Which would be the wrong choice for this?


How often are you updating the json values? 10/s? 100/s? 1M/s?

How many string keys? Millions? Billions? What happens if you lose an update?

Redis is great for lots of rapid reads and a moderate write rate, for data that fits on a single server or can be manually sharded well (if it's straight k=>v, as you describe, that basically means your JSON fits in RAM; if you're using larger objects like sets/zsets/etc., it becomes a slightly different discussion), as long as your application can tolerate losing a few seconds of writes (BGSAVE isn't instantaneous, of course).
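The straight k=>v pattern above is just "serialize the JSON to a string, store it under the key." A minimal sketch, using a plain dict as a stand-in for a Redis client (in real use, `redis-py`'s `r.set(key, value)` / `r.get(key)` take the same string arguments; the helper names here are hypothetical):

```python
import json

# Stand-in for a Redis connection: in production this would be
# r = redis.Redis(), with r.set(key, s) / r.get(key).
store = {}

def put_json(key: str, obj) -> None:
    # Serialize to a compact JSON string before storing.
    store[key] = json.dumps(obj, separators=(",", ":"))

def get_json(key: str):
    # Deserialize on read; missing keys return None, like GET.
    raw = store.get(key)
    return None if raw is None else json.loads(raw)

put_json("user:42", {"name": "Ada", "tags": ["admin"]})
print(get_json("user:42"))
```

The serialize/deserialize round trip is where the RAM point bites: every value is a full JSON string in memory, so total dataset size, not just key count, is what has to fit.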


If the data is critical and you must keep it, both. If not, either.



