Hacker News

Small objects are very inefficient in S3. Aggregating them into bigger log objects is critical for going from a toy logging system to a real environment.
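A minimal sketch of what that aggregation looks like: concatenate many small records into one blob and keep a per-record (offset, length) index, so each record stays individually addressable via an S3 ranged GET. `pack_records` and the key names are hypothetical, not any library's API:

```python
def pack_records(records: list[bytes]) -> tuple[bytes, list[dict]]:
    """Concatenate many small records into one blob and return the blob
    plus a per-record index of (offset, length). A single record can later
    be fetched with a ranged GET: Range: bytes=offset-(offset+length-1)."""
    blob = bytearray()
    index = []
    for rec in records:
        index.append({"offset": len(blob), "length": len(rec)})
        blob.extend(rec)
    return bytes(blob), index

# One PUT for the whole batch instead of one PUT per record:
records = [b'{"level":"info","msg":"started"}', b'{"level":"warn","msg":"slow"}']
blob, index = pack_records(records)
# s3.put_object(Bucket=..., Key="logs/batch-00001", Body=blob)  # hypothetical client call
# (store `index` alongside, e.g. as "logs/batch-00001.idx")
```

This trades one large PUT (plus an index) for thousands of small ones, which is where the API-call savings come from.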


The company I work for open-sourced a straightforward library that does exactly that: https://github.com/embrace-io/s3-batch-object-store


(author here)

definitely!

I plan to add a batch write API, as well as one that buffers writes until they reach a certain size or a timeout elapses, then flushes to S3.

tracking the batch write here: https://github.com/avinassh/s3-log/issues/3
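The buffer-until-size-or-timeout idea can be sketched like this (a toy illustration with hypothetical names, not the s3-log API):

```python
import time

class BufferedWriter:
    """Buffer appends and flush them as one object when the buffer
    exceeds max_bytes or when max_age_secs has elapsed since the
    first buffered write."""

    def __init__(self, flush_fn, max_bytes=4 * 1024 * 1024, max_age_secs=5.0):
        self.flush_fn = flush_fn  # e.g. lambda data: s3.put_object(..., Body=data)
        self.max_bytes = max_bytes
        self.max_age_secs = max_age_secs
        self.buf = bytearray()
        self.first_write_at = None

    def append(self, data: bytes) -> None:
        if self.first_write_at is None:
            self.first_write_at = time.monotonic()
        self.buf.extend(data)
        if len(self.buf) >= self.max_bytes:
            self.flush()

    def maybe_flush_on_timeout(self) -> None:
        # Call periodically from a background thread or ticker.
        if self.buf and time.monotonic() - self.first_write_at >= self.max_age_secs:
            self.flush()

    def flush(self) -> None:
        if self.buf:
            self.flush_fn(bytes(self.buf))
            self.buf.clear()
            self.first_write_at = None
```

The timeout bound matters for tail latency: without it, a slow trickle of writes could sit in the buffer indefinitely.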


This is why systems such as WarpStream regularly run compaction jobs to store objects more efficiently and cut down on API calls.
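In outline, a compaction job merges many small objects under a prefix into one larger object and deletes the originals. A toy sketch using a dict as a stand-in for a bucket (a real job would use ListObjectsV2, GetObject, PutObject, and DeleteObjects; names here are illustrative):

```python
def compact(bucket: dict, prefix: str, dest_key: str) -> None:
    """Merge every object under `prefix` into one object at `dest_key`,
    then delete the small originals. `bucket` is an in-memory stand-in
    for an S3 bucket, mapping key -> bytes."""
    keys = sorted(k for k in bucket if k.startswith(prefix) and k != dest_key)
    merged = b"".join(bucket[k] for k in keys)
    bucket[dest_key] = merged
    for k in keys:
        del bucket[k]
```

Fewer, larger objects mean fewer GETs on the read path and fewer keys to list, which is the cost WarpStream-style compaction is amortizing.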



