
There was also an introductory blog post submitted 4 days ago (245 points, 108 comments): https://www.xtxmarkets.com/tech/2025-ternfs/ https://news.ycombinator.com/item?id=45290245

Some notable constraints: files are immutable (write once, never updated). Designed for files at least 2MB in size. Slow at directory creation/deletion. No permissions/access control.



(disclaimer: CTO of XTX)

These limits aren't quite as strict as they first seem.

Our median file size is 2MB, which means 50% of our files are <2MB. Realistically if you've got an exabyte of data with an average file size of a few kilobytes then this is the wrong tool for the job (you need something more like a database), but otherwise it should be just fine. We actually have a nice little optimisation where very small files are stored inline in the metadata.

It works out of the box with "normal" tools like rsync, python, etc. despite the immutability. The reality is that most things don't actually modify files; even text editors tend to save a new version and rename over the top. We had to update relatively little of our massive code base when switching over to this. For us that was a big win: moving to an S3-like interface would have required updating a lot of code.
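A minimal sketch of that "save a new version and rename over the top" pattern (plain Python, nothing TernFS-specific; the file name is made up): the new contents go to a temporary file in the same directory, which is then renamed over the old one, so the destination is never modified in place.

    import os, tempfile

    def atomic_write(path, data):
        # Write a brand new file, then rename it over the old one.
        # The destination is never updated in place, which is all an
        # immutable-file filesystem needs.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp, path)  # atomic rename over the top

    atomic_write("results.bin", b"new contents")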

Directory creation/deletion is "slow", currently limited to about 10,000 operations per second. We don't currently need to create more than 10,000 directories per second, so we just haven't prioritised improving that. There is an issue open, #28, which would get this up to 100,000 per second. This is the sort of thing that, like access control, I would love to have had in an initial open source release, but we prioritised open sourcing what we have over getting it perfect.


The reality is that most things don't actually modify files, even text editors tend to save a new version and rename over the top.

it is essentially copy-on-write exposed to the user level. the only issue is that this breaks hard links, so tools that rely on that are going to break. but yes, custom code should be easy to adapt.
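To make the hard-link caveat concrete, here's a small illustration (ordinary POSIX semantics, not TernFS; file names are hypothetical): after a rename-over-the-top update, the hard link still points at the old contents.

    import os, tempfile

    with open("a", "w") as f:
        f.write("v1")
    os.link("a", "b")            # "b" is a hard link to the same inode as "a"

    # "Update" a the copy-on-write way: write a new file, rename it into place.
    fd, tmp = tempfile.mkstemp(dir=".")
    with os.fdopen(fd, "w") as f:
        f.write("v2")
    os.replace(tmp, "a")

    print(open("a").read())      # v2
    print(open("b").read())      # still v1 -- the hard link was silently broken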


Yes, hard links aren't supported in TernFS. They would actually be really difficult to make work in this kind of sharded metadata design, as they would need to be reference counted and all the operations would need to go via the CDC. It wouldn't really have matched the design philosophy of simple and predictable performance.


well, that's at least consistent. if hard-links aren't even supported, you can't break hard-links by replacing a file with a new one through renaming either.


thanks for the open-sourcing!


So, it competes more with S3/minio than NFS, it seems?



