For our production PGSQL databases, we use a combination of PGTune[0] to help size the memory-related settings and PGHero[1] to get a live view of the running DB. Furthermore, we use ZFS with its built-in compression to save disk space. Together, these three tools keep our DBs running very well.
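For reference, PGTune's output is a postgresql.conf fragment sized to the hardware you describe to it. A rough sketch of what it suggests for a hypothetical 8GB box, following its usual heuristics - these values are illustrative, so generate your own rather than copying them:

    # PGTune-style postgresql.conf fragment for ~8GB RAM (illustrative)
    max_connections = 100
    shared_buffers = 2GB             # ~25% of RAM
    effective_cache_size = 6GB       # ~75% of RAM
    maintenance_work_mem = 512MB
    checkpoint_completion_target = 0.9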
We were running very large storage volumes in Azure (2TB+) and wanted to leverage ZFS compression to save money. After running some performance testing, we landed on a combination of PGSQL and ZFS options that worked well for us.
It is - depending on the read-vs-write workload. For our workload, we landed on a ZFS recordsize of 128K, which gives us 3x-5x compression. Contrary to the 8KB/16KB suggestions you'll find on the internet, our testing indicated 128K was the best option. And using compression allows us to run much smaller storage volumes in Azure (thus saving money).
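You can check what compression is buying you on your own data per dataset (the dataset name below is a placeholder):

    # Compression ratio ZFS is achieving on the PGDATA dataset
    zfs get compressratio,compression tank/pgdata

    # Physical space consumed vs. logical (uncompressed) size
    zfs get used,logicalused tank/pgdata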
We did an exhaustive test of our use-cases; the core of the ZFS tuning that worked best with Postgres (again, for our workload) was the 128K recordsize with compression enabled, roughly as sketched below.
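A minimal sketch of applying that on a dedicated dataset for the Postgres data directory - the pool/dataset name is a placeholder, and lz4 plus atime=off are reasonable starting assumptions rather than settings we benchmarked here:

    # Dedicated dataset for PGDATA (tank/pgdata is a placeholder)
    zfs create -o recordsize=128K \
               -o compression=lz4 \
               -o atime=off \
               tank/pgdata

    # recordsize only affects newly written blocks, so set it before
    # loading data (or rewrite existing files afterwards)
    zfs get recordsize,compression,atime tank/pgdata

One related knob worth knowing about: PostgreSQL's full_page_writes can sometimes be disabled on ZFS because copy-on-write prevents torn pages, but test that carefully against your own durability requirements before relying on it.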
[0] https://pgtune.leopard.in.ua
[1] https://github.com/ankane/pghero