WAL archiving is dead simple. You can also just use pg_basebackup. And with Postgres 17 it's easier than ever thanks to the incremental backup feature.
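For anyone curious, a minimal sketch of that setup; the archive and backup paths below are placeholders, adjust for your environment:

    # postgresql.conf -- continuous WAL archiving (archive path is a placeholder)
    archive_mode = on
    archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'
    summarize_wal = on    # needed for Postgres 17 incremental backups

    # one full base backup, then an incremental against its manifest (Postgres 17+)
    pg_basebackup -D /backups/full -P
    pg_basebackup -D /backups/incr --incremental=/backups/full/backup_manifest -P
    # at restore time, merge the chain back into a complete data directory
    pg_combinebackup /backups/full /backups/incr -o /backups/restored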
You don't need horizontal scalability when a single server can have 384 real CPU cores, 6 TB of RAM, petabytes of PCIe 5.0 SSD storage, and a 100 Gbps NIC.
For tuning Postgres parameters, you can start with pgtune.leopard.in.ua or pgconfig.org.
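For a sense of what those generators produce, here is the kind of starting point you'd get for a hypothetical 8-core, 32 GB web-app box; the numbers are purely illustrative, generate your own for your hardware:

    # postgresql.conf -- illustrative starting values (8 cores, 32 GB RAM, SSD, web workload)
    max_connections = 200
    shared_buffers = 8GB               # ~25% of RAM
    effective_cache_size = 24GB        # ~75% of RAM
    maintenance_work_mem = 2GB
    work_mem = 16MB                    # per sort/hash operation, per connection
    wal_buffers = 16MB
    random_page_cost = 1.1             # SSD storage
    effective_io_concurrency = 200
    max_worker_processes = 8
    max_parallel_workers = 8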
Upgrading across major versions has been dead simple since Postgres 10 or so. It's basically a single command.
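Roughly like this with pg_upgrade; the binary and data directory paths are placeholders for a Debian-style layout:

    # --check does a dry run; --link hard-links data files instead of copying them
    pg_upgrade \
      --old-bindir=/usr/lib/postgresql/16/bin \
      --new-bindir=/usr/lib/postgresql/17/bin \
      --old-datadir=/var/lib/postgresql/16/main \
      --new-datadir=/var/lib/postgresql/17/main \
      --link --check
    # rerun without --check to perform the actual upgrade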
You do not need PgBouncer if your database adapter library already provides connection pooling (most of them do).
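For example, node-postgres ships a pool out of the box; a minimal TypeScript sketch, where the connection details and the query are placeholders:

    // built-in connection pool from the "pg" driver -- no PgBouncer needed for a single app
    import { Pool } from "pg";

    const pool = new Pool({
      host: "127.0.0.1",
      database: "app",
      user: "app",
      password: process.env.PGPASSWORD,
      max: 20,                    // upper bound on pooled connections
      idleTimeoutMillis: 30_000,  // recycle idle connections after 30s
    });

    // pool.query() checks a connection out, runs the statement, and returns it automatically
    export async function getUser(id: number) {
      const { rows } = await pool.query("SELECT id, name FROM users WHERE id = $1", [id]);
      return rows[0];
    }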
For me, managed databases need that same amount of effort, due to poor documentation and awful user interfaces (AWS, GCP, and Azure are all the same in this respect), not to mention that they change all the time.
I also encourage people to just use managed databases. After all, people like that are easy to replace. Heck, nowadays you can fire all of them and cover the demand with genAI.
I have also been self-hosting my web app for 4+ years and have never had any trouble with the database.
pg_basebackup and WAL archiving work wonders. And since I always pull the backup copy of the database for local development, the backup is constantly being verified, too.
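In case it's useful, a sketch of how restoring the backup for local dev doubles as verification; all paths are placeholders:

    # copy the latest base backup into a scratch data directory
    cp -r /backups/full /var/lib/postgresql/17/dev
    # point recovery at the WAL archive and request archive recovery
    echo "restore_command = 'cp /mnt/wal_archive/%f %p'" >> /var/lib/postgresql/17/dev/postgresql.auto.conf
    touch /var/lib/postgresql/17/dev/recovery.signal
    pg_ctl -D /var/lib/postgresql/17/dev start
    # if the server comes up and the app works against it, the backup chain is good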
This is terrible advice. The author has clearly never done any serious web development.
> the build time is over 30 seconds!
That's silly. A 30-second build is nothing compared to the time you accumulate waiting on micro-changes to your frontend.
For typical web development with React/Vue/Svelte you have hot module reloading, which can update the running page in under a second after you hit [Save] in your editor.
With htmx, you have to wait for your whole server to reload (which can be much slower even with interpreted languages like Ruby or Python, due to the complexity of the framework you use).
Not to mention it does not preserve the current page state, which makes debugging far more troublesome than with a mature JS framework.
Only people who never have to improve their website incrementally think htmx is a viable option. Or insane people.
Obviously, for small websites with minimal interaction or content that rarely changes, htmx can be fine, but in that case vanilla JS also works, without the ~14 kB of excess baggage.
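To illustrate the vanilla JS point: the core htmx trick (fetch an HTML fragment from the server and swap it into the page) is a few lines on its own; the /fragments/cart endpoint and element IDs below are made up for the example:

    // TypeScript: roughly what hx-get + hx-target do, without the library
    async function swapFragment(url: string, targetSelector: string): Promise<void> {
      const res = await fetch(url);
      const html = await res.text();
      const target = document.querySelector(targetSelector);
      if (target) target.innerHTML = html;
    }

    // hypothetical usage: refresh the cart fragment on click
    document.querySelector("#refresh-cart")?.addEventListener("click", () => {
      void swapFragment("/fragments/cart", "#cart");
    });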
Only backend developers who think frontend is trivial and that we're all just idiots believe HTMX is the solution. They saw it working in their hello-world side project and think they've discovered gold.
I want to be completely transparent here. Mephisto currently utilizes a few different upstream providers (including Guerrilla Mail and 1secmail) via a custom proxy layer to ensure high availability. The goal of this project isn't to build a new mail protocol from scratch, but to provide a hardened, cookie-free, and zero-persistence 'privacy frontend' that bridges the gap between these APIs and the end-user.
Regarding the AI claim: I've used modern dev tools to speed up the React/TypeScript implementation, but the architecture (RAM-only storage, IndexedDB caching, and PWA focus) is a deliberate design choice I've made to solve specific privacy frustrations I had with existing services. I appreciate the call for better attribution, and I'll be updating the 'About' section to clearly credit all upstream providers.
You got it. This is AI slop at its worst. And even the replies from the author reek of AI. I read some of the source code, and it is slop, which I would never trust without scrutiny. Carefully read just the first words of benmxrt:
guilty as charged lol i tried way too hard to sound like a professional dev and ended up sounding like a bot... guess that backfired. im honestly just a solo dev who is totally overwhelmed by everyone watching my every move right now. lesson learned: less polish, more me. thanks for the reality check