Using /usr/bin instead of /bin comes down to it being much easier to mount a single /usr than to set up a bunch of bind mounts for /bin, /sbin, /lib, /lib32, and /lib64.

Also, some more background info: https://systemd.io/THE_CASE_FOR_THE_USR_MERGE/


Proxmox has had cloud-init support for a while and we have been using it for several years in production.


Doesn't work for LXC as far as I can tell


I used TimescaleDB at my last workplace. We needed an easy way to store and visualize 500 Hz sensor data from a few tens of devices. We used it together with Grafana to build an internal R&D tool, and it worked far better than I expected. Before I left, I think the DB was using ~200 GB on a compressed btrfs volume in a DigitalOcean droplet and still performed fine for interactive Grafana usage.


I own a 2024 Toyota Corolla Hybrid (similar drivetrain to the Prius) and it defrosts the windows way faster than my previous Volkswagen Passat did. Ice and snow handling here in Northern Europe has been pretty similar to my previous car; studded winter tires are the key. Visibility is also quite comparable.


We use it to send OTP messages through a few telco providers who don't have an HTTP API.



Unfortunately, they didn't exist when we started with Kannel.


It's cool to see that they are still going. Anyone looking to use this for SMPP connections should skip the releases and build straight from SVN trunk to get the latest bugfixes.


I'm using lieer and mujmap to sync my Gmail and Fastmail accounts to local notmuch mail storage on top of a ZFS pool. That ZFS pool is in turn replicated off-site.

I use neomutt to access my archive over SSH, and notmuch is very fast at searching all of my emails.
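
For example, a typical query looks like this (the address and date range here are made up):

  notmuch search from:alice@example.com and date:2024-01-01..2024-06-30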

* https://github.com/gauteh/lieer

* https://github.com/elizagamedev/mujmap

* https://notmuchmail.org/


You can also `git fetch` and then `git reset --hard origin/force-pushed-branch` to get your local branch up to speed with the remote one, assuming you don't have any local changes.
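
For example, assuming the remote is called `origin`:

  git fetch origin
  git reset --hard origin/force-pushed-branch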


And if you do have local changes, you can create a new (temp) branch from the diverged one, reset the diverged branch to origin, and cherry-pick your own commits back on top.
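
A sketch of that flow, with `force-pushed-branch` and the commit placeholder standing in for your real names:

  git switch -c temp-backup                 # snapshot your local commits
  git switch force-pushed-branch
  git fetch origin
  git reset --hard origin/force-pushed-branch
  git cherry-pick <first-local-commit>^..temp-backup
  git branch -D temp-backup                 # clean up once everything looks right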


It's a bit hidden, but once you enable `git rerere`, it will remember how you resolved rebase conflicts and reuse those resolutions the next time the same conflicts come up.

https://git-scm.com/book/en/v2/Git-Tools-Rerere
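
Enabling it is a one-liner (drop `--global` to turn it on for a single repo only):

  git config --global rerere.enabled true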


Wow this is super neat. I wonder why it isn’t enabled by default.


It's mostly about performance. If you can store all the required info about the user inside the cookie then you can avoid a DB query roundtrip before sending a response.

Say your cookie looks like this (probably also base64 encoded):

  {"id": 42, "display_name": "John", "is_admin": false, "session_end_at":1726819411}
You don't have to hit the DB to display "Hi John" to the user and hide the juicy "Admin" panel. Without the HMAC, an attacker could flip the "is_admin" boolean in the cookie.

You could also create a cookie that is just random bytes:

  F2x8V0hExbWNMhYMCUqtMrdpSNQb9dwiSiUBId6T3jg
and then store it in a DB table alongside similar info, but now you have to query that table on every request. For small sites it doesn't matter much, and if it becomes a problem you can quite easily move that info into a faster key-value store like Redis. And when Redis also becomes too slow, you are forced to move to JSON Web Tokens (JWT), which are just a more standardized base64-encoded JSON wrapped with an HMAC to avoid querying a database on each request.

But even if you are using random bytes as your session identifier, you should still wrap them in an HMAC so that you can drop invalid sessions early, if only to make it harder for someone to DDoS your DB.
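
A minimal sketch of that HMAC wrapping in Python (the secret, payload, and function names are illustrative, not any particular framework's API):

  import base64, hmac, hashlib, json

  SECRET = b"server-side-secret-key"  # never sent to the client

  def sign_cookie(payload: dict) -> str:
      data = base64.urlsafe_b64encode(json.dumps(payload).encode())
      mac = hmac.new(SECRET, data, hashlib.sha256).hexdigest()
      return f"{data.decode()}.{mac}"

  def verify_cookie(cookie: str) -> dict | None:
      data, _, mac = cookie.rpartition(".")
      expected = hmac.new(SECRET, data.encode(), hashlib.sha256).hexdigest()
      if not hmac.compare_digest(mac, expected):
          return None  # tampered or garbage: reject before touching the DB
      return json.loads(base64.urlsafe_b64decode(data))

With this, flipping "is_admin" in the payload invalidates the MAC, and a made-up session token fails verification without a single DB query.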


Back in the Justin.tv days, we used this for some messages that were passed by the client between two of our systems: the main authentication was done by the web stack, which gave the client an HMAC-signed viewing authorization. That was then passed on to the video servers, which knew how to check the authorization but weren't hooked up to the main DB.

Setting things up this way meant that we didn't need to muck about with the video server code whenever we made policy changes, and it isolated the video system from web stack failures: if the web servers or DB went down, no new viewers could start a stream, but anyone already watching could continue uninterrupted.


Thanks for the clear explanation, I suspected as much. I wasn't sure, however, if that was all there is to it.

