Sometimes you want to run something written in NodeJS as a service user, which naturally and usually does not have a home folder at all.
Additionally, if you then create the home folder, having to keep the user installation and the service installation in sync is a source of bugs, in both service and user, that can't be cross-replicated between the two.
A global installation is desirable when multiple users are required to operate on the exact same versions of a service.
> Sometimes you want to run something written in NodeJS as a service user, which naturally and usually does not have a home folder at all.
The idea of using an isolated user and a shared filesystem makes no sense; it's the security model you come up with due to random accidents of Unix history, not one that you'd design from the ground up. If you want your NodeJS service isolated, do it properly: use a jail or container, which work beautifully with npm (and are much more awkward to do with RPM).
> Additionally, if you then create the home folder, having to keep the user installation and the service installation in sync is a source of bugs, in both service and user, that can't be cross-replicated between the two.
On the contrary, it forces good practice by ensuring that you always know how to install the set of dependencies that you need; you use the same source of truth to install the dependencies for both. When you rely on system installs of packages, you create exactly the same problem when you come to run the program on multiple servers, only less fail-fast and harder to diagnose.
> A global installation is desirable when multiple users are required to operate on the exact same versions of a service.
If you get in a position where that's your requirement, you've done something wrong. Take a step back and figure out what you really need.
> The idea of using an isolated user and a shared filesystem makes no sense;
I do run my stuff in containers; I have about 30 of them set up, each one with its own service.
However, I still have to log into these containers, which happens as root (for the container). And then having to pivot to the app user is bothersome. Luckily, or strangely, a lot of applications don't work as root, because nobody should be running any service as root.
> On the contrary, it forces good practice by ensuring that you always know how to install the set of dependencies that you need;
I have automated deployment for that. I simply add a Python script to my Fabric repository and signal which container should install it; the rest is fully automated and tested.
User-local installs are complicated here, since they mean the deployment script has to pivot to another user temporarily.
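For illustration, here's a minimal sketch of what such a task might look like, assuming Fabric 2.x; the container name, the leftpad-srv user, and the paths are all made up, and it assumes the app tree with its lockfile is already in place:

```python
# Hypothetical deployment task (Fabric 2.x). The container name,
# service user, and paths are invented for illustration.
from fabric import Connection, task

@task
def deploy(c, container):
    conn = Connection(f"root@{container}")
    # The bothersome pivot: drop from root to the service user so the
    # install lands in that user's home instead of a global prefix.
    # (-H makes sudo set HOME, so ~ resolves to the service user's home.)
    conn.run("sudo -u leftpad-srv -H sh -c 'cd ~/leftpad-io && npm ci'")
    conn.run("systemctl restart leftpad-io")
```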
> When you rely on system installs of packages, you create exactly the same problem when you come to run the program on multiple servers, only less fail-fast and harder to diagnose.
I also create a reproducible problem; user-local installs are less reproducible, in my experience.
I can replicate the exact environment of an app server installed via apt-get within a few keystrokes (I wrote a Fabric task for that: `fabric copy-deployment <sourcecontainer> <targetcontainer>`).
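I don't know how copy-deployment works internally, but one plausible shape for it, assuming Fabric 2.x and apt-based containers, is to pin the target to the source's exact package versions:

```python
# Hypothetical sketch of copy-deployment: pin the target container to
# the exact apt package versions found on the source container.
from fabric import Connection, task

@task
def copy_deployment(c, sourcecontainer, targetcontainer):
    src = Connection(f"root@{sourcecontainer}")
    dst = Connection(f"root@{targetcontainer}")
    # List every installed package, pinned to its exact version.
    pinned = src.run(
        "dpkg-query -W -f='${Package}=${Version}\\n'", hide=True
    ).stdout.split()
    dst.run("apt-get update")
    dst.run("apt-get install -y --allow-downgrades " + " ".join(pinned))
```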
> If you get in a position where that's your requirement, you've done something wrong. Take a step back and figure out what you really need.
You mean like running multiple instances of the same service? That happens. Maybe you want to deploy three separate instances of AppSomething for three domains with three separate datasets. Instead of deploying three containers, I deploy one container.
> And then having to pivot to the app user is bothersome. Luckily, or strangely, a lot of applications don't work as root, because nobody should be running any service as root.
Again that's accidents of Unix history rather than sensible security design. In a single-purpose container running as root is fine (https://xkcd.com/1200/) - indeed we could go further in the unikernel direction and just not have multiple users inside the container. In the meantime it's easy enough to paper over the extra user transition in whatever tool you're using to log in.
> I have automated deployment for that. I simply add a Python script to my Fabric repository and signal which container should install it; the rest is fully automated and tested.
> I also create a reproducible problem; user-local installs are less reproducible, in my experience.
> I can replicate the exact environment of an app server installed via apt-get within a few keystrokes (I wrote a Fabric task for that: `fabric copy-deployment <sourcecontainer> <targetcontainer>`).
Interesting, I've found exactly the opposite (though I mostly work with maven which has always been good at reproducibility, maybe npm is less good).
> You mean like running multiple instances of the same service? That happens. Maybe you want to deploy three separate instances of AppSomething for three domains with three separate datasets. Instead of deploying three containers, I deploy one container.
Why? I guess you'll save a little bit of memory and disk space, but you've created a new intermediate level of isolation with weird characteristics that you'll need to keep in mind when debugging - those three instances of AppSomething are now able to interfere with each other a bit, but they're not quite as similar as you'd expect either. Do you really need that much complex granularity in your isolation model?
> Again that's accidents of Unix history rather than sensible security design. In a single-purpose container running as root is fine
In an ideal world, a user would be a container with a shared filesystem. There are lots of uses for a shared filesystem (primarily backup tools).
A lot of container tools make it very hard to properly back up and restore containers.
> Interesting, I've found exactly the opposite (though I mostly work with maven which has always been good at reproducibility, maybe npm is less good).
Reproducible here means: I can pull up the exact same server environment, including all versions.
Build systems only do that for the language's own dependencies; system dependencies (OpenSSL, curl, etc.) might have different versions, and the build system doesn't help much in pinning those down if such a difference causes a bug to appear or disappear.
> those three instances of AppSomething are now able to interfere with each other a bit, but they're not quite as similar as you'd expect either. Do you really need that much complex granularity in your isolation model?
Interference does not happen if you properly isolate the users, which is certainly possible on a modern system.
Systemd makes it extremely easy to essentially mount everything but the immediate app data as read-only. No interference.
And yes, I need such complex granularity.
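For illustration, a sketch of such a unit; the service name, paths, and ExecStart line are made up, but the sandboxing directives are standard systemd options:

```ini
# Hypothetical unit for one isolated instance of AppSomething.
[Service]
User=appsomething-1
ExecStart=/usr/bin/appsomething --instance 1
# Mount the whole file system read-only for this service,
# and hide other users' home directories...
ProtectSystem=strict
ProtectHome=true
# ...except for the app's own data directory, which stays writable.
ReadWritePaths=/var/lib/appsomething-1
# Private /tmp, invisible to the other instances.
PrivateTmp=true
NoNewPrivileges=true
```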
> In an ideal world, a user would be a container with a shared filesystem. There are lots of uses for a shared filesystem (primarily backup tools).
I see the shared filesystem as much more of a liability than an asset. It's too easy to have hidden dependencies between seemingly unrelated processes that communicate via the filesystem; when a file that should be there isn't, there's no way to ask why.
> Build systems only do that for the language's own dependencies; system dependencies (OpenSSL, curl, etc.) might have different versions, and the build system doesn't help much in pinning those down if such a difference causes a bug to appear or disappear.
Having a single dependency manager that can bring up consistent versions of all relevant dependencies is important, agreed, but I think the language dependency managers are closer to having the needed featureset than operating system package managers are. Operating systems are far too slow to update library dependencies and have far too little support for making local or per-user installs of a bunch of packages - of course the ideal system would support both, but I can live without global installs more easily than I can live without local installs.

I'm lucky already in that the JVM culture is more isolated from the rest of the system - often the only "native" dependency is the JVM itself, and so using the same versions of all jars (and possibly the JVM) will almost always reproduce an issue. My inclination would be to move further in that direction, integrating support for containers or unikernels into the language build tools so that those tools can build executable images that are completely isolated from the host system.
> Interference does not happen if you properly isolate the users, which is certainly possible on a modern system.
> Systemd makes it extremely easy to essentially mount everything but the immediate app data as read-only. No interference.
Sure, it's possible, but again it's not the natural path; it's not the way the OS or a lot of the traditional Unix tooling expects things to be. Things like CPU quotas for users feel very bolted-on.
> And yes, I need such complex granularity.
Why? What does all that extra complexity gain you?
For this purpose, I would have a service user called "leftpad-srv" under which my leftpad.io server runs.
When I log in, I am "root".
When I want to, say, change the leftpadding from spaces to tabs, I'd call `leftpad-io-ctl set-padding \t`, which would use a socket to communicate with the leftpad.io server.
For this purpose it would be very important that leftpad-io-ctl and the leftpad.io server are the same version; otherwise the -ctl might support setting a rightpad even though the server hasn't implemented it yet.
A global install is necessary for many deployments.
(This is hypothetical, but many apps have special ctl-tools to control or monitor the running application, and it can be useful, for example, to have moderators in your app who can access the console with limited permissions.)
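To make that concrete, here's a minimal sketch of such a ctl-tool in Python; the socket path and the line-based protocol are made up:

```python
# Hypothetical leftpad-io-ctl: sends one command to the running server
# over a unix domain socket. Path and wire format are invented.
import socket
import sys

SOCKET_PATH = "/run/leftpad-io/ctl.sock"

def send_command(args):
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCKET_PATH)
        s.sendall(" ".join(args).encode() + b"\n")
        return s.recv(4096).decode()

if __name__ == "__main__":
    # e.g. leftpad-io-ctl set-padding \t
    print(send_command(sys.argv[1:]))
```

The permissions on the socket then decide which users (moderators, say) get to talk to the server, and both ends have to speak the same protocol, which is the version-sync concern above.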
A package manager ensures that the install is correctly available to all users. With apps it's the same story: it ensures everyone has the app.
Installing via a manual `cp -r` into /usr is not a good idea, as the package manager has no idea you are doing this and won't help you out.
In the worst case, the package manager will trample all over the install.
Additionally, a plain `cp -r` will probably not set the correct permissions automatically, which means either users can edit the binary or they won't be able to execute it.
Lastly, it means any update will have to be installed manually for every single release.
Package managers do this automatically and with much less friction.
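As a concrete example of the help you'd give up with a manual copy, a couple of dpkg queries (package and path names hypothetical):

```sh
# Hypothetical names; the point is what a manual cp -r forfeits.
$ dpkg -S /usr/bin/leftpad-io-ctl   # ask which package owns a file
$ dpkg -V leftpad-io                # verify installed files against package metadata
```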