>The idea of using an isolated user and a shared filesystem makes no sense;
I do run my stuff in containers; I have about 30 of them set up, each with its own service.
However, I still have to log into these containers, which happens as root (for the container). And then having to pivot to the app user is bothersome. Luckily (or strangely) a lot of applications won't run as root, because nobody should be running any service as root.
>On the contrary, it forces good practice by ensuring that you always know how to install the set of dependencies that you need;
I have automated deployment for that. I simply add a Python script to my fabric repository and signal which container should install it; the rest is fully automated and tested.
User-local installs are complicated here, since they mean the deployment script has to pivot to another user temporarily.
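To illustrate the pivot problem, a minimal sketch of such a deploy task, assuming Fabric 2.x; the container name, package, and app user here are hypothetical placeholders, not the author's actual script:

```python
# Sketch of a deploy task, assuming Fabric 2.x. Container/host name,
# package name, and app user are all hypothetical placeholders.
from fabric import Connection, task

@task
def deploy(ctx, container="app1", package="appsomething", app_user="appsvc"):
    c = Connection(host=container, user="root")

    # System-wide install: trivial, because we are already root in the container.
    c.run(f"apt-get install -y {package}")

    # User-local install: every step has to pivot to the app user first.
    c.run(f"runuser -u {app_user} -- pip install --user {package}")
```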
>When you rely on system installs of packages, you create exactly the same problem when you come to run the program on multiple servers, only less fail-fast and harder to diagnose.
I also create a reproducible problem; user-local installs are less reproducible, in my experience.
I can replicate the exact environment of an app server installed via apt-get within a few keystrokes (I wrote a fabric task for that: "fabric copy-deployment <sourcecontainer> <targetcontainer>").
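The actual task isn't shown; a rough guess at how such a copy could work with Fabric 2.x, using dpkg/apt version pinning to replay the source container's exact package set:

```python
# Guess at a copy-deployment task: replay the source container's exact
# package set (with versions) on the target. Fabric 2.x assumed.
from fabric import Connection, task

@task
def copy_deployment(ctx, source, target):
    src = Connection(host=source, user="root")
    # One "name=version" entry per installed package, space-separated.
    pkgs = src.run("dpkg-query -W -f '${Package}=${Version} '", hide=True).stdout

    dst = Connection(host=target, user="root")
    # apt pins each package to the source's version; --allow-downgrades
    # covers the case where the target is ahead of the source.
    dst.run(f"apt-get update && apt-get install -y --allow-downgrades {pkgs}")
```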
>If you get in a position where that's your requirement, you've done something wrong. Take a step back and figure out what you really need.
You mean like running multiple instances of the same service? That happens. Maybe you want to deploy three separate instances of AppSomething for three domains with three separate datasets. Instead of deploying three containers, I deploy one container.
> And then having to pivot to the app user is bothersome. Luckily (or strangely) a lot of applications won't run as root, because nobody should be running any service as root.
Again, that's an accident of Unix history rather than sensible security design. In a single-purpose container, running as root is fine (https://xkcd.com/1200/) - indeed we could go further in the unikernel direction and just not have multiple users inside the container at all. In the meantime it's easy enough to paper over the extra user transition in whatever tool you're using to log in.
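For instance (hypothetical names, using plain ssh from an invoke/Fabric task), a login helper can do the root-to-app-user hop in one step:

```python
# A login helper that hides the extra user transition: ssh in as root,
# immediately switch to the app user. Names are hypothetical.
import subprocess
from fabric import task

@task
def login(ctx, container, app_user="appsvc"):
    # -t forces a tty so the interactive shell behaves normally;
    # sudo -iu gives a login shell as the app user in one hop.
    subprocess.run(["ssh", "-t", f"root@{container}", "sudo", "-iu", app_user])
```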
> I have automated deployment for that. I simply add a Python script to my fabric repository and signal which container should install it; the rest is fully automated and tested.
> I also create a reproducible problem; user-local installs are less reproducible, in my experience.
> I can replicate the exact environment of an app server installed via apt-get within a few keystrokes (I wrote a fabric task for that: "fabric copy-deployment <sourcecontainer> <targetcontainer>").
Interesting, I've found exactly the opposite (though I mostly work with Maven, which has always been good at reproducibility; maybe npm is less good).
> You mean like running multiple instances of the same service? That happens. Maybe you want to deploy three separate instances of AppSomething for three domains with three separate datasets. Instead of deploying three containers, I deploy one container.
Why? I guess you'll save a little bit of memory and disk space, but you've created a new intermediate level of isolation with weird characteristics that you'll need to keep in mind when debugging - those three instances of AppSomething are now able to interfere with each other a bit, but they're not quite as similar as you'd expect either. Do you really need such complex granularity in your isolation model?
>Again, that's an accident of Unix history rather than sensible security design. In a single-purpose container, running as root is fine
In an ideal world, a user would be a container with a shared filesystem. There are lots of uses for a shared filesystem (primarily backup tools).
A lot of container tools make it very hard to properly back up and restore containers.
>Interesting, I've found exactly the opposite (though I mostly work with Maven, which has always been good at reproducibility; maybe npm is less good).
Reproducible here means: I can pull up the same exact server environment, including all versions.
Build systems only do that for the language itself; system dependencies (OpenSSL, curl, etc.) might have different versions, and the build tools don't help much in pinning those if a version difference makes a bug appear or disappear.
>those three instances of AppSomething are now able to interfere with each other a bit, but they're not quite as similar as you'd expect either. Do you really need that much complex granularity in your isolation model?
Interference does not happen if you properly isolate the users, which is certainly possible on a modern system.
Systemd makes it extremely easy to essentially mount everything but the immediate app data as read-only. No interference.
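Something along these lines, for instance (the directives are real systemd sandboxing options; the unit name and paths are illustrative):

```ini
# appsomething@.service -- one template instance per deployment.
# ProtectSystem=strict makes the whole filesystem read-only for the
# service; ReadWritePaths punches a hole for the instance's own data.
# Unit name and paths are illustrative.
[Service]
User=appsomething-%i
ExecStart=/opt/appsomething/bin/server --data /var/lib/appsomething/%i
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
ReadWritePaths=/var/lib/appsomething/%i
NoNewPrivileges=yes
```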
> In an ideal world, a user would be a container with a shared filesystem. There are lots of uses for a shared filesystem (primarily backup tools).
I see the shared filesystem as much more of a liability than an asset. It's too easy to have hidden dependencies between seemingly unrelated processes that communicate via the filesystem; when a file that should be there isn't, there's no way to ask why.
> Build systems only do that for the language itself; system dependencies (OpenSSL, curl, etc.) might have different versions, and the build tools don't help much in pinning those if a version difference makes a bug appear or disappear.
Having a single dependency manager that can bring up consistent versions of all relevant dependencies is important, agreed, but I think the language dependency managers are closer to having the needed featureset than operating system package managers are. Operating systems are far too slow to update library dependencies and have far too little support for making local or per-user installs of a bunch of packages - of course the ideal system would support both, but I can live without global installs more easily than I can live without local installs. I'm lucky already in that the JVM culture is more isolated from the rest of the system - often the only "native" dependency is the JVM itself, and so using the same versions of all jars (and possibly the JVM) will almost always reproduce an issue. My inclination would be to move further in that direction, integrating support for containers or unikernels into the language build tools so that those tools can build executable images that are completely isolated from the host system.
> Interference does not happen if you properly isolate the users, which is certainly possible on a modern system.
> Systemd makes it extremely easy to essentially mount everything but the immediate app data as read-only. No interference.
Sure, it's possible, but again it's not the natural path; it's not the way the OS or a lot of the traditional Unix tools expect things to be. Things like CPU quotas for users feel very bolted-on.
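Case in point: capping a user's CPU today means a drop-in on an auto-generated slice unit rather than anything in the user database itself (CPUQuota= is a real systemd directive; the UID is illustrative):

```ini
# /etc/systemd/system/user-1000.slice.d/50-quota.conf
# Caps everything UID 1000 runs at half a core; note it hangs off a
# generated slice unit, not off the user itself.
[Slice]
CPUQuota=50%
```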
> And yes, I need such complex granularity.
Why? What does all that extra complexity gain you?