> What mattered to me most was getting away from "1 process per container" mentality. Phusion realized right away that this was an unnecessary constraint
It's a necessary constraint, because otherwise developers continue to use a monolithic application development process. The goal should be isolated, interchangeable processes, not an entire virtual environment; anything else just delays the inevitable, and isolated processes make scaling a lot more straightforward as well.
If you want to have a single process that you care about, there is this thing called a process that fits the bill admirably. You can start them up very quickly, and kill them when you don't need them.
I think Docker is a great tool, but I also think the perceived benefits it brings get discussed a lot more often than the complexities it introduces.
Pushing your complexity to different parts of your stack does not reduce it - it just moves it. In many cases, that's perfectly fine and works better - but it's no panacea.
But since each container can have only one process, developers will cram everything into that one process, possibly spawning hundreds of threads with mutexes.
Multiple processes, on the other hand, encourage fault tolerance and modularization.
To put it another way, having just one process doesn't mean the system is simpler. I have seen single-process applications built from millions of lines of C++ code.
Whether you have a single process or not, the "parent" application will be responsible for maintaining and executing subprocesses/subroutines.
Applications still have the ability to spawn processes with single-process containers; you're just doing it through a different OS/process management system.
By keeping the multiple processes in the same container, you're likely decreasing modularity by introducing interdependencies via the file system.
The thing is, we already have good tools for process management in UNIX-like operating systems. We have cron, process supervisors, systems for log handling. If you need a separate container for every single process you run, you have to throw away all the standard UNIX tools and start from scratch. The alternatives for the Docker-style world mostly don't exist yet, and where they do exist, they are far from stable. So you will end up reinventing them and most likely doing it wrong.
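To make that concrete, a classic process supervisor covers restart-on-failure and log capture in a handful of lines. This is only a sketch of a supervisord stanza; the program name and paths are made up:

```
; /etc/supervisor/conf.d/myapp.conf -- hypothetical example, names and paths invented
[program:myapp]
; the one long-running process being supervised
command=/opt/myapp/bin/server
; restart it automatically if it dies
autorestart=true
; capture its output instead of wiring up log shipping by hand
stdout_logfile=/var/log/myapp/stdout.log
stderr_logfile=/var/log/myapp/stderr.log
```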
I agree with most of your statement, except the part about "good tools" and the assumption that it will be done wrong forever. Mistakes will be made, but the current *nix tools aren't really ready for the distributed paradigm either.
What I meant is that the current Docker-based solutions are often built without much experience. I'm certainly not against experimenting, but I think there are many benefits to running a containerized environment, and LXD takes a pragmatic approach that can be used right now with the tools everybody knows.
> you're likely decreasing modularity by introducing interdependencies via the file system.
Unless the dependencies are already there. Then needlessly separating them into containers just gives the system higher overhead and higher fragility.
Say, should the monitoring software be in a different container? But it looks at logs, so now it needs to mount a shared folder or coordinate sending logs over sockets. Or maybe it is a helper process. Like an indexer. Putting that in a separate container with its own isolated OS dependencies might not make sense, since it requires reading from the file structures of the main process.
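For example, with plain Docker the split ends up looking roughly like this (image and volume names are invented), with a shared named volume existing purely so the second container can read the first one's files:

```
# hypothetical sketch: two containers coordinating through a shared named volume
docker run -d --name app     -v app-logs:/var/log/myapp     myorg/myapp
docker run -d --name indexer -v app-logs:/var/log/myapp:ro  myorg/log-indexer
```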
> you're just doing it through a different OS/process management system.
There is already software that does this: init, systemd and others. You can specify whether a service should be restarted on failure, declare dependencies between services, and so on.
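As a rough illustration (the service name, paths and dependency are invented), a systemd unit expresses both of those in a few lines:

```
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=My application
# only start once the database it depends on is up
After=postgresql.service
Requires=postgresql.service

[Service]
ExecStart=/opt/myapp/bin/server
# restart on failure
Restart=on-failure

[Install]
WantedBy=multi-user.target
```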
> Unless the dependencies are already there. Then needlessly separating them into containers just gives the system higher overhead and higher fragility.
Advantages of the Docker approach are that the dependencies are explicitly exposed and you can homogenize your file systems. Ideally you're moving away from using files in the first place. Unless you're using local storage on AWS, you're already going over the network.
> There is already software that does this: init, systemd and others.
It's hard to disagree that the old systems are more mature. They aren't as ready for distributing processes, though, and that's where almost everything is headed anyway.
> Or maybe it is a helper process. Like an indexer.
This is the motivation for the concept of pods that appears in rkt and Kubernetes. Things like indexers, monitoring agents, service-discovery ambassadors, syncers for static-content file servers, etc. There are a number of use cases for multiple processes that live side by side and share a lifecycle. These containers don't need to be coupled, but it is nice if they can share a localhost interface and paths on a filesystem.