As a Guix person, I pretty much disagree with Joey's conclusion that some things—essentially QA and license checks—may “need to be jettisoned”. Those things, to me as a user, are one of the big pluses of Debian (and hopefully Guix).
Yet, it’s pretty obvious that today, traditional distros can no longer pretend to provide every single free software package that’s out there—let alone providing them with the level of QA and polishing that Debian, Fedora, Guix, and others provide.
In Guix we currently semi-automatically import package definitions from third-party repos like Hackage, PyPI, and the like, for which we do some additional QA: https://gnu.org/s/guix/manual/html_node/Invoking-guix-import... .
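For instance, here is roughly what that looks like in practice (a sketch; the package name below is just an example, and the generated definition still needs human review before it lands in the distro):

```
# Ask Guix to generate a Scheme package definition from PyPI metadata;
# the result is printed for a packager to review, refine, and commit.
guix import pypi itsdangerous
```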
One idea we discussed would be to offer as an option the ability to do “live imports”: users could install packages imported on the fly from these repos, while still taking advantage of the Guix user interface, transactional upgrades and rollback, and other goodies.
That could be a way to address user needs without throwing the baby out with the bathwater. There are technical challenges before we can get there, though…
I'm kind of surprised to see Guix described as having a high level of QA and polish, akin to Debian and Fedora. It's not that I have experience of it being otherwise, but I didn't think I could expect it to be any more polished than NixOS, given that, as I understand it, it's basically a remake that uses Scheme in place of the Nix language and runs on an (almost) pure GNU OS instead of Linux and friends like systemd. It's younger, it uses less-tested technology (linux-libre/Hurd and the Shepherd; Scheme excepted, and not that I dislike the Nix language), and I imagine it has a smaller following to test it in this niche that NixOS and Guix belong to.
I'm going to install GuixSD one of these days to try it out. Installing directly from third-party repos is something I don't remember seeing in NixOS when I tried it a year ago. I'm interested to see what other differences in design choices have been made.
The QA part has to do with processes, not technique (Scheme vs. Nix, etc.). I think we (Guix) are doing pretty well with our review process for packages and general attention to packaging issues—bundling, security issues, package size, integration, etc. These are practices similar to those of Debian, though I must say Debian remains the gold standard to me. :-)
As for GuixSD, you are right that the Shepherd, for instance, is not as mature as systemd (it’s also much simpler though). My comment was referring to packaging QA.
I do think that the ability to statically check a number of properties on a GuixSD config before it’s even instantiated (something that using a general purpose language buys us) helps a lot, though.
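As a sketch of that workflow (/etc/config.scm stands in for your own configuration file), you can build a configuration without activating it, so mistakes surface before anything on the running system changes:

```
# Build, but do not activate, the system described by the config;
# errors in the configuration are reported at this step.
guix system build /etc/config.scm
```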
On a fundamental level, I really dislike the "always as root" operation of apt, yum, and every other package manager.
In a security-focused world we should be trying to minimise the privileges needed to do anything, and in doing so we would gain a lot of portability.
I want to apt-get my desktop environment into my home directory, then be able to migrate from system to system by rsyncing that bundle, and I want user namespaces to enforce security within that space.
Nix does this. And it has a global package cache, plus many more things.
It's also distribution agnostic; it even works on Darwin (macOS). Under its own distribution, NixOS, there are some extra advantages, though.
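As a rough sketch of what that looks like for the unprivileged use case described above (the setup script needs root once to create /nix; everything after that runs as a normal user):

```
# One-time, single-user setup: creates /nix and ~/.nix-profile.
sh <(curl -L https://nixos.org/nix/install)

# Install into your own profile, no root involved:
nix-env -iA nixpkgs.hello
hello                 # resolved via ~/.nix-profile/bin

# Transactional: undo the last profile change.
nix-env --rollback
```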
As much as I like Debian, I think switching the whole distribution to a package manager like Nix would simplify and speed things up greatly. The whole package tree would not need to be in sync, and different releases would become just package channels. Besides, brittle dist-upgrades would disappear.
Actually whole Linux distros could become just Nix configuration expressions...
One advantage of using NixOS is that GUI apps work flawlessly. Elsewhere, some hacks are needed to make them work: some graphics libraries are global and are expected to be in a specific location, which is the default on NixOS.
Any package management tool can be used in any distribution, so by that measure any package management tool is distribution agnostic. Fedora had four system package managers at the same time: apt, yum, smart, and dnf.
Sure, with enough work, you can probably make Debian work with pacman, and Archlinux with apt-get. The problem with those package managers is that they install everything on /. If you try to use both package managers at once, they'll most likely conflict with one another.
Nix, on the other hand, will install each package under its own version- or build-specific directory under /nix. ~/.nix-profile will be a symlink pointing to your profile under /nix, and ~/.nix-profile/bin will contain the executables of whatever you installed with Nix.
This isn't just a question of putting files under something other than /. pacman (and maybe apt, too) has a way of installing packages under a different directory, but the files in those packages will typically refer to the libraries, executables, and other files they depend on as if they were installed under /. So, to use the installed packages, you'd have to chroot into the directory you selected to act as root so the references resolve correctly. Nix, on the other hand, edits the references each package has to its dependencies, so you can simply execute something in ~/.nix-profile/bin and each successive dependency will resolve to build-specific directories under /nix.
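A concrete illustration (user name, package, and store hash are placeholders; exact paths and versions will differ on any real system):

```
$ readlink ~/.nix-profile
/nix/var/nix/profiles/per-user/alice/profile
$ ldd ~/.nix-profile/bin/hello | grep libc.so
    libc.so.6 => /nix/store/<hash>-glibc-2.38/lib/libc.so.6
```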
You can use nix with pacman or apt-get or yum or any traditional package manager you'd like and they won't conflict. This is what is meant by being distribution agnostic.
So, if I recompile a whole distribution with different values for the system directory variables, say /rpm/etc instead of /etc, /rpm/lib instead of /lib, and so on, then rpm/yum/dnf/apt will finally be distribution agnostic in your eyes?
Nixpkgs is a whole collection of software (a distribution of sorts, but not a Linux distro), managed by Nix, which works identically on all Linux distros. It's also the package collection for NixOS.
Nix is cross-distro in the ordinary sense you describe, of course. It is also more deeply cross-distro in that:
• if you package for Nix in the normal way (by their nature, the tools make it easy to do things this way and hard to do them otherwise), _all_ package collections for Nix will be cross-distro in this way
• The Nix community has a collection of thousands of cross-distro packages you can already use right now
`pkgsrc` is similar in this respect, and takes a more conventional approach, like the one you describe in this comment. I think that's a meaningful sense in which it is cross-distro.
Another way to think of this is that tools like `rpm` and `dpkg` or `dnf` and `apt` are portable between distros _as far as distro developers are concerned_, whereas package managers/package collections like Nixpkgs and pkgsrc meet the portability needs of users of _existing distros_.
It seems that the time is ripe to reconsider what it means to be a distribution. I'm not equipped to take a stab at an answer, so instead let me offer a provocative wishlist. What if...
- One could compose arbitrary combinations of a kernel of choice and a userspace of choice?
- In "userspace" one could cobble together dependencies as needed for any project, and run sandboxed applications, which can only share information through common parent packages.
- No single package version for the "system". (This is already being done, in many examples)
- Maybe the kernel is just another package to be managed. Just something that starts when the computer boots, and lets you invoke the package manager to change state.
- Distributions would then just become snapshots of how different pieces are combined together. Sort of like how Docker images are created from Dockerfiles today.
It would be enlightening to understand which aspects are possible today or in the foreseeable future, what the impediments to doing this are, and the reasons why one might not actually want such a structure.
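For what it's worth, pieces of this exist today. Guix, for example, can turn one declarative system configuration into different artifacts (a sketch; config.scm is a placeholder for your own configuration):

```
# Build a virtual machine running the configured system:
guix system vm config.scm

# Build a Docker image from the very same configuration:
guix system docker-image config.scm
```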
> Distribution is a set of programs that are known to work together
This is only relevant for core utilities, though. It does not mean anything for my mail client to work "together" with my music production application. Apps in app stores do not work "together", yet they are wildly successful.
Those applications may not work together with each other, but they do work together with user space libraries such as glibc. Linux distributions aim to provide binary compatibility with those applications. It's tricky because these libraries simply don't take binary interfaces as seriously as the Linux kernel developers do. While you can upgrade the kernel and expect programs to keep working afterwards (and if they don't, it's a kernel bug), the same is not true for user space libraries.
The distributions themselves are also a source of binary incompatibility. For example, one distribution may use symbol versioning while another may not, and this means users cannot download a random binary meant for one distribution and expect it to run on another one even if the versions of dependencies are the same. Compiling from source ensures the resulting software is binary compatible but the source is not guaranteed to be available since programs may be proprietary.
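The symbol versioning in question is easy to see directly (illustrative; the library path and version tags vary across distros and glibc builds):

```
# List glibc's versioned dynamic symbol for printf:
objdump -T /lib/x86_64-linux-gnu/libc.so.6 | grep ' printf$'
# e.g.:  00000000000606f0 g  DF .text  ...  GLIBC_2.2.5  printf
```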
I use node, npm, and js all the livelong day, but the last thing I would want is NPM packages as part of my distribution. Maybe I'm being obtuse, but I use js because it's rough and ready. Seems to me that being part of my actual OS is not what it's for.
The problem here is that users generally have some small subset of things they really care about: a particular app, their own dev environment, etc.
For those things, they want control. But the allure of distributions is all of the other things you don't care about: when I just want to use some command line tool, I can just install the package and have it work, and also know I can uninstall it and, in many cases, receive security updates.
The downside that I think about when I hear "pinning package versions" is how you go about distributing a patch to a security hole that affected many versions of a library.
Allowing multiple versions of the same library to coexist has always been a security problem.
With multiple versions (and pinning) available, developers stop doing the required work to keep forward/backward compatibility.
As the number of versions piles up, the amount of work required to patch vulnerabilities and backport those patches becomes unbearable. And distributions care about security.
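For what it's worth, store-based package managers at least make the affected set mechanically enumerable (a sketch; the store path is a placeholder):

```
# Which installed store paths still link against this vulnerable build?
nix-store --query --referrers /nix/store/<hash>-openssl-1.0.2k
```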
* the producers of the package could produce a universal container that would run on all distributions.
* the distributions would only need to be sure they properly included a suitable container runtime. After all, Microsoft doesn't worry about packaging every third party software.
* the registry of software packages would need to include the software package
* this would work regardless of the technology a particular software package uses. (eg, whether it's written in C, Java, Python, .NET, etc) The package would pull in the dependencies.
* another thing that would be really nice, that I wanted since 1999, is that when you install a package it doesn't vomit files all over the filesystem.
* it might even be possible to install multiple versions of the same package (eg, Gimp 2.6, Gimp 2.8, etc)
Aren't Snap and Flatpak somewhat like this?
What I am describing may work great for end user software applications -- like Gimp, Inkscape, Audacity, Eclipse, etc. It might not work so well for languages, eg Python 2.7 vs Python 3.6.
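A minimal sketch of that model with Flatpak (the remote and app ID below are the standard Flathub ones; the same commands work on any distro with Flatpak installed):

```
# Add the Flathub remote once, then install and run GIMP:
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.gimp.GIMP
flatpak run org.gimp.GIMP
```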