> Nix, on the other hand, does have a concept of state. If you make a one-line change to a 200-line Nix configuration, it doesn’t have to re-do all the work from the other 199 lines. It can evaluate the state of the system against the configuration file and recognize that it just has to apply the one-line change. And that change usually happens in a few seconds.
The author seems to have some misguided ideas about Nix. Nix is not fast because it is stateful. It is fast because it is functional and reproducible, which allows for caching without compromising correctness. I don't want to split hairs, but referentially transparent caching like this is not quite what I'd call state.
Yes, there is some statefulness in system activation, but this is not what makes Nix Nix -- quite the opposite.
Explaining what Nix is seems like one of its biggest barriers to adoption.
Part of the difficulty is it means different things to different people. My colleague spent a whole lot of time trying to answer this question and ended up with this:
My main takeaway after spending some time learning about Nix is that it embraces the functional programming concept of a pure function.
If I give a function a certain set of inputs, it will return the same result every time, no matter what. Nix is about building software the same way, whether it’s your own software, someone else’s software, or your entire OS: You declare all your inputs explicitly and it will be built the same way every time.
What really mattered to them, regardless of how they were using Nix, was its ability to bring purely functional programming concepts to computing areas that were previously off-limits.
From that single idea you get a whole ecosystem of tools. We mainly covered the Nix language, the Nix Package Manager, and NixOS, but there’s also a continuous build system called Hydra, nix shell, and a deployment and provisioning tool called NixOps. There’s probably even more.
The biggest barrier to adoption unfortunately is not people's inability to explain what the tool is in my opinion. It's that the tool is incredibly complicated, extremely hard to walk someone through compared to alternative projects, and honestly... In my opinion the problem it attempts to solve doesn't really exist.
I'll take an "impure" os or package manager over a pure one any day if complexity is a thousand fold less and the learning curve doesn't require half a decade. Got stuff to do!
> The biggest barrier to adoption unfortunately is not people's inability to explain what the tool is in my opinion.
That's simple: Nix is a package manager and the language used by that package manager; NixOS is a Linux distro.
> It's that the tool is incredibly complicated, extremely hard to walk someone through compared to alternative projects, and honestly... In my opinion the problem it attempts to solve doesn't really exist.
Speaking as someone who has worked as a DevOps engineer for some years, and managed Linux servers for a few years longer: that thought is incredibly naive. The problem of undefined and undocumented system state is a fundamental problem I encounter everywhere, and it's especially bad with legacy systems. I often do things on them blind and just pray for the best outcome, realising months later that some system was broken by one change I made and no one noticed for months.
> I'll take an "impure" os or package manager over a pure one any day if complexity is a thousand fold less and the learning curve doesn't require half a decade. Got stuff to do!
I thought the same at first, but unwedging Debian once a week on a different system is also not fun and a waste of time, and so is having servers in some undefined state with no one who knows how the config is supposed to be, or why or when it got changed.
The result in the end is that every system is different and unique, and your Ansible playbook for a common, well-thought-out task succeeds on 15 VMs but sometimes completely blows up on the 16th, because no one could have guessed that the state of the configuration there was so wildly different.
Yes that is a fair comparison. In the latter you have to write down or remember what you did to reproduce it, and even if you make a script it could screw up and leave your system in a bad state.
As someone who seems to be going along a similar path (started as a dev, no one was around to do sysadmin so I did it, and now in trying to modernize a bunch of really old/unpatched servers running a legacy system I'm learning how to devops) I feel better knowing I'm not alone in this struggle.
Yes. Imagine coordinating a software project with email and a SharePoint folder for the code.
Then you use git.
Current server state management is the former. Nobody knows what is running where, and if performance differences appear over time or between servers, how can they be bisected?
You basically have two choices: you can take the complexity upfront, and in a predictable fashion by learning Nix, or you can deal with the complexity after the fact when you’re dealing with dependency hell and a deadline is looming and your boss and/or client is mad.
The Nix language is basically JSON plus syntax sugar plus pure functions. A Nix derivation can be thought of as a super-powered lockfile that includes not just the versions of the dependencies, but also the build instructions and the environment in which to build them.
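As a tiny sketch of that "JSON plus syntax sugar plus pure functions" idea (a toy example, not taken from any real package):

```nix
let
  # A pure function: the same input always yields the same output.
  greet = name: "Hello, ${name}!";
in {
  # The rest is JSON-like data: attribute sets, strings, lists.
  message = greet "world";
  dependencies = [ "curl" "git" ];
}
```

Evaluating this always produces the same attribute set; there is no ambient state for the result to depend on.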
The argument for Nix is basically the same as the argument for writing pure functions wherever possible. Any amount of experience doing so will demonstrate that it is superior.
Now, Nix may be complex, and some of that may be reducible, but the fundamental idea of treating a build like a pure function is NOT reducible, and is well worth the effort of learning, because it will apply to ANY future pure build and dependency management tool.
They do, but the people who spend time on Nix issues typically don't count the time saved, because saved time is harder to notice.
When reproducibility issues take 16 hours every 3-4 months and Nix maintenance like updating pins takes 8 hours per month, most will feel like the first option is less work.
Imagine if you had data showing that:
- with Nix, your build is 99% likely to work
- without Nix, your build is 90% likely to work, but takes 16 hours to fix when it breaks
- the non-Nix build also has a 10% chance of breaking randomly at any time
- the Nix build takes 8 hours per month to maintain for the first 6 months, 4 hours for the next 6 months, then 1 hour per month thereafter
Which do you feel would be better? What I describe above matches the situation in my experience.
I can say from personal experience I've seen many days devoted exclusively to Nix upkeep and maintenance. That was from junior people to people who had spent half a decade or so deeply in the community and using Nix for their daily driver.
I've never had to do much for Nix itself, but packaging something to build from source can often require quite some effort. Applications with a fairly unconstrained build/install process upstream may expect to do a lot of things that are not allowed in the Nix build sandbox, like unrestricted network access or overwriting files in existing packages on the system. To deal with that you really have to dive in, learn how the sausage gets made in the upstream package, make some choices about if/where to compromise, and then spend some time tweaking and debugging. That can be a pain and can definitely take a day or two.
I've only had 'maintenance' issues with Nix itself on macOS, where OS upgrades routinely nuke Nix's hooks into the OS or add restrictions that break things. (But they do that to other package managers as well.)
I'm taking that approach with the package I've been working on, which has a somewhat pathological (by Nix standards) Gradle build which does things like
- manually download a copy of Elasticsearch outside of the normal Java dependency scheme
- run NPM to fetch remote libraries to build web assets at build time
- *also* run Yarn, for some reason
- use Git at build time
The ways it does all of these things are actually fairly thoughtful (for example, it does checksum the artifacts it manually grabs at build time to verify their contents), but they don't play nice with running builds in offline mode or under a user that has no $HOME. But it's one of those freeform 'my build tool configuration is a weird DSL in an imperative, general purpose, Turing-complete language' situations, and I'm not very familiar with either the language (Groovy) or the DSL. So it's a lot of quirks to cope with.
I've made quite a bit of progress in building it from source by making a few small patches and eventually disabling the sandbox for now, but it's still dying on a weird test failure for reasons I don't yet understand. At this point I'm just back to munging the binaries provided by upstream because I was mostly building from source to learn about the project and how it's distributed/deployed anyway.
I messed a bit with gradle2nix for a better-behaved, old-school fixed-output-derivation (FOD) based build with Gradle in offline mode, but that was pretty brittle, as gradle2nix is unmaintained and due to some design limitations couldn't actually capture all dependencies. I'm kinda interested in working out something better, but on the other hand, this is a third-party package and I don't use Gradle or Groovy for any kind of development myself, so mastering Gradle's quirks and wrangling it into the Nix sandbox for this package is more of a yak shave than a practical skills investment for me.
> In my opinion the problem it attempts to solve doesn't really exist.
Almost all Docker use cases are attempts to solve that same problem, but badly and only partially. The lack of adoption is really not caused by a lack of value.
This matches my experience with it so far. Extremely complicated and hard to understand, projects that use it have builds fail anyway except now with very hard to debug errors.
Good point. I'm much more likely to help others because I know I can get to the exact state they are in and reproduce their issue with a simple `nix build`.
That is absolutely not true. If you start to get the hang of it and follow the way things are supposed to be done, things get easier over time. You need to invest more time upfront in your configuration, but in the long run it pays off and saves you from an entire class of errors.
> projects that use it have builds fail anyway
The point of Nix/NixOS is not to have no failing builds, but to make those builds reproducible and deterministic as much as possible, and to make failures show up early, before the point of no return. A system build is supposed to fail early, not midway through a major update while prompting you to merge some config under /etc by hand.
You say it's "absolutely not true" that it's "hard to understand", then go on to explain how you have to "invest upfront more time", and also that you have to learn "the way things are supposed to be done".
You are very clearly describing that people have to work more to understand it. The person you're replying to even tried! Denying the experience of other people does not make that go away. It just means that the problem you're pretending doesn't exist will never get fixed.
I've never seen it actually pay off in industry. I've seen it be used as good job security while other devs just wrote docker files and got things done.
As the only guy maintaining the flake.nix in my team's repo, I don't think it's really contributing to my job security. I'm just happy that they don't mind the extra files and commits here and there because I value the ability to contribute from different devices without worrying about which versions of what are installed.
Maybe it'll be job security if people start agreeing that downloading binary tools in CI without a hash check is an unacceptable attack surface, but until then it's just this weird thing I'm doing on the side.
I do catch a lot of bugs where people are relying on dependencies that they happen to have installed but have not declared. It's the kind of thing that prevents newcomers from being successful out of the gate, or makes taking a local process and putting it in CI difficult, but fixing those is not exactly high visibility.
Then you're not paying enough attention. There are plenty of companies using nix to distribute a reproducible environment (if you don't believe me, why not go search GitHub for "flake.nix" and see how many "industry" repos you find).
I think it would be more productive for you to sit down and give it a fair chance than posting little rebukes all over this thread.
I gave it a fair chance and, believe it or not, it was a deciding factor in why I left a company. Only one person could maintain and fix deployments, and not for lack of trying by seasoned experts and newcomers alike. It was the worst user experience I have probably ever encountered. Meanwhile I was able to pick up Terraform and Docker in a matter of days...
In my experience, those "others" that "just wrote docker files" are exactly the ones who don't know how to rebuild the system in a reproducible manner if their CI environment gets reset for some reason, as they find out that stuff that was "supposed to be there, pinned and configured" wasn't.
In my experience, the months required to get a handle on Nix are not worth the benefit (which is shaky in my opinion) compared to competing technologies. We don't have to agree, but that's my take...
If you have something easy to deploy, like a Go binary, you can just write a Dockerfile, but for big Python projects that start to compile dependencies, that quickly stops being true.
The Dockerfile also likely doesn't match the software you run and test on your local machine, so debugging is sometimes not as easy. Of course you can debug inside the container, but then you're missing all your tooling and need to bring it with you. And rebuilding from a Dockerfile is often not reproducible, so if you want the container back from a year ago and you no longer have the artifact, you're probably out of luck.
With nix you can easily open a shell with the packages used in the docker image or go back in time and reproduce that image from a year ago with the flake.lock from a year ago.
Also, applying patches to dependencies used in Dockerfiles is nowhere near as easy as it is with Nix.
Most people would opt to not apply patches to their dependencies in my experience. Seems kinda sketchy if that's something you have to do on a regular basis. I'd chalk that up as a possibly serious business concern depending on the magnitude of the fixes, the importance of the dependency, and the frequency.
I have written software that would have been 100% not package-able any other way.
Also, let’s not lie to ourselves, there are plenty of ridiculous contraptions out there, like docker-images used for ML that take up some insane space, and are updated each day. Packaging is a hard problem, and there is finally a tool that can actually solve it.
You can continue spending your time messing around with your system then. I learned nix in a short amount of time and it has supercharged my development workflows and reduced the overall complexity. I have too much stuff to get done to not use it.
Based on how I manage my system, Nix appears to be something I could use productively. The problem isn't so much one of being able to explain what the tool is, but of instilling confidence that it lives up to its claims. We exist, after all, in an industry of hyperbole. It also doesn't help that the solution is layered on top of an operating system that has traditionally been managed in a very different way.
A friend of mine said that he is currently using it instead of packer at his current gig. He can use the same code to build any type of output, AMI, docker image, VM etc. I dig that. But I'm still not gonna learn nix because I don't do enough of that stuff to warrant the pain of learning nix.
You don't have to manage your system with NixOS to reap the benefits of Nix. It solves very real problems that very much exist, though they might not exist for you if you're a one-man show deploying WordPress to GoDaddy.
Barrier to entry:
1. Run the nix installer
2. Enable flakes
3. cd project
4. nix run
This ensures you run the package with every dependency except the kernel pinned to a hashed version. If dependency hell is not a problem for you, be happy!
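For reference, the flake.nix behind step 4 can be as small as this sketch (assuming an x86_64-linux machine; `pkgs.hello` stands in for whatever the project actually builds):

```nix
{
  description = "Minimal flake so that `nix run` works";

  # Pinned on first use: flake.lock records the exact nixpkgs revision and hash.
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # `nix run` builds and executes this package's main program.
      packages.x86_64-linux.default = pkgs.hello;
    };
}
```

The flake.lock generated on first run is where the "everything pinned to a hashed version" guarantee comes from.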
Ok, and this requires root access, sets up some global directories under root, and a new user. Me as the administrator: why the hell do I need a new user, what is the nix store, and what are the conditions that mutate it? (I know the answers to these questions, but it's a barrier for people who give a shit.)
> Enable flakes
What the fuck is a flake? Reads a bit... what the fuck is a derivation? (again: I know the answers to these questions already, but the invention of jargon by nix devs is a massive barrier to entry that shouldn't be overlooked, it's extremely confusing)
> cd project
Ok now I'm comfortable doing things I know
> nix run
Fine, but what about auto envs and nix shell? I don't use these with make or cmake. I need to attach a debugger, where does it go? How do I set up my IDE that has no idea nix exists?
My point is, nix has a lot bigger of a barrier than these four lines, and it's really naive to think that's it.
Okay, so you're required to read some documentation ahead of time; that's where your problem lies.
What's a Terraform module? What is Terraform? What is a provider? Why don't I just build all my infrastructure with the AWS Console? Why is it its own weird language? What is this state thingy that just ended up in my folder? Do I give it to the devs?
I think it's pretty much consensus that Terraform is great for provisioning anything with an API. Nix does the same for your packages, partitions, OS, containers, shells and many more things in the same functional manner.
In a company not everyone has to be a Nix wizard either, if a small team knows Nix they can build the Nix infra, then developers can reap the benefits of not having to mess with it at all.
Just because people are unable to comprehend the benefits doesn't mean they do not exist. And if you wanna reap great benefits you might need to spend an hour or two reading things.
Yes, it's a novel way of doing things, but it's also one of the most actively developed projects in the world, with one of the highest numbers of contributors.
> Fine, but what about auto envs and nix shell? I don't use these with make or cmake. I need to attach a debugger, where does it go? How do I set up my IDE that has no idea nix exists?
The people that know Nix well enough will assist the ones that don't; if you enter a nix shell and start VS Code from there, it'll inherit the $PATH that Nix sets, meaning it'll find all your dependencies.
On a tangent: I wonder why this is still the default.
For quite a while now, the nixStatic binary has had support for creating, as a non-root user, a "${XDG_DATA_HOME:-${HOME}/.local/share}/nix/root/nix" -> "/nix" unshare chroot before running the rest of the command, if "/nix" is missing.
It's only a real issue if you actually need to run something as root, or something else that needs an unshare chroot itself, but in that case I guess you could just have a /nix store folder anyway.
Almost everything here is valid, but iirc the last time I ran the nix installer I thought it offered a home directory based install now that doesn't require root.
In theory you can use something other than /nix, but then you have to recompile everything yourself, so not many people do it; I'm not sure about its current state.
I just dipped my toe into the Nix pool a couple of weeks ago, and there are instructions to use root to create a /nix and grant ownership rw to your user account. No further admin required, but everything else seems to work as if using the Nix root user method.
The installer used to have an option for this, but nowadays it's discouraged. How come you wanted a single-user install so much? And j/w, are you on macOS?
Ok cool! The reason I asked is that the multi-user setup has more benefits on Linux than on macOS. That's because on Linux, the full build sandbox is actually available, and the store's immutability is enforced by default.
The tradeoffs are obviously yours to consider. But the normal Nix build sandboxing helps protect you from nasty things like crypto miners in setup.py or whatever, as well as improving reproducibility.
That's less relevant on macOS where the sandboxing story is not so great.
Personally, using Nix with a daemon seems like a better setup to me but adding another highly privileged process unnecessarily is obviously a real security concern. There is some ongoing work, btw, to reduce the level of privileges that the Nix daemon needs.
Is Nix harder to learn for somebody who knows nothing about computers and OSes? Probably not. It might even be easier.
But that's not Nix's primary audience. It's targeted at people who already know a fair bit about the current paradigm and have plenty of skill with it. For those people, it's very different. Things that are easy for them will be hard until they learn the new paradigm.
I get that people who have already learned Nix and like it are in the new paradigm. Understanding it is not a problem for them. But pretending that work doesn't exist for others is unhelpful. And this sort of casual dismissal makes me suspect that even if I learned Nix, the experience still might be pretty bad. If Nix advocates can't take seriously the difficult noob experience, maybe the experience is painful all the way through.
I think the parent was focusing on the first point about the install. That particular argument against creating users and writing to directories would be a barrier to install any software.
Imo the harder part is learning bespoke build processes that you may not own in order to get software that assumes it can perform arbitrary network access or other naughtiness at build time to build successfully in a restricted sandbox.
The language is maybe a little strange at first but there's really not much to it.
You don't have to understand Nix's internals to write 95% of all Nix; it's just "JSON with functions" after all. There are definitely advanced things, but the Nix and NixOS developers maintain those for me.
Just like I don't know how to implement any crypto or efficient 3D pathfinding, I don't know how to implement NixOS. But I can write a derivation using the helper functions for the language I want to package, and there aren't many of those left to write these days, since nixpkgs is huge already.
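For example, packaging a Python library with the nixpkgs helpers can be as short as this (a sketch: the package name and hash are hypothetical placeholders; `buildPythonPackage` and `fetchPypi` are the real helper functions):

```nix
{ python3Packages }:

python3Packages.buildPythonPackage rec {
  pname = "example-lib";   # hypothetical package name
  version = "1.0.0";

  src = python3Packages.fetchPypi {
    inherit pname version;
    # Placeholder: Nix refuses to build until the real hash is supplied.
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
}
```

The helper handles fetching, building, and installing; you mostly just declare the inputs.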
> Part of the difficulty is it means different things to different people.
That's definitely how it seems to me. The pro-Nix stuff I see is generally about the theory much more than the practice. Which was also my experience with functional languages when their hype cycle was last on the rise.
On the one hand, that's fine. I like ideas, and I think taking an idea and running with it can be really interesting. You can clearly see that in history's various art movements, for example. On the other, for people who are just trying to get things done, it's often alienating and tedious, because the people in the grip of their Big Idea often seem heedless of other perspectives, and frequently can be quite evangelical about it.
Personally, my strategy with Nix, as with the various functional languages, is to keep a distance from it, waiting and seeing. Perhaps it will influence more mainstream projects, bringing the benefits to me without a lot of upheaval. Perhaps I'll have a project that really needs its particular benefits, and so I'll take on the cost of a paradigm switch. But in the meantime, I have stuff to do.
This is probably a pretty good read on the situation. People who have really thrived with Nix are often at the intersection of 'FP people' and 'extremely stubborn Linux people', and that's because sometimes it takes getting your hands dirty and fighting a build system that belongs to a package you've never used before to get the Big Ideas to pay out. When the footwork is already relatively familiar to you, it makes it easier to push through whatever obstacles there are to playing with those attractive Big Ideas in practice.
The Nix community's roots are definitely with FP people, partly due to the language and its inspiration and design, and partially also due to early success using Nix to solve particularly painful Haskell dependency hell problems years ago. All of the original 'marketing material' for Nix focused on principles and properties that would be attractive only to people who already knew and valued those things, which was mostly FP folks.
> Perhaps [Nix] will influence more mainstream projects, bringing the benefits to me without a lot of upheaval.
This is definitely already happening. Off the top of my head, Nix has served as inspiration for Guix, Habitat, and Spack, which are all respectable package managers that try to make things a little smoother than they are with Nix in terms of UX. The latter two are also more conventional, with a relaxed notion of 'purity', so it may be easier to get packages to build in them when those packages are built in problematic ways. (Guix, if anything, is even stricter about packaging conventions than Nix, but it has a really nice CLI and the language might resonate more with some people, so if Nix has given you pain it's definitely still worth trying.)
> Perhaps I'll have a project that really needs its particular benefits, and so I'll take on the cost of a paradigm switch. But in the meantime, I have stuff to do.
I love Nix and its fundamental design, and I want to see it flourish and grow, both in general and in my own professional life. But at work, I try to maintain the same attitude as you describe here. I use Nix for myself everywhere I can (with some escape hatches in place!), but for projects that others work with, I only use Nix where I feel that some specific aspect of the project calls for it.
All of that is just to say that even to some folks who really are drawn to Nix in substantial part due to the Big Ideas that power it, your pragmatic stance is quite understandable and entirely welcome.
Feel free to just play around with Nix in a low-stakes way and advance your usage as curiosity or new problems drive you to do so. You don't have to jump all the way in to benefit from Nix or get a taste of it.
Thanks! Really helpful comment. Especially this bit:
> People who have really thrived with Nix are often at the intersection of 'FP people' and 'extremely stubborn Linux people'
As an extremely stubborn Linux person, that makes total sense to me. But I also almost never recommend Linux to average folks, because I'm keenly aware how far along certain bell curves I am. I would love it if more advocates reflected on whether the personal characteristic that makes a technology great for them is one that makes it bad for others.
Thanks also for the pointers to projects inspired by Nix. I'll check them out.
I often find the ideas that are difficult to explain to be the most interesting ones. It’s a new paradigm; naturally we need to develop “the neurons” to deal with it, to reason with it, to hold the paradigm in our minds.
I find the best way to get the paradigm is to dive in.
I see that people who did that are always enthusiastic, so it must be worth it.
It doesn’t help that the term is hugely overloaded. Nix might mean the package manager, the shell, the operating system. There are so many valid permutations of letters; why they chose to reuse Nix for everything is a mystery to me.
The other issue is complexity. If you manage to figure out the jargon, you’re greeted with the requirement that you completely port an entire project to get any benefit, and that’s non-trivial. It requires learning a whole new language, and when the (highly opinionated) language conflicts with other tools’ ideas, for example pip’s, the documentation generally bashes the other tool, boasts how much better Nix is, and then proceeds to have devs write dozens of lines in a new invented language where one line of Python used to be enough.
Just to elaborate a bit for those not familiar with Nix (slightly simplified to exclude recent support for content addressing): Nix works with derivations. A derivation is basically a data structure that specifies how a package is built. Derivations are normally not created by hand but via a function (e.g. stdenv.mkDerivation).
When you ask Nix to build a package, it hashes a normalized form of the derivation data structure. This hash is useful in various ways, but one way it is used [1] is to look up whether the derivation's output is already in the Nix store, because if it is, there is no need to build it. So Nix looks up whether
/nix/store/<the_derivation_hash>
exists. If it exists, the build is done. If it doesn't exist and you have a binary cache configured (which by default is the binary cache provided by the NixOS project), Nix will look up the derivation hash in the binary cache. If it exists there, Nix will download the path into the local Nix store. After that
/nix/store/<the_derivation_hash>
exists in the store and the build is done (without building anything). Only if all of that fails will Nix actually build the derivation.
Now, one of the cool things about Nix is that it is derivations all the way down. It's not just what we traditionally think of as packages that are derivations: people wrap up all kinds of things as derivations, including configuration, etc. Since derivations are usually generated by functions, there are all kinds of useful functions that make derivations for, e.g., single configuration files, scripts, and so on.
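For instance, a whole configuration file can be a derivation; `pkgs.writeText` is one of those helper functions (the file contents here are made up):

```nix
{ pkgs }:

# Produces a store path like /nix/store/<hash>-myapp.conf.
# Changing a single byte of the text changes the hash, and so the path.
pkgs.writeText "myapp.conf" ''
  log_level = info
  listen = 127.0.0.1:8080
''
```

Anything that depends on this file refers to it by its hashed store path, so a config change propagates to exactly the derivations that use it.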
In the end, building a NixOS system generation is just building a derivation. nixos-rebuild switches to a different generation by just setting a bunch of symlinks to an output path in the store containing that system generation (/nix/store/<system_config_derivation_hash>).
At any rate, when you make a one-line change to a 200-line Nix configuration, Nix does not need state to keep track of what to rebuild. Nix will just try to build the derivation (and its dependencies), hash the derivations, and find that most of their output paths are already in the store.
Some might argue that the store is then state. But it's not: at build time you are evaluating a pure function with memoization, and the Nix store is the memoization cache.
[1] There is also a package name and version in the store path, but let's keep it simple.
I agree with everything but the last statement. This all comes down to: do you consider memoization to be state?
I predict people's answers to this question will come from their experience with memoization. Here's mine: I kept trying to get Nix to build TensorFlow locally, so that I would get the avx512 benefits of the big but GPU-less machine I had. I hadn't realized some other derivation had already pulled TensorFlow from the online cache, and that copy didn't have avx512 enabled. I kept making shells, trying TensorFlow, and seeing it didn't have support. The solution was to tell nix to disregard the nix store, in order to force the local build. This experience has left me with the concrete feeling that the Nix store is full-on state, and I the user must be aware of it.
> The solution was to tell nix to disregard the nix store, in order to force the local build.
If this actually led to avx512 being enabled in the package, then that's a bug. Nix builds should not be dependent on the machine doing the compilation; all such autodetection should be disabled via configure flag or patched out.
Then, the right way to enable avx512 would be to pass some 'enable avx512 please' flag to the package's configure flags. Which would then trigger recompilation, without any 'disregard the nix store, in order to force the local build' options.
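In nixpkgs terms that could look something like the following sketch (the flag name is hypothetical; the real tensorflow derivation may expose different knobs):

```nix
# Hypothetical: request avx512 explicitly instead of relying on
# build-host autodetection.
tensorflow.overrideAttrs (old: {
  configureFlags = (old.configureFlags or [ ]) ++ [ "--enable-avx512" ];
})
```

Because the flag changes the derivation's inputs, the derivation hash changes, so Nix rebuilds instead of reusing the cached store path.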
It is, but the possibility of such bugs is a downside to the approach (not saying it's a showstopper but it is a negative). Some of the nastiest software problems to track down are the ones that cause some fundamental assumption everything rests on (often a cache keying assumption!) to be broken, making everything behave wrong, including the tools you're supposed to use to track down problems.
If you're going to build an entire system on an assumption of referential transparency you want to be able to guarantee that everything really is referentially transparent, and one valid criticism of nix is that it can't really enforce that in all cases.
There is work being done to address outputs by their content hash instead of deriving paths from their input hashes, which in theory should eliminate this problem.
Yes, you have unfortunately discovered that sometimes the hardware itself is an input that isn't always captured explicitly, but also isn't controlled for with sandboxing. Ideally enabling avx512 would be an explicit input to the tensorflow package, but based on your experience it sounds like this feature is detected during the build automatically.
I hope that issues like this get better over time thanks to projects like Trustix, which would make non-reproducibility like this more apparent.
> Ideally enabling avx512 would be an explicit input to the tensorflow package, but based on your experience it sounds like this feature is detected during the build automatically
Looks like it sets e.g. avx2 on the flags, forcing the package to be most compatible, thus removing the hardware state (the builder may or may not have avx512; ideally Nix packages should remove hardware autodetection to make builds pure and consistent in the face of cross-compilation).
It should indeed have an input in some way, e.g. to add more flags. Then, AIUI (still learning Nix), one would be able to call the package function with that input from the dependent package function, thus defining another package than the default one, which would be reified as its own specific derivation for that package to depend on.
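A sketch of that pattern, with hypothetical names throughout: the package function takes the feature as an explicit input with a default, and a dependent package instantiates its own variant.

```nix
# tensorflow.nix -- hypothetical package function with an explicit input
{ lib, stdenv, enableAvx512 ? false }:

stdenv.mkDerivation {
  pname = "tensorflow-example";
  version = "0.0.1";
  # src and build details elided for the sketch
  configureFlags = lib.optional enableAvx512 "--enable-avx512";
}
```

A dependent package would then write something like `callPackage ./tensorflow.nix { enableAvx512 = true; }`, producing a distinct derivation (and store path) from the default one.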
I think part of the problem is that derivation hashes sometimes don't fully cover the intermediate states of a derivation during the build process. This might lead to two hashes pointing to effectively two different configurations. I've had that experience in particular with non-reproducible derivations.
Even one of those in the store will make the store, or at least a subset of it, stateful.
Thanks for the clarification! I'm still new to Nix, so I'm trying to share useful things I'm learning without overstepping my expertise and saying wrong things.
My mental model of Nix was that if I'm in system state A, which is the result of performing task X + Y, and I want to get to system state B, which is the result of task X + Y + Z, then Nix would recognize that it's already in state A, so it only has to perform task Z to get to state B.
It sounds like what you're saying is that Nix's actual behavior is that if I'm in state A and tell Nix to bring me to state B, then Nix still performs tasks X and Y again, but the results are cached, so tasks X and Y feel as fast as a no-op, and the only task I perceive as taking time is task Z.
Nix builds don't know anything about state, and they work just like you describe. Nix builds power basically everything you do with Nix.
But for 'installing' packages, more happens than just creating builds and leaving them somewhere in /nix/store. In those cases, you also have a profile manager (nix-env, `nix profile`, nixos-rebuild, darwin-rebuild, home-manager) which builds a symlink forest (pointing into the Nix store) that represents your complete configuration. That forest is called a profile generation, and represents how, e.g., your user's Nix profile was configured that particular time. (When you perform a rollback, your profile manager is just setting the current profile to a previous generation.)
Each generation of your profile has an activation script (and some profile managers may also have some bit of profile activation logic of their own, idk). That activation script may have to perform some state management, e.g., restarting a service or telling the service manager (systemd on Linux or launchd with Nix-Darwin) to reread its configuration files to recognize the availability of a new service.
This little bit of state management is hopefully as minimal as it can be. In fact, it's generally expected/communally enforced that Nix packages have to work normally when run directly from their respective residences in /nix/store, without having been installed into any kind of profile at all!
So during normal daily Nix usage, I've never really had to think about it. If you want to implement or contribute to a profile manager, you'll have to!
That's largely correct! To use more precise language, let's discuss two distinct phases: building and activation.
Nix will always "build" x, y, and z. But x and y might turn into a no-op by being cached in the nix store.
System activation is when all those things are symlinked into place. If x, y, and z are, say, systemd services, then there might be some logic that checks if x and y have changed and if the services are already running and decide what to do -- this is probably the most similar to how Ansible works. But this is also a small part of the big picture. Activation is pretty fast even starting from nothing.
When it comes to building derivations, yes. Which describes almost everything you can do.
For live system rebuilds in NixOS, the final step involves examining the system to decide which systemd services need restarting and restarting only those; that’s part of the process called activation. But that’s the only exception, and doesn’t happen if you reboot instead.
Memoization is state. As a functional programming maximalist myself, I know it hurts a little bit, but still, Caching/Memoization is statefulness.
That said, you are right that as a user of the system, that statefulness is abstracted from you and you don't have to worry about it (until some subtle caching bugs forces you to dive deep in the rabbit hole)
Yeah I really bumped on this. A cache is absolutely state. It is sufficient to say "a referentially transparent cache is a technique to store state while avoiding most of the drawbacks typical to storing state". This is already a big accomplishment! No need to also redefine the word "state" to make the technique seem even more magical than it is.
It's state at a different level than most configuration management software deals with. When Puppet reasons about the state of an existing system, it asks things like 'what's on the PATH right now?'. NixOS doesn't— it reconstructs the directory tree under /run/current-system/sw/bin without asking that. Same with 'modification' of files that live under /etc. What you and the GP write here is nuance worth admitting, but I wanted to highlight what people are getting at when they say Nix's approach is 'stateless'. What they mean is that Nix's approach to state is really qualitatively different, and hands state off to be managed by a lower level part of the system as much as possible.
Quibbling about what is and is not state isn't particularly productive. There is intrinsic and unavoidable subjectivity in the definition of state. The common pattern is to ignore out-of-memory errors, completely ignore the fact that in the real world time-of-execution is irreducibly an observable side effect, and that whether or not a given computation completes may even depend on whether or not an entirely separate OS process has or has not consumed the memory on the system. There's already non-zero pragmatism in the definition of "statefulness", and adding "whether or not the state of the disk has changed as a result of this 'pure computation'" isn't exactly an impossible stretch. It isn't a binary characteristic; when you really get into the nitty-gritty it is unavoidably a continuum. All real systems have state of some kind.
I'm not quibbling about what state is - the author is very much confusing Nix with something like a control-loop based approach that converges on the desired state by examining the current state. Regardless of your definition of state, that's not what Nix is.
It is not incredibly fast; while it is running you can do something else, like reading Hacker News. But there is definitely room for improvement, and smart people are already thinking about that right now.
Did you read the entire comment? The core idea of functional programming is pure functions (given the same inputs, the output is always the same). Nix applies that to building systems.
Pure functions are just a concept that is used in FP world
Yet, pure functions can "stand alone". You do not need FP to use pure functions. They are independent from FP. You can be OOP maniac and still use them.
So unless you want to sound fancy and trendy, why call Nix functional instead of side-effect-free?
> You do not need FP to use pure functions. They are independent from FP. You can be OOP maniac and still use them.
This is not true. OOP is fundamentally built around impure operations.
Objects are persistent references that you send messages to or that you call methods on (depending on your OOP language of choice).
A persistent reference that is stable across different invocations (as opposed to a new reference being created on each invocation) requires that those invocations be impure operations, because the same invocation performed multiple times must have different effects, otherwise there is no point to having a stable reference.
That's not necessarily a bad thing, but OOP is fundamentally impure.
(If you do create a new reference on every invocation, then you no longer have OOP. You merely have a namespacing system where the first argument to a function can alternatively be written with a dot).
I think this is a bit more subtle, and a bit less interesting to say, than you're thinking it is and responding to. You need to be impure in OOP world but not in every function you write. I write many pure functions in my OOP work; they make the whole thing easier to reason about.
Sure, but it's precisely those places where you're using pure functions that aren't object-oriented. That is, it is possible in some nominally OO languages to write code that is not object-oriented, and that's where pure functions come in. In a language that has only classic OO-style objects, pure functions (and their counterpart in data space, immutable data structures) are generally not used or are not very idiomatic, e.g. as in Smalltalk (you will have things that are kind of pure, but the purity can almost always be broken in some way).
Sure in the same way you can create an object in FP by sticking functions in a data structure or create functions in a logic programming language by constructing certain relations.
Heck you can even build "objects" in C.
And all of these methods are indeed used in those languages.
But none of those capture the spirit of object-orientation, in the same way that CalculateSomething is not object-oriented, unless it in turn generates a mutable object. That's not to say it's bad code or that it is uncommon, even in say Java. Simply that it's not object-oriented (and these days Java is getting less and less object-oriented anyways, with records and pattern matching explicitly separating data from functions).
Indeed in Java materials and blog posts from > 5 years ago, using pure methods of the form `CalculateSomething` was generally something used begrudgingly, where too many of those methods was specifically called out as an anti-pattern (e.g. the dreaded "utility class").
Now that we have another Nix post, maybe someone can enlighten me about something I've been wondering about.
I'm one of the maintainers of a popular django application. Someone made a nix package of the project, but we've now twice gotten invalid bug reports from people using the package because the package depends on "django_4" and whenever someone updates that nix package, the package for our project breaks.
Of course we, like all other python projects, don't support using other dependency versions than the ones in the requirements.txt file. So when someone just uses a different minor version of django, stuff breaks. What's the disconnect here? Why do all nix packages that use django_4 need to use the same version? That seems super prone to breaking all kinds of stuff. Same for the other 35+ dependencies that run arbitrary versions instead of the ones defined in the requirements.txt file.
I am not an expert, but here’s my attempt at a useful comment.
On the highest level, `nix` is an alternative build system. So, if someone packages your app with `nix`, there’s now extra work to keep that working, and it’s on the packager to keep it working. If they packaged your app such that it’s using different dependencies than those required, that’s a bug in the package. As a maintainer, you can help here by making it clearer what versions are accepted, and by making it easier to run the tests for a package.
If we open a black box, there are two things in play here: Nix-the-build-system and nixpkgs package collection.
The build system is very open ended and can specify all dependencies precisely, but it’s on the user to define what that means exactly.
nixpkgs is a coherent collection of nix packages, a bit like a Linux distro. In particular, it _generally_ has one version of each package, and there’s some testing to make sure that all the packages work together.
Now, to package a Python app with Nix you can either pull dependencies from nixpkgs, in which case the situation would be similar to, e.g., packaging for Debian.
Or you could create a hermetic environment, where an app gets an isolated copy of dependencies, specific just to the single app, a situation similar to using virtual env.
It sounds like what happened here is that your app got packaged in the first way, but actually it can work only in the second way. I assume you do specify specific compatible versions of Django somewhere, and if a package (be it .deb, .rpm, or .nix) doesn't respect that, that's a bug in the package.
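For the second, "hermetic" approach, one common pattern is to give the app its own Python package set with a pinned Django, independent of the global python3Packages. This is a sketch: the version is whatever requirements.txt demands, and the hash is a placeholder to be filled in with the real one.

```nix
# Hermetic sketch: override Django only for this app's Python package set.
let
  python = python3.override {
    packageOverrides = self: super: {
      django = super.django.overridePythonAttrs (old: rec {
        version = "4.0.10";  # the exact version from requirements.txt
        src = super.fetchPypi {
          pname = "Django";
          inherit version;
          # placeholder hash -- Nix will refuse to build until it matches
          hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
        };
      });
    };
  };
in
python.withPackages (ps: [ ps.django ])
```

The rest of nixpkgs keeps its single shared Django; only this environment sees the pinned one.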
Not really: nix is way more flexible and more up to date, nix also often runs tests, and different Pythons cannot interfere with each other as easily. On a high level things are similar, but the details are vastly different.
> Or you could create a hermetic environment, where an app gets an isolated copy of dependencies, specific just to the single app, a situation similar to using virtual env.
That could also be done with nix but is often not because upstream pin quality is often lacking.
It seems like Nixpkgs aims to minimize the number of package versions in use at one time. Not just nix, most package managers do, it seems (e.g. you wouldn't expect to find different minor versions of Nginx in Debian, would you?)
So by that same logic, there is only one version of Django 4.
It is definitely possible with Nix to use the precise versions of what's in your requirements.txt, but I'm not sure if the Nixpkgs maintainers would allow all that extra duplication upstream.
I packaged some python applications in nixpkgs, and it seems the consensus is to try and relax the dependency so that the globally packaged version is used, but if it fails then you can override the version yourself. Though this is not done through the requirements.txt, because that file does not have enough information (no integrity hash, for example).
> but nothing you said works in practice for python packages
How do transitive dependencies in the Python ecosystem work, then? I assume Django works with multiple versions of python and bcrypt. I assume pandas works with multiple versions of scipy. Is there no semantic versioning? If everything requires an exact version, how do you prevent everything from grinding to a halt?
> Is it fair to surmise that python applications with python dependencies do not really work well as nix packages and shouldn't be used?
Let's not conflate Nix and Nixpkgs. Nixpkgs has its reasons for minimizing redundant packages, however it is certainly possible to package your app with Nix and use the exact specified dependencies.
> How do transitive dependencies in the Python ecosystem work, then?
Not very well.
> how do you prevent everything from grinding to a halt?
I don't have a good answer for you.
> Is there no semantic versioning?
You can read django's release process here [1], not sure how it's relevant. I'm not the maintainer of django, but of a project using django. Would it be better if all software was perfect, had no bugs and used perfect semantic versioning? Yes, I would say so. Is that a requirement for using nixpkgs?
> Nixpkgs has its reasons for minimizing redundant packages, however it is certainly possible to package your app with Nix and use the exact specified dependencies.
I'm not packaging it, someone else is, it breaks and they come to the project to raise invalid bug reports.
Well you said earlier that nothing I said works in practice for python packages. My only point is that it must work at some level in the python ecosystem, else the ecosystem would collapse.
Anyways, it sounds like you're unhappy that someone did a bad job packaging your application. That sucks. Elsewhere in this thread someone mentioned that there isn't a strict single version policy in nixpkgs, so this can probably be easily fixed. I'd suggest filing a bug in Nixpkgs.
> Elsewhere in this thread someone mentioned that there isn't a strict single version policy in nixpkgs, so this can probably be easily fixed. I'd suggest filing a bug in Nixpkgs.
There isn't one, but we are not collecting multiple package versions for no reason, and since Python itself cannot handle multiple versions of a package well, they are only allowed outside of pythonPackages, where all end-user applications should live.
In this case I think it is important to distinguish nix (the package manager) and nixpkgs (the popular package repository / distribution used with nix).
Packaging python applications with nix is doable, but you have to specify the exact versions of your dependencies and for that you can't easily use nixpkgs.
Nixpkgs tries to keep a minimum number of packages (like Arch or Debian as well), so each of the dependencies will typically only occur with one minor version for each release of nixpkgs.
We could still use nixpkgs to build our application, but we would have to override each of our dependencies to the right version, and that approach can get quite tedious for a large number of dependencies.
Fortunately there are tools to automatically generate your dependencies from a requirements.txt such as mach-nix or pip2nix.
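As an illustration, mach-nix consumes the requirements.txt directly; this sketch follows the shape of its README, but check the current docs for the exact import incantation and supported options (the pinned tag here is illustrative):

```nix
# Build a Python environment from requirements.txt via mach-nix.
let
  mach-nix = import (builtins.fetchGit {
    url = "https://github.com/DavHau/mach-nix";
    ref = "refs/tags/3.5.0";  # pin the tool itself too
  }) { };
in
mach-nix.mkPython {
  requirements = builtins.readFile ./requirements.txt;
}
```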
What's the point of a minor version change if it's breaking? Does Django not have a versioning policy that enforces non-breaking changes between minor versions?
This is such a condescending attitude. What you mean is applications that are maintained the way that you and the Nix developers think an application should be maintained.
It's incredibly naive for a package manager as ambitious as Nix to assume semver. I'm a big fan of semver myself, but the vast majority of software projects follow it imperfectly or not at all, and for good reason—it's nearly impossible to follow it perfectly, because even bugs are part of your API. Every project I've worked on has eventually had something break on a version upgrade because we were depending on something that was later decided to be a bug (but at the time was just how it worked).
Elm can mostly get away with enforcing semver because they designed it that way at the language level, but Nix wants to manage dependencies written in all languages and ecosystems, which have dramatically different versioning practices.
Ah, fair enough, I misunderstood. I thought the tests they were recommending were tests to ensure backwards-compatibility between version bumps, I didn't realize they were talking about the downstream package's tests.
I still disagree with the insinuation that it's everyone else who's screwing up and if we all did things the way Nix wants us to then Nix would actually work just fine. That's just another way of saying Nix doesn't work in the real world.
It is unfortunate that some in the nix community come off that way, because I would say that in general Nix goes to great lengths to adapt to the world as it is. Especially compared to, say, Bazel.
I myself have been using nix in an org that is blissfully unaware of nix for about 2 years, if that's any indication of how adaptable it can be.
You're totally right. If you're a package maintainer and you find out some package is misbehaving even though all if its included tests pass, it might kinda make you feel like kicking the thing and calling it junk.
But we should recognize that some of what drives that is just defensiveness, and some is personal frustration. At the end of the day, Nix and Nixpkgs are for letting people run useful software more or less as it exists. It's not just for users or developers of perfectly tested, bug-free software. (Nix itself is certainly neither of those things, and neither is Nixpkgs!)
Nixpkgs does not assume semver; that's why, where possible, we run the package's tests and our own tests, and build dependent packages, to make sure the most obvious breakages are noticed before things are even merged.
Ah, I thought you were saying that if we all just used e2e tests to ensure we didn't make breaking changes in minor versions, we'd be fine. I didn't realize you were talking about the downstream package's tests.
I do still take issue with your insinuation that it's the package maintainers' poor practices that are at fault here. The real world is a messy, complex place and "best practices" don't translate well from situation to situation.
OP didn't ask for their package to be included in Nix. Presumably OP's system works for them and for their use case, but whoever created the Nix package made assumptions that turned out to be flawed. It's not fair of you to say that those bad assumptions are OP's fault because their package isn't "properly maintained" and doesn't "work as it should".
Someone (you?) made a bad assumption. Don't cast blame for that on someone who only knows Nix exists because it sends phony bug reports their way.
Sounds like the problem is with Python maintainers who don’t understand that breaking changes should only be made between major versions.
If that’s not possible though then as sibling comment said - you can override the dependencies and the nix maintainer should make sure the package works as expected
Sounds like the problem could also be with Nix maintainers who don't understand that "semver" is not a universal law of nature and that not all projects and ecosystems follow it. This kind of blanket dismissal can cut both ways.
Semver (the website and "spec") was created in 2009 by some guy. It's not an RFC, a standard, or anything like that. Yes, it gained widespread adoption. Yes, the guy in question is a cofounder of GitHub. So what? You cannot force it upon everyone. Python is about 20 years older than semver. Django is several years older. Should the whole ecosystem change their conventions because it's more convenient for a few people?
Except Django site says that a.b are feature releases which should be backwards compatible except for specific exceptions. If their software truly breaks “with every update to django_4” then it’s either a problem on Django’s side or a problem in how said person uses Django
> * Versions are numbered in the form A.B or A.B.C.
> * A.B is the feature release version number. Each version will be mostly backwards compatible with the previous release. Exceptions to this rule will be listed in the release notes.
> * C is the patch release version number, which is incremented for bugfix and security releases. These releases will be 100% backwards-compatible with the previous patch release. The only exception is when a security or data loss issue can’t be fixed without breaking backwards-compatibility. If this happens, the release notes will provide detailed upgrade instructions.
Going from "mostly backwards compatible with the previous release. Exceptions to this rule will be listed" to "should be backwards compatible except for specific exceptions" is quite the stretch. There are no "specific exceptions": incompatibilities can be anywhere and you need to read the release notes to know where. In semver, a minor version increment is backwards-compatible, no exception, no ifs or buts.
If you want to shoehorn Django's release process into "semver", then act as if the product is called "Django 4". If the version is "Django v4.X.Y", then X is the major version number, Y is the minor version number, and there is no patch version. It should be versioned in Nix as "django4 vX.Y.0".
They clearly say “exceptions to this rule will be listed in the release notes” meaning that backwards compatibility is the rule. There’d be no exceptions if there was no rule hence I said they “should” be backwards compatible except for specific exceptions, which shall be noted in the release notes.
Not sure how this conversation is productive, but there's never been a X.Y release of django without noted backwards incompatible changes to my knowledge. Just imagine that djangos X.Y releases are semvers major releases, not much more to it than that.
It also very clearly states that there may be exceptions to the rule. So a package repository that assumes that Django follows semver is unequivocally doing the wrong thing, because Django is very clear that they don't (otherwise there would be no exceptions).
Doesn't this auto-upgrade behavior punch straight through the reproducibility Nix is supposed to be giving you? It's not exactly a functional build system if the results you get depend on when you download the dependencies.
(I mean, I guess you could say that time is an input to the function, but that seems to miss the point.)
If the Django package in nix were upgraded, all packages that use it would be tested.
And you wouldn't get the upgrade automatically, instead you would only get the upgrade when you change the version of Nixpkgs that you are using.
And if you don't like that, then you can use multiple versions of Nixpkgs at the same time. Your old package will stay exactly as it was. This of course cuts both ways, and means you get no security updates for it or any of its transitive dependencies.
Which part of this isn't reproducible or functional? If nixpkgs never changed, it wouldn't be a very good package repository.
Using Flakes, you can lock the version of nixpkgs (and any other repository) to a certain commit, and that commit is an input to the function. When you update that commit, of course the build changes, but I'd say that's pretty expected. If you don't upgrade it, you'll keep the prior versions.
Now this only works as long as you keep your package outside of the main nixpkgs repository; once you upstream it, you're locked into the versions of packages that are "currently" in nixpkgs at the same commit. Builds are still reproducible, because you select the commit you build, but your package might break if a dependency changes in an incompatible way. If that happens, there's a problem with either the definition of the application or the dependency. In the given case it sounds like there might be an issue with the package of the application, since it seems it doesn't lock down the precise version of Django that it needs.
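The locking described above is visible in the flake itself. A minimal sketch (the channel name is illustrative; the exact commit lands in flake.lock):

```nix
# flake.nix -- nixpkgs is an explicit input, locked to an exact
# commit in the accompanying flake.lock file.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.05";

  outputs = { self, nixpkgs }: {
    # Everything defined here is built from exactly the nixpkgs
    # commit recorded in flake.lock, until you run `nix flake update`.
  };
}
```

Updating the lock is an explicit, reviewable change, so "when you downloaded the dependencies" is itself an input to the build rather than ambient state.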
I mean, I get that, but that means that the reproducibility of my build depends on the whims of the nixpkgs maintainers, it's not a property guaranteed by the package manager.
You can however define inputs that are not the whole of nixpkgs. You would use something like this and you would pin it to a very exact version and hash of a package:
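For example (a hedged sketch: the rev is an exact tag or commit of your choosing, and the hash is a placeholder you'd fill in, e.g. via nix-prefetch or by copying it from the mismatch error):

```nix
# Pin a single dependency to an exact revision and content hash.
# A wrong hash fails the build instead of silently drifting.
src = fetchFromGitHub {
  owner = "django";
  repo = "django";
  rev = "4.2.1";
  hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
};
```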
The goal of a downstream Linux distribution is never to reproduce whatever builds you run on your own machine as an upstream developer. It's to produce a collection of installable software that meets various constraints and goals, like cohesion (can all be installed and managed uniformly), minimal size, easy/manageable security updates, integration (compatibility and so on). That can involve things like building the software against particular library versions mandated by downstream needs or even patching it. Some distros try hard to avoid patching upstream and some don't, and in all distros there may be cases where other priorities take precedence over the value of leaving upstream untouched.
In the case of Nixpkgs and Python, the community wants to maintain a collection of Python libraries that are all interoperable, and Python doesn't support vendorization well enough to allow multiple versions of the same library in a single Python process, which is one reason for preferring singular versions of most Python libraries in Nixpkgs. The other factor is likely just reducing the maintenance across Nixpkgs by maintaining as few redundant versions within the tree as possible.
If you want to control/determine the entire runtime your end users use, you have to do the packaging work required to ship them that runtime with some tooling that's capable of the reproducibility you desire. Python doesn't have a reproducible package manager, so your options are basically creating your own Nix package (probably as a flake.nix in your repo), Docker, and Flatpak.
That said, it's perfectly possible to include multiple minor releases of Django 4 in a single snapshot of the Nixpkgs tree, and maybe that should be done. Have you talked with the maintainers of your downstream package in Nixpkgs to let them know Django breaks things on minor releases, and so using different versions of Django 4 interchangeably is not tested or supported in your application?
I think there's a bit of confusion caused by equating Nix "derivations" with "packages" of traditional package managers.
Nix mainly concerns itself with derivations [1]. They're build recipes for creating binary artifacts that are meant to be consumed by the Nix daemon. The Nix daemon instantiates derivations by building the artifact and storing it to a store path under /nix/store. Store paths are unique to each derivation.
When people say Nix is reproducible, they mean that derivations are reproducible [2]. This is because anything that might cause the build to change is captured as inputs to the derivation. Every input is explicitly specified by the author of the derivation. This means that when a dependency gets updated, the resulting derivation and store path would change. The new derivation might fail to build, but the old one would still continue to build regardless of how much time has passed since it was first built. So if a latest package in Nixpkgs is broken, you can always go back to a known good commit to get a working derivation while waiting for the package maintainer to fix it [3].
Traditional package managers don't have a concept of a derivation. Instead, they have packages. Those packages have no reproducibility whatsoever. Even if they built successfully in the past, they might not build today. That's because a traditional package is only identified by its name and version, as opposed to a Nix derivation which is identified by its content (= the build recipe) [4]. Traditional package managers see two incompatible builds with the same name and version as the same package, replaceable with each other. Worse, most package managers don't require versions to be specified as part of dependencies. Whether a package builds or not is then dependent on the current state of the central package repository. Again, this isn't the case with Nix derivations.
[1]: Internally, Nix doesn't even have the concept of a package. A package is a concept that we humans use to group related derivations together.
[2]: To be clear, derivations aren't bit-by-bit reproducible. For example, CPU caches would be observable during builds because in general, process sandboxes don't prevent hardware information leakage. However, it's reproducible in a practical sense because people would have to go out of their way to make software builds dependent on things like CPU state. People might do that as a joke, but not for any serious reason.
[3]: Ideally, tests and reviews should catch any breakage but sometimes it happens. Hence the rolling release branch is marked "unstable." Fortunately, it's also easy to apply fixes locally before they're available in Nixpkgs because Nix makes it straightforward to create a custom derivation by extending existing ones.
[4]: Not to be confused with content-addressed derivations, which identify derivations by the resulting binary artifact.
For [2], there is a sandbox from Facebook that isolates tests (and builds) from CPU non-determinism. I have raised a ticket about it on Nix; really, it is just another derivation sandbox.
> It is definitely possible with Nix to use the precise versions of what's in your requirements.txt, but I'm not sure if the Nixpkgs maintainers would allow all that extra duplication upstream.
They do for end user applications, but not for Python libraries. The libraries in Nixpkgs are expected to be interoperable, which requires converging certain versions because otherwise transitive dependencies on varying library versions mean that libraries used together are subject to serious, mysterious bugs. But applications packaged in Nixpkgs can pull in an exact set of libraries of their own if that's what it takes for them to run reliably.
It sounds like the package is implemented improperly. If the input from your repo to the package is not targeting a specific commit, it should be.
Building from "latest" is really not how nix is ever meant to operate. In that case, when you update your requirements.txt, it is now out of sync with the package definition; the inputs _have_ changed and your guarantees are gone.
When your project repo is updated, that should never result in a change to what gets installed by nixpkgs until you also update the package to point at that commit and do any work necessary to fix breaking changes. Once you do that work, that version of your package picks up a guarantee to always be producible.
Like another comment mentioned, this is all much easier to accomplish with flakes as they have a lockfile that sits next to the flake, both of which reside in your repo and can be updated atomically with your releases instead of also needing to make a PR for nixpkgs.
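A minimal sketch of that layout (the branch name is illustrative); `flake.nix` declares the inputs and `nix flake update` rewrites the adjacent `flake.lock` with exact revisions:

```nix
# flake.nix -- inputs are pinned to exact commits in flake.lock,
# which lives next to this file and is committed to the repo.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.11";

  outputs = { self, nixpkgs }: {
    packages.x86_64-linux.default =
      nixpkgs.legacyPackages.x86_64-linux.hello;
  };
}
```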
I've actually been working on learning how to better package python with nix and found the historical information on python packaging infrastructure in this talk incredibly enlightening (I think this landed on HN a few days back): https://www.youtube.com/watch?v=ADSM4vR2EQ0
They don't need to use the same version of the Django package, but Python dependency pins are often either way too tight (and can easily be expanded) or outright missing, so they often get ignored.
Are you opposed to filing a bug in Nixpkgs for your application? Alternatively, are you willing to point to your application or its package in Nixpkgs so that someone else can do so?
It's not even really about static compilation. NixOS (and many Linux distros) include tons of dynamically linked C applications that just do a way, way better job of compatibility. Imagine if GNU grep were as fussy about only being built against 1 version of glibc as many Python libraries and applications seem to be about their dependencies.
What makes you think that the versions specified in the requirements.txt aren't reasonable ranges? All OP is saying is that if you're outside the version ranges in requirements.txt then you're outside the supported range. It's literally in the name of the file—requirements.
For Rust or Haskell, I point Nix to my version, lock, and toolchain files, and it uses the exact tools and versions that the default build uses, just built with Nix.
Are you sure about that? I haven't seen a node app built from source on nixpkgs yet. That includes Electron apps like Signal Desktop, which is a bit disappointing.
There is this article about trying to package jQuery on Guix:
Guix has several different npm importers (none of them merged), but it's debatable whether it is desirable to build npm packages from source when doing so creates thousands of barely useful packages.
You can package simple Python projects, but as soon as there are too many huge dependencies that use CPython and whatnot, it becomes impossible to generate the Nix derivation. I just use imperative python-venv + pip install for those.
It does. Nix can package everything properly. What depends on the language ecosystem in question is how much of this packaging can be automated.
There's plenty of python packaged in nixpkgs too. It doesn't mean that it isn't a dumpster fire disaster. Dealing with it has been trouble with every other distro I've used. It isn't just a nix problem. If anything I think the situation is improved.
Nix hasn't been a benefit when working with python for me, but again, python is the outlier. It has been a benefit for projects in other languages.
I guess the reason is because python packaging/tooling varies wildly between projects, and there are a lot of bindings.
BTW a colleague was setting up the python project on a non-nix machine, and also had problems with dependencies, and ultimately had to do some nasty workarounds (disabling deps/features). To me, it seems endemic.
> Of course we, like all other python projects, don't support using other dependency versions then the ones in the requirements.txt file. So when someone just uses a different minor version of django, stuff breaks
That sounds wrong. A Python package should not have a requirements.txt file at all. A requirements.txt file is for "freezing" and fully reproducing an environment (ie. in a virtualenv or docker container). This is useful for certain applications like deploying services or sharing notebooks etc. It is not for packages. A package should document its requirements via setup.py/pyproject.toml and do so in the loosest way possible. Django uses semver and Django apps don't generally need to pin to minor versions.
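The split being described, with illustrative package names and versions:

```text
# pyproject.toml (a package): declare requirements loosely
[project]
dependencies = ["Django>=4.2,<5.0"]

# requirements.txt (a deployed application): freeze exact versions
Django==4.2.7
asgiref==3.7.2
```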
Stuff like this is why people think Python packaging is worse than it really is.
The application is not distributed via PyPI, nor is it installed as a package, and thus it has no setup.py file.
> A requirements.txt file is for "freezing" and fully reproducing an environment (ie. in a virtualenv or docker container).
No, it's just for specifying which versions of packages should be installed by pip. There's no such concept of a lock file with pip. Poetry and the likes have lock files though.
Django doesn't use semver. It uses a Major.Feature.Patch release notation, not Major.Minor.Patch. Feature releases usually contain breaking changes, where SemVer minor releases never should.
With my ignorance of the python packaging ecosystem, I was always under the impression that requirements.txt was the version constraints, not the lock file.
I've occasionally encountered projects using Nix, and I've casually browsed the Nix and NixOS websites, but I still don't have a clear idea of what Nix is.
Is it a package manager? A build system? An operating system? A container platform? A sandbox? An automation tool?
Which widely used, existing software tools is it analogous to?
- Nix is a tool for building and installing software.
- Nix is a language for expressing how to build a package. Nix-the-tool reads expressions defined in Nix-the-language to know what to do. At the end of the day, this translates into normal commands that run in a sandboxed build environment.
- Nixpkgs is a monolithic repository of 80000+ packages, defined literally as one giant expression in the Nix language (this works fine because Nix is an extremely lazy language). This also includes lots of helpers and abstractions for building packages that can be handy in your own projects. It is possible to use Nix-the-tool without Nixpkgs, but nobody does.
- NixOS is a Linux distribution built on these foundations. Everything under /etc is built from Nix expressions; you cannot edit these files directly. Mostly NixOS is about building systemd unit files from Nix expressions - viewed through that lens it's not really all that exotic of an OS. NixOS has modules that make it very easy to configure and run lots of software.
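For a flavor of what that looks like, a minimal (illustrative) /etc/nixos/configuration.nix:

```nix
{ config, pkgs, ... }:
{
  # One declaration; NixOS generates the sshd_config and systemd unit.
  services.openssh.enable = true;

  # Packages available system-wide, straight from Nixpkgs.
  environment.systemPackages = [ pkgs.git pkgs.vim ];
}
```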
Yes. E.g. some guy's config with 6 different machines[1]: multiple desktops, laptops, servers, a Raspberry Pi, and a VPS. That's a rather advanced use of Nix for configuration, but definitely shows what it's capable of.
I wouldn't even say that's so advanced. I've got a similar scope config (not published, sorry). All built from source, with the resulting binaries pushed to less-trusted hosts. My cloud servers, router, wifi APs, Kodi boxes, etc, all running self-hosted self-built software with config kept in git.
Previously I ran Debian across my setup (with a homemade configuration templating/distribution tool) but it feels like Nix really bundles up the accidental complexity of installing/deploying most software in a contained way, much more than a traditional distro.
I'll be much happier when Nix gains full reproducibility and functionality like Guix's `challenge`. But even now it feels like one of the closest implementations to the Free Software dream.
I'm not exactly sure what you mean. NixOS doesn't have, like, a profile switcher to let you switch between work mode and play mode on the same PC. At least not out of the box.
But if you mean distinct computers, then that's just three distinct configs. And it's easy to factor out common bits and use it in all three configs, since you configure things using Nix-the-language.
And if you mean all three at the same time on one PC, that's exactly what I do with my home server.
> NixOS doesn't have, like, a profile switcher to let you switch between work mode and play mode on the same PC.
I thought that `sudo nixos-rebuild switch` was supposed to do exactly that: swap from whatever "state" your PC is in to the result of the Nix expression in "/etc/nixos/configuration.nix".
Nix is a distro that admits defeat against dependency management and entirely gives up on the idea of having system libraries. Instead, every application you want to run has to have all of its dependencies explicitly stated and provided via Nix-style configs for each program. Usually this means someone else has done all the packaging work for you. They hate it when people call it containerization, but it's effectively taking containerization to the extreme for every single bit of software on your system, with no ability to opt out of containers.
But if, say, you want to compile a random C program you found on the web, then it is up to you to set up the build environment and provide all the libraries and deps needed to compile and then run the program. Just running make, ./configure, or cmake . isn't going to do it, because those configuration setups won't have a system lib environment to check against. It seems like a really weird distro to choose as your desktop, but it makes fine sense in commercial enterprises where you're going to re-build the world for all your software anyway.
> They hate it when people call it containerization but it's effectively taking containerization to the extreme for every single bit of software on your system with no ability to not use containers.
It's not containerization - containerization means something very specific (user namespaces + chroot). It may attack some of the same problems, but it is not a container.
The sandbox that nix builds run within is more or less a container, however.
Well, sure, but setting up the build environment is pretty easy! You just specify the libraries as you normally would, and it works. The only difference is that those libraries are only available to that package, and not globally. The function `stdenv.mkDerivation` will actually build a standard ./configure && make style C project for you if you provide it some package source, you just have to copy out the build artifacts (with something like (cp project bin/project) and specify a list of libraries needed at build/runtime.
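A sketch of that (the project name and library are made up); `stdenv.mkDerivation`'s default phases already run `./configure && make && make install` when those exist:

```nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  name = "some-c-program";           # hypothetical project
  src = ./.;                         # the source tree next to this file
  buildInputs = [ pkgs.openssl ];    # visible only to this build
  # For a plain Makefile with no install target, copy artifacts out:
  installPhase = ''
    mkdir -p $out/bin
    cp project $out/bin/project
  '';
}
```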
Agree that the documentation for all of this could be a lot better and more discoverable. It's good once you get over the initial hump though.
Nix is a package manager, which uses the Nix programming language to describe dependencies and build steps. Like all package managers, it has components that could be called a "build system", but that's not its main focus.
> An operating system?
There's NixOS, which is a Linux distro built on Nix. But you can also use Nix under other distros.
> A container platform? A sandbox?
Because nix is based on an underlying immutable store of installed packages (and doesn't rely on global system state), it is trivial to spin up a shell environment in which specific combinations of package versions are available without affecting any other shell.
This property allows it to be used as a lightweight alternative to containers for what people usually use Docker for: setting up an environment with well-known package versions that are the dependencies of your project.
As a very simple example, I've written a simple setup for running a chosen PostgreSQL version inside a directory: https://code.more-magic.net/ppq/about/.
You could easily build this out by adding additional software, for example if you add Python and Django from Nixpkgs to this, you'd have a complete self-contained (or "sandboxed") dev environment.
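Along those lines, a hypothetical shell.nix for such a self-contained environment (the PostgreSQL version is illustrative):

```nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  packages = [
    pkgs.postgresql_15                                # database server
    (pkgs.python3.withPackages (ps: [ ps.django ]))   # Python + Django
  ];
}
```

Running `nix-shell` in the project directory drops you into a shell where exactly these tools are on PATH, without touching the host system.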
When you're done developing on the project, you simply remove the repo and the entire environment is dropped too.
At work we use something like the above, with Java and Clojure and a whole bunch of other software, all completely self-contained. I never had to install any of it globally and didn't have to mess around with $JAVA_HOME etc.
Technically, Nix without Nixpkgs would be pretty useless. Nixpkgs is much more than just package derivations; it contains a whole standard library for the language.
We already have names like NixOS and nixpkgs at the top, and individual commands such as nix-env or nix-shell at a lower layer. Perhaps it would be beneficial to adopt official names for other components currently named "nix", such as nixlang (when referring to just the language).
It's got elements of all of those things. It's basically a toolchain for building an 'environment', which is roughly analogous to a Linux distribution.
It starts with a language that lets you declare the desired state of your environment, including which packages are present and the configuration of those packages. The packages are installed and managed through the Nix package manager. The end result is an 'environment' that reflects the desired state you expressed. That environment can be a Docker image, an ISO, or it could be a running system you're booted into (in the case of NixOS). Or it can even be an ephemeral environment that exists on the filesystem of whatever distribution you're using (in the case of nix-shell). Each of these options offers different levels of isolation and reproducibility, depending on the requirements of your project or system.
There's lots of clever components that make something like this possible, and they're all wrapped up in the Nix umbrella.
It's maybe best described as a software ecosystem, which has the unfortunate problem that the programming language it uses and the package/deploy tools written in that programming language share the name "Nix". Nix the language (basically a customized version of Standard ML) is used to write package and OS definitions that Nix the tool uses to actually build and deploy software, containers, and operating systems. The "ecosystem" bit comes from the fact that the language is used to write the package/deploy definitions, and a large community of users have assembled a rather good and up-to-date set of package definitions, and these have been bundled together as an operating system, and these all share the rubric "Nix".
It confuses a lot of people, because Nix breaks down the boundaries of what we traditionally see as separate tools. If you have a sufficiently powerful language to describe how to build things, you can build packages, container images, operating systems, etc. with it. Probably the closest equivalent outside Nix/Guix is Bazel or Buck (not the same, but they have many overlapping goals).
> Which widely used, existing software tools is it analogous to?
It's kinda like Docker, but without the images/containers.
Docker solves two problems: distributing the same program everywhere, and running those programs using containers.
Nix solves the former problem. But, since it doesn't use containers, you can run the Nix packages without needing to worry about mounting into VMs or containers.
Afaiu it's trying to be all of those. Nix is at the core; you can get packages from Nixpkgs / write your own definitions, use it for OS configuration via NixOS, use its built-in helpers to run your containers, build a whole CI and cache it with the help of Hydra, etc.
The problem the author hit with the Raspberry Pi is that the ARM image is meant for a standard environment (e.g. UEFI), like VMs.
E.g. it'll boot on Fusion or KVM because they provide UEFI, a well-known device tree, and don't require any firmware at that stage.
Pis (and many such ARM boards) don't have that, so they won't be bootable. But there are Pi images built on Hydra; if one uses those, it boots right away.
I'm working on a follow-up post specifically about NixOS on the Pi 4, but there are several gotchas to the process. The biggest issue I've run into is that the latest versions of the NixOS SD card images don't work on the Pi 4. You can boot to them, but when you run nixos-install, they fail with a message about hardware.raspberry-pi."4".fkms-3d.enable.
The link you shared declares itself to be out of date in several places, so it's not super helpful as a resource for newcomers.
That is the only part I wanted to draw attention to: ARM boot is a peculiar beast and very surprising when you don't know about it, especially when you're used to PC (BIOS or UEFI) booting.
> but when you run nixos-install,
If one intends to run from the SD card that was just booted, then the process should be changing configuration.nix to one's liking and running nixos-rebuild switch (which is a testament to the power of Nix: one can pivot to an entirely new "install" on the spot).
Or maybe you attempted to do an install on another block device, e.g to boot straight from USB?
> they fail with a message about hardware.raspberry-pi."4".fkms-3d.enable.
Does that error also appear with nixos-rebuild switch? If so, then pi4 support is borked in nixos. Otherwise, assuming the install to USB process, maybe nixos-generate-config was run and produced a non-functional hardware-configuration.nix? (which would be a bug too)
(it's been a while since I tried nixos on Pis, last time it did work OOTB for me on 3 and 4, save for a kernel / device tree bug for 3 that kills the virtual console)
> outdated
I agree, all the spread-out bits are confusing. The ARM boot situation is a common enough issue that I would expect it to be part of the manual.
This comes down to ideological reasons. The engineer that is/was leading the embedded integration subproject feels like the platform support components (filesystem partitioning, bootloader, devicetree config, etc.) of installing a distribution on the hardware should not be a concern of the distribution itself[1].
I don't necessarily agree on all points as most distributions approach this quite differently, but it's an interesting premise. There are other projects that attempt to attack this problem but I'm unsure if any have gained a critical mass.
Thanks for that link, very interesting to have first party insights!
I seem to have gathered as much from what has been happening at the code level, but it's nice to get the insider version instead of whatever I think I have vaguely understood.
You can see similar things happening with Asahi Linux, pushing stuff to m1n1+uboot to have a uniform boot interface when reaching the kernel, plus (ultimately) using a mainline kernel.
I'm a huge fan of Nix, and I'm glad to have stuck with the oftentimes daunting process of getting into it. I have to agree with the author's point regarding the documentation, though. That's not a fault of the people who actually did sit down and document their process, or distill their learning path into a tutorial; I myself understand Nix well enough to use it, but not well enough to explain it without confusing people even more. Still, it's a pity that finding good, exhaustive documentation for Nix is really difficult.
Another pitfall is the usage of flakes, which on the one hand are (imho) great, recommended everywhere, and oftentimes even assumed implicitly, but on the other hand are still experimental. I myself started using flakes not because of the promised benefits (although I did realize the benefits later on) but just because multiple tutorials I'd read gave me the feeling that flakes were the de facto way to do Nix from now on - and mind you, that was in late 2020.
I'm using Nix for setting up my work machine (Mac via nix-darwin), my private machines (NixOS), selfhosting that's not in my k8s (also NixOS), and some private projects (dependencies, ci, and containers) - where the issues I've described don't really bother me, but for professional projects, where I'd have to convince and/or instruct colleagues, Nix feels a bit too rough around the edges for me right now.
Edit: I also think it's important to distinguish between Nix the technology (fantastic) and Nix the language (meh) - which is why I'm still itching to try out Guix[1], which is similar to Nix (the technology) in a lot of ways while using Guile[2] as a language.
I'm curious if you have any pointers for whole Mac config with nix-darwin. This is something I've just started looking at and at the moment don't have much more than a nix-shell with some nice-to-haves. Any tips / tricks / guides are greatly appreciated.
not sure how approachable it might be, but i like to think the documentation isn't too bad for my personal stuff.
definitely nothing so well thought out as a tutorial, but i try to describe the structure & implementation of my approach + cross-link to relevant tools that i incorporate.
As someone who has nix-darwin on their daily driver machine I can confidently say that I spent very little time configuring it and my config is almost in full shared between both NixOS and nix-darwin
Upfront payment figuring out Nix is amortized, and is less than a lifetime of payments of dealing with flaky machines, assuming you have a decent amount of lifespan left.
My experiences with Saltstack and Ansible are inverted - I realise the comparison was only a casual aside in TFA, but the 'Ansible executed the idea better than Saltstack' comment invites so many questions, especially in the context of an article that dwells on how unpleasant Ansible is.
Ansible certainly defaults to slow, and I never got into the weeds for performance tuning it, but Saltstack felt fast, especially the example of 'install package foo' which he anticipated takes 15 minutes to run against one of his VMs using Ansible. I agree, that sounds unpleasant.
Others have noted the slight confusion about state (and where that state is or should be maintained), and certainly writing idempotent salt or ansible recipes takes some thought, just as writing performant recipes does. The 'have to rewrite everything' whenever Ansible releases a new feature doesn't sound right - perhaps I misunderstand the problem described there.
Author mentions apt, but ultimately sounds like they wanted something more container-y than a fat VM running a full GNU/Linux distro with managed packages + config files. In that light, the mention of Hashicorp - specifically Terraform & Nomad - felt tantalisingly prescient.
> When I specified packages to install, I didn’t specify an integrity hash, let alone a version number. If I ran the same Nix configuration a year from now, I assume I’d get a different system because it would install different versions of the vim and curl packages I specified.
That would be from the Nixpkgs [0] instance obtained from a Nix channel [1].
Nix flakes [2] provide an alternative way to specify inputs which pins them in a `flake.lock`.
This allows things like Nix expression caching due to hermetic evaluation (as opposed to just builds being hermetic).
This is the main issue with Nix and other niche distributions: they are new operating systems, with their own file layouts, package managers, and even syscall variations.
The Linux ecosystem is awfully fragmented. If you ever want to use any software outside of the not-very-well-walled garden provided by distribution authors, you have to hope your operating system (that is, distro) is sufficiently similar to one of the few distros the software authors have built and QAed their software on.
> Different distributions make different choices, and therefore they are closely related operating systems, but not a single OS. Even Linux syscall interface subtly changes from distribution to distribution, as they pick and choose options to build their kernels.
> Every niche Linux distribution that does not follow the interface of a larger one is a unique OS, closely related but not compatible with other Linux OSes. This means the applications have to be ported.
> Application developers have to choose what targets their applications support. With Linux distributions being just a blip on the graph of operating systems popularity, the application developers may not invest significant amount of resources into porting and testing.
Basically, NixOS = zero QA effort from application developer -> nothing works.
That's just not true. Nixpkgs has the largest package database of any distribution. It contains many closed-source applications that work flawlessly. Hell, I can even run Vivado and Quartus, two of the worst of the worst proprietary pieces of software. I have also packaged proprietary libraries for hardware we use at my company with no issues.
> I don’t care about anything that’s not packaged by the distribution
> This is totally fine. Just note that you are using a niche OS with a limited set of applications available.
Yes, Nixpkgs is large, but it does not package every piece of software out there, and you are either at the mercy of distribution maintainers to keep it packaged, or you do it yourself. There is no way to disintermediate you and the application developers.
The point is that with NixOS you are in a much better boat than with all other distributions, because the chances that an application works on NixOS compared to $insert_distribution_here are much higher. I do agree that Windows/OSX are superior here.
Than all other _niche_ distributions. But that's not a very high baseline. I'd say it's abysmally low baseline.
When I was using NixOS I regularly saw software that was tested on Ubuntu and Fedora and had installation instructions for those two distros, but was missing from Nixpkgs (because it was experimental, new, or very obscure), and didn't work out of the box with NixOS's non-FHS layout.
Almost -- it's not quite as bad as you'd expect, because nixpkgs has packages for quite a lot of closed-source software, with whatever hacks are required to make it work.
I tried NixOS and dropped it after spending several hours trying to make an experimental FPGA compiler work. The instructions were reasonable, but they expected to have a classic Linux distro: Fedora or Debian-like.
I don't remember the exact set of errors (there was more than one), but NixOS+FHS failed so many times I gave up, and wrote that blog post.
This is the essence: you can't expect application developers to target your niche distro.
Nevertheless, the community of NixOS developers is many magnitudes smaller than the number of developers churning out new software. Hence, once you get off the golden path of widely (or somewhat widely) used software, the probability of a random thing working without additional elbow grease from whoever tries to use it is indistinguishable from 0, due to NixOS's dialect of Linux not being perfectly compatible with other dialects.
At the very least, documentation for the random thing will never tell you what to do under NixOS, and documentation is very much the part of the software product.
Depending on what you are patching, it will invalidate a great part of the cache, and you will be looking at very long build times for everything in your system.
If that dependency is deep in your dep tree and is statically linked somewhere, then there is no way to prevent that.
If it's only dynamically linked, then yeah, it might happen that you need a huge recompile (but that is not that big of a problem nowadays in my experience - Gentoo used to compile far longer, in my subjective experience). Also note: Nix will soon get content-based hashing, which may solve this problem.
I feel a little lost on nixos. Been using it for a while now on my main personal desktop and it's fine, but whenever I want to do something like running a python project (ml based ones for example), or compile software from source, I'm lost because I don't know what I'm doing. Most projects target Ubuntu, so I feel you need to know a lot about how nix really works to get them compiling, which is often not what I'm in the mood for tinkering with when I just want to test something.
So in the end I end up using distrobox with Ubuntu which works surprisingly well, but feels very hacky as I'm supposed to try and use nix. The way I rationalize this is that I'll get rid of distrobox slowly over time as I learn how it works.
I've used Nix for 5 years now and use it heavily in production, and in my opinion, just do what works for you. If you try and use Nix 100% "correctly" then you'll end up like the countless other people who tried Nix and failed.
Especially for toying around in dev environments, be pragmatic and take advantage of its amazing strengths, but if distrobox lets you enjoy using NixOS and speed up your workflow, so be it.
Heck I'm spinning up some services in production just now and I'm reaching for Docker Compose as that's how the vendor officially supports deploying their software. Turns out NixOS can still bring some benefits to a Docker Compose workflow, deploying it via a systemd service, config managed in Nix, it's not as nice as building the service entirely in a Nix derivation and deploying it natively, but it's still better than without NixOS.
How are you managing docker compose with nix? Individual docker containers are easy to manage with `oci-containers`, but there's no obvious way for compose. Unless of course you're just writing the systemd configs in nix.
Yeah I normally use oci-containers but for this workload the only supported method of installation is a `docker-compose.yaml` file.
I've just written a systemd service that does `docker compose pull` and `docker compose up -d --remove-orphans` with the compose file being written to the Nix store, works well.
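A sketch of that kind of unit (the service name, paths, and the choice of the `docker-compose` binary are illustrative):

```nix
{ config, pkgs, ... }:
let
  # Copied into the Nix store at build time, so the running service
  # always sees the compose file committed with this config.
  composeFile = ./docker-compose.yaml;
in {
  systemd.services.vendor-app = {
    wantedBy = [ "multi-user.target" ];
    after = [ "docker.service" ];
    requires = [ "docker.service" ];
    script = ''
      ${pkgs.docker-compose}/bin/docker-compose -f ${composeFile} pull
      ${pkgs.docker-compose}/bin/docker-compose -f ${composeFile} up --remove-orphans
    '';
  };
}
```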
So give up all my baseline hardened VM configurations and go back to patchy Ansible scripts that leave the OS in a different state every time? No thanks
You are supposed to use shell.nix: list the packages that the project's README lists, and then you can follow the normal README instructions. Unless you really care about doing it the "pure Nix way" (e.g. writing a mkDerivation, which is a waste of time in a dev env).
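I.e., something like this (the packages are made up; put in whatever the README asks for):

```nix
# shell.nix -- list the build dependencies from the project's README,
# then follow its normal instructions inside `nix-shell`.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  packages = [ pkgs.cmake pkgs.pkg-config pkgs.boost ];
}
```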
That's why I like documentation with the comment section. Even if something's missing, there is a good chance someone asked about it. Also comments section provides an instant feedback to people writing posts to that documentation.
nix makes deployment feel bottom-up, not top-down. you understand how a system is constructed locally before you (optionally, if it's in your job description) graduate to doing devops stuff with it. that was the singular thing that hooked me; the functional reproducible stateless referentially-transparent cacheable stuff was just what kept me on board.
I'm not sure. Well, I mean, there are weird folks out there who may start doing large-scale things without understanding what they're building upon (Kubernetes is not exactly an OS, but is a notorious example), but they could make the same mess with any technology. They're just unlikely to pick Nix, NixOS or NixOps (or any alternative to it), because mainstream tutorials won't cover it [yet].
The deployment and learning process is not that different from, say, Arch Linux, or even Debian. You still learn the higher-level interactions (pacman/apt/nixos-rebuild), divert to individual programs, then dive in (.ebuild/dpkg-buildpackage/nix), then learn even finer details as you get hit with nuances. NixOS is absolutely not LFS, where you really go bottom-up. And nixpkgs is covering more and more every day.
Just an anecdotal example: I started with the run-of-the-mill tutorial approach, setting up NixOS on a single machine, and ran that for a while. Then I realized my configs were a non-DRY mess and I wanted to manage my systems in an organized fashion, so I spent a significant amount of time unifying the configurations so I could manage and deploy them with deploy-rs (thinking of switching to colmena now, but that's not relevant). Only at that point did I realize I was missing some fundamental bits, like signing (aka why nix-copy may fail, and how to deal with that without the trusted-users "see-no-evil" hack) or the exact operation of substituters (aka, essentially, binary caches). Because with a local nixos-rebuild it's all sort of hidden, and I never had any issues.
What makes Nix/NixOS different are its fundamental principles, not how one approaches it. Starting somewhat more low-level than with "just works" tools is just a popularity issue: more rough edges, so one needs to learn how to polish them.
well, an anecdote for an anecdote: reading the definitions of nixos modules became a matter of course for me when i started doing strange stuff that didn't quite fit into the authors' expectations, and those module definitions contain the implementation, i.e. exactly where files are placed and what's written to them. i learned things about how a distro is put together that i never had any reason to look into in the past.
but i wasn't referring to how a system is put together at that level; i was referring to how a system is put together with the tool at hand, i.e. nix. and yes, you're right, that is top-down from the latter to the former
> What makes Nix/NixOS different are its fundamental principles, not how one approaches it.
you'll have no disagreement from me here; i was referring to what qualitatively hooked me, not making any statements about fundamental design
Ah, my apologies, I think I've misunderstood you. Yes, absolutely, I agree - reading the definitions of those modules is how one learns Nix. The documentation is okay, but it covers neither the essential theory, like patterns, practices and idiomatic approaches, nor implementation specifics (how some particular config file or startup helper script is generated).
Although, to be fair - reading source code of how things work is not really specific to Nix. It's just that Nix doesn't stop at packaging and also provides declarative configuration, so there's much more to read in nixpkgs than inside some .deb source (which typically stops at providing a generic systemd unit or rc script). But if someone would be using some other declarative configuration tooling - I'm sure they'll also do a lot of deep diving into the definitions, reading the source code. It's just that Nix is outstanding in this regard - I suppose I can, but I surely wouldn't want to use Terraform, Chef or Ansible for what it does.
nixos-rebuild can ssh to remote machines in all possible ways, so why colmena? If you need cloud infra, you can use terranix; get the base image from nixos-generators; reconfigure with a remote nixos-rebuild.
This is definitely an option. But isn't colmena just a nice wrapper on top of the same things nixos-rebuild uses (nix-build, nix-copy, etc) plus some nice-to-haves for easier eval REPL?
I mean, yes, I suppose I can use `nixos-rebuild --fast --build-host foo --target-host foo --flake ".#foo" switch` and maybe even wrap that in the flake itself (`nix run .#deploy-foo`), but why should I reinvent this wheel if I can just `colmena apply --on foo` and let it do what I mean?
I'll definitely consider nixos-rebuild if colmena doesn't support something I want. Just like I'm currently considering trying out colmena because deploy-rs can't do `activate test` and messes up `/boot/loader/loader.conf`, which I don't fancy, as I eventually want a one-shot "reboot into the new generation, and if something fails too badly, the watchdog will eventually panic (or I'll ask someone to cycle the power) and reboot back to a good generation" ultimate magic rollback. The worst that could happen is that I'll learn a new tool (aka learn what it can't do for me) and won't use it.
No cloud infra here, all bare metal and one tiny auxiliary VPS.
>I see words like “flakes” and “derivations,” and I currently don’t know what they mean.
>So far, I don’t get how it’s deterministic.
To make nix deterministic you can specify a hash in non-flakes (or a git rev) for your dependencies, but flakes make this easier. When you "run" a flake (be it nix build, nix shell, nix develop), nix pulls the latest (if no explicit rev given in the flake.nix already) version of whatever is specified in flake.nix that it can find, and creates a `flake.lock` file that specifies the exact version that was used. This file is very similar to cargo's Cargo.lock, and specifies the exact version that was captured by ref/hash. The next time nix is "run", it uses the lock file to get the exact same version as it had previously.
One can develop a flake.nix locally, install it, check that everything works, and, if needed, tweak the refs in the lock file until everything works. When this is done you can move the nix and lock files to another machine and get the exact same build there (with the exception of architecture differences).
Because you can put flake.nix and flake.lock inside of git, you can also share the exact same dependencies with other people using a repository. Whenever I see a repository using these I know that building will be a breeze because I don't have to do any dependency hunting.
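A minimal flake showing this flow (the nixpkgs branch and the `hello` package are just illustrative choices):

```nix
{
  description = "Example dev flake";

  # Input pinned to a branch; the exact revision gets recorded in flake.lock
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.05";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.hello ];
      };
    };
}
```

The first `nix develop` writes a flake.lock pinning nixpkgs to an exact git revision and content hash; committing both files means everyone who clones the repo resolves exactly the same inputs.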
Seeing a tweet like the one from Mitchell makes me want to try Nix. Or at least I want to want to try Nix. Then I read the comments here and am reminded that no one can even succinctly explain what Nix is. I've read dozens of comments here and I still don't have a clear idea!
I suspect you may take other technology you use for granted, and have forgotten the inherent subtlety you had to work through to attain the working understanding you have today. Consider the confusion you might have as an aspiring developer, on your first day cracking open a book on $INSERT_LANG_HERE, when someone mentions Docker.
Imagine your confusion:
"What is docker? It builds things? So, like, a "build system"? But it invokes something called make, or cmake, or... well, aren't those build systems? If that isn't confusing, I don't know what is. Oh, you say it's software that runs on Linux -- what's that? So docker is an OS? No? It runs on an OS? And you're telling me it doesn't just build stuff, it also can coordinate the execution of stuff? Oh, you're telling me that's docker-compose -- that isn't docker itself? I mean, it had 'docker' in the name. Oh. So separate project to 'docker the container image making thingy', but can be used with that 'docker'. Good God this is confusing."
Much like Docker and its ecosystem as a whole -- along with all the enabling/leveraged tech underlying it, like chroots, user/pid/network/etc namespaces, union filesystems, seccomp, etc -- there is some inherent complexity in Nix (and its broader ecosystem of tools).
Just like docker has docker, docker-compose, docker images, stateful docker containers (is it running? dead? what subtree of my filesystem is mounted where in the container? etc), a whole syntax for "docker files", etc, we have multiple things that come together to make "the whole of Nix" what it is.
As a whole, for Nix we have (non-exhaustive, but hopefully broad enough to help paint a picture):
- Nix the language (analogous to Dockerfile syntax), which is used by
- Nix the package builder/manager (analogous to the `docker` binary)
- Nixpkgs, which is a collection of packages (a bit of a stretch if you take it too literally, but analogous to a collection of Dockerfiles, each for a different software)
- NixOS, an operating system that leverages the packages declared in Nixpkgs.
That hopefully explains the "WHAT", but says nothing of "WHY".
So to touch on the "WHY" a bit:
- Dockerfiles are not reproducible. What builds today may fail tomorrow, or next month, or on odd numbered days, or whatever.
- Nix packages are reproducible. If it builds today, it will build on any machine, any time. Guaranteed.
- On distributions like Debian/Ubuntu/etc (essentially anything that isn't NixOS or inspired thereby, like Guix), when you install a package and something goes wrong, your system can end up in an indeterminate state that requires human intervention to manually sort out how to unbreak things. Just google (or recall the last time it happened to you -- and if it hasn't happened to you, it eventually will) people desperately asking for help to unbreak their system after, say, Ubuntu's apt (or whatever) gets screwy and says it can't install/uninstall anything else because $INSERT_CRYPTIC_APT_BUZZWORDS_HERE. Oh, and add to that the compounding effects of the fact that any package can include raw bash commands as post-(un)install steps -- that's just asking for packages to fail to (un)install cleanly depending on what else the user does or doesn't have installed at that point in time.
- On NixOS (and similar) the system state is guaranteed to either update fully or not at all -- as a consequence of its design, none of the chaos mentioned can happen. It's not that it's unlikely; I literally mean it simply can't happen. System updates are guaranteed all or nothing. Also, if you find out that some configuration change isn't what you wanted, you can roll back in an instant (and that's not hyperbole -- rolling back entails no slow, error-prone file copying/writing/etc, literally just one symlink change, and "poof", you're on the previous version).
In a nutshell, from top to bottom of the stack, the selling point is: imagine a world where "oops -- that failed in production, but... well, it worked on my machine, I promise!" simply can't happen. Imagine the time savings, and the reduction in anxiety if you knew that, if you could successfully build and/or deploy a particular piece of software once locally/in-staging/etc, you have full confidence that you can repeat that on any machine at any time.
Admittedly, if you enjoy those struggles with conventional non-Nix(OS) setups, there isn't much value to Nix.
Thanks, appreciate the thorough reply. This is very helpful. I need to think through this some more, but your example in the follow-up (where some dependency gets nuked) helps concretize this. If I can get even greater reproducibility than Docker, and it's faster, that's hugely appealing to me.
And as a pointed amendment here, here's a non-Nix experience I've had, which everyone eventually has:
My team discovers a major issue in a recent release. Conceptually, we need to rollback and we need to do so fast. However, we can't just deploy the same package/docker-image/whatever from the last release (perhaps because of a change in database schema, or whatever) -- we need the previous version plus some minor patches.
So I check out that previous version from source control.
I do `docker build`, and see a screen full of errors.
Huh. That worked when this ran in CI a week or two ago.
Oh, due to some petty political issues the devs of some third-party library (that is deep within our tree of transitive dependencies) have decided to nuke every posted version to pypi/rubygems/whatever and deleted the repo from GitHub.
Yaaaay. Okay. Fuck.
Would have been nice if we had a local cache/mirror of that, but there's nothing about docker that guarantees you'll have a copy like that (Nix does provide this).
Gotta push through the panic and weigh options:
Do we try to replace every dependency that uses this lib?
Holy smokes that (transitively) touches a ton of our code base. Not impossible to do, but almost impossible to consider what the ETA will be for that, and remember that we're trying to put out an active fire here.
So... we could see if anyone online has a fork of the repo...
But then how do we know it hasn't been tampered with? It's not like pip/npm/rubygems has some content-based hash we can use to assert that this library's source tree is as expected, and it's not like Docker tracks/asserts such a content-based hash (Nix does track this).
Cue an hour or two of immense stress and customers being very unhappy. Oh, and there's nothing to say that this won't happen again -- after all, docker does nothing to help with and/or enforce determinism/reproducibility of builds. I mean, it does guarantee runtime reproducibility in that multiple machines can be running precisely the same image, but docker does not guarantee that you can reproduce the build of a given Dockerfile. These two different types of reproducibility should not be conflated.
Nix is specifically designed to prevent these types of scenarios.
Your mind must be pure to understand Nix. Work on your purity. Just kidding. You might explain why you want to try nix? That would help customize an explanation for you, because there are levels. Umm... here is my attempt. Sorry.
Nix is a programming language plus utilities that are useful to define and work with software packages in a reproducible way (https://github.com/nixos/nix/).
Each package is called a "derivation", which is a function that takes inputs and produces an output. The inputs are everything that is needed to make the output. It is "purely functional" package management - for the same input arguments, the same output will be produced. Nix is really fast because each derivation is hashed and cached, and the language is lazily evaluated.
Builds are "hermetic", meaning only the inputs specified in the derivation are available at build time. Contrast this to some packaging systems, where the build is done against some staging area where packages get installed as they are built and the output can depend on the non-deterministic order that packages are built.
Nixpkgs (https://github.com/nixos/nixpkgs/) is a large collection of recipes for existing software. It contains both rules to build software as well as "modules" to configure it or extend it. NixOS the linux distribution is also part of nixpkgs. There are lots of design patterns here and it can go pretty deep. There are also tons of hacks and patches and workarounds to make software conform to the way nix works. Nixpkgs also has a lot of useful library modules built in.
Nix is the Latin word for snow. Nix "flakes" are a way to combine multiple inputs as well as pin the versions of inputs. Kind of like pipenv/requirements.txt or Cargo.lock or yarn.lock, but for anything.
The output of derivations go in the "nix store" which is a path like /nix/store/<hash>/, so all sorts of software can co-exist (think multiple incompatible versions of the same library) and can be referenced in a fixed way. Usually you will end up with an output that is mostly symlinks to other /nix/store/ paths.
Nix can make practically any combination of software you can cobble together trivially rebuildable/reproducible. You can write some nix code that will produce a VM image with test scripts, as well as a script to launch the VM with a patched version of qemu and run those tests. You can have all your dotfiles/configuration in code, with nix installed just for your user on top of Ubuntu. You can generate a Raspberry Pi SD card image from a short nix source file and a single command, and then 6 months later change a single line and regenerate it without worrying it might be broken.
You can achieve a lot of that stuff with Yocto or Ansible or a Dockerfile and scripts, but it would be slower than nix and more fragile.
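To make the "pure function" framing above concrete, here's a toy derivation sketch (the name and phases are illustrative, not taken from any real package); building it with the same inputs always lands at the same /nix/store path:

```nix
# Evaluate with: nix-build this-file.nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  pname = "hello-note";
  version = "1.0";

  # No source tarball; we generate the output directly
  dontUnpack = true;

  # Only tools/files declared as inputs exist in the build sandbox
  buildPhase = ''
    echo "hello from a hermetic build" > note.txt
  '';

  installPhase = ''
    mkdir -p $out
    cp note.txt $out/
  '';
}
```

The result ends up somewhere like `/nix/store/<hash>-hello-note-1.0/`, where the hash covers every input, so two machines evaluating the same expression agree on the path and can share the cached output.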
I don't have a good guide. I had to just start getting my hands dirty and piece it together.
My strategies:
- keep the documentation open
- install nixos in a vm
- search github for "configuration.nix" to find fully worked examples for nixos. I just cloned anything that looked useful[1] and would grep through it
- read the source code for nixpkgs
- search google and search https://discourse.nixos.org/ when you have an idea of what you want to do to find other people discussing it
It took me a couple days to have a very basic system running and a month to port most of my setup into nix (running some containers, overriding system packages, defining my own packages, running unmodified binaries using nix-ld).
I still use non-nix containers for a few services that I don't want to port over (but if I had known nix when I started I definitely would've used it).
Every step of the way was a huge struggle, but the tools I've learned stay useful, so I hope it's a worthwhile investment. The extra Nix friction is still sometimes really frustrating, but I can't help but think Nix or something like it is inevitably the future of software.
Maybe I’m naive but I’ve just never found various “states” to be desired in config management. It’s always binary: either in the “right” state or a bad one!
If you have drift in actual state using Ansible you have to account for all eventualities and you simply can’t declare “make my system(s) look like this”.
I’m talking “desired state” here and how to fulfill it - not “various states to switch between”.
With nix(os) I feel you treat a machine more like an appliance!
> With Nix, changes are atomic. Nix either gets your system into the desired state or it rolls back to the state before you tried changing the configuration.
Except the "activation script" is an imperative sequence of steps which can fail or hang. Atomicity really exists only if you `nixos-rebuild boot && systemctl reboot`, and there rollback means picking the previous entry from the boot menu.
The problem Nix wants to solve is a valid one. But there are better alternatives imo, such as the newer distros based on rpm-ostree. You can do atomic upgrades, and easily rollback to a previous known good state if needed. No changes are allowed to a live system. And best of all, it's practically the same in terms of management, there's hardly any learning curve.
I think nix falls under "second best at everything" in this regard.
If there's a single thing Nix solves, it's being able to declare packages with a pure/functional language.
This ends up enabling all sorts of useful things which relate to packages. e.g. being able to install multiple versions of some package without these overwriting each other. e.g. build container images, declare development environments, run software without installing it, etc.
I think you can get 80% of what Nix does in many use cases without paying a lot of the effort. -- e.g. `asdf` is a tool which allows for installing project-local dependencies of programs.
Sounds interesting. Is rpm-ostree config declarative? Can it install multiple versions of packages and libs simultaneously? Is adding custom packages to your setup a matter of a few lines of additional config, or must you learn RPM? Can you manage home directory config as well?
There's the treefile [1] for declarative config, no need to learn RPM, just add package names and any extra config. As for managing home and multiple versions simultaneously, I think those are non-goals for this tool.
I meant creating an entirely new package from scratch.
Regarding non-goals - I think that exposes the fundamental difference. rpm-ostree isn't "better". It's trying to solve different problems. The use case you described is a very small part of what Nix makes possible. Nix isn't just trying to fix or improve on existing systems - it presents a fundamentally new abstraction that can be used for many purposes. Yes, you can configure and snapshot a list of packages, but that's a tiny part of it. Nix also allows you to create any environment from scratch, isolated from other environments on your machine, e.g. for CI or development, or for running some obscure python repo.
Yes you're right, better wasn't a good choice of words. My comment was in the context of replacing Ansible with Nix for making a reproducible system, as per the linked article. I wanted to point out that alternative which will be much more familiar to most people, who perhaps aren't ready to jump to an entirely new paradigm, but still want a declarative config. But undoubtedly Nix is much more powerful. I myself have been meaning to try out Nix on top of my rpm-ostree system, for dev environments.
Well, it sounds to be somewhat more limited than how Nix manages services. The beauty of Nix is the ability to stay DRY and declare things only once, then refer to them where you need it.
For example, most of my services that need to bind to a specific network interface, but whose config requires an IP address, don't hardcode anything; they're all along the lines of `services.foo.listen = head config.networking.interfaces.vpn.ipv4.addresses`. Should I want to renumber my network, it'll be relatively painless. Same with user IDs, passwords, paths, etc - they're all references rather than copies.
Of course, this can be done with an external template engine, but I like how Nix integrates all those aspects in a convenient package.
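A sketch of that reference-not-copy pattern (the interface name and address are made up, and I'm using the Prometheus module's `listenAddress` option as the example consumer):

```nix
{ config, lib, ... }:
let
  # Pull the first address assigned to the (hypothetical) vpn interface.
  # NixOS stores addresses as { address, prefixLength } attrsets.
  vpnAddr =
    (lib.head config.networking.interfaces.vpn.ipv4.addresses).address;
in
{
  networking.interfaces.vpn.ipv4.addresses = [
    { address = "10.0.0.1"; prefixLength = 24; }
  ];

  # Renumbering the VPN later only touches the block above;
  # everything that references vpnAddr follows automatically.
  services.prometheus.listenAddress = vpnAddr;
}
```

Because the service config is an expression over `config` rather than a copied string, there is exactly one place where the address is defined.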
easiest way to get up and running w/ value in the nix ecosystem is https://devenv.sh, and as time goes on use escape hatches to utilise more of nix, a la https://nix.dev
Disagree. The larger selling point is Nix's reproducible builds. Getting a reproducible developer environment is a start, but everything inside that "shell" is then mutable. Hiding Nix behind YAML obscures a tool that can make the entire build stateless -- and that YAML can often be a stopping point where folks don't go 'deeper' to unlock the reproducible-build part.
There are things it does that are very helpful, like enabling a PostgreSQL server without much setup, but other things like forcing certain language toolsets goes a bit too far—especially when those tools are already in Nixpkgs and setting up a base devShell provisioned with a few console tools is one of the easiest things to do with Nix the language already.
i agree with some of your points but differ as follows:
getting an entire team or project up and running in less than 5 mins with devenv is a fantastic introduction for folks who do not have experience with nix. devenv solves the "onboarding a new developer in a couple of seconds without having to learn nix" pain point.
the next step there is to migrate away from only using nix shell (via devenv) and go for making ci reproducible locally once folks are more comfortable with nix/less career risk for introducing new tech.
then the world is your oyster.
use dockerTools.buildImage and start creating docker images that are reproducible, and use them in production.
Getting someone up and running with a flake.nix with a defined devShell is just as simple and probably requires less knowledge since it's not another tool bolted on. Making the env and CI reproducible should be seen more like a good stepping stone and you can take the next leap when ready since it can all happen in the same flake file rather than a separate YAML one.
The world is your oyster approach is skipping Docker and just running Nix on the server without the overhead of containers.
fwiw, NixOS does not support reproducible builds as defined by the Reproducible Builds project. It supports reproducible environments/configurations/deployments, or however you want to describe it.
I have a shell script with a bunch of common functions to simplify particular tasks, and then a main script that, when run, installs all needed packages, creates/copies various config files, pulls my C++ code from the repo, builds my servers, and installs them as daemons. It also installs cron jobs that execute backups/replication. It is smart enough to skip already-performed steps and can safely be run multiple times. It can run on Debian, Ubuntu, Arch and Mint.
It is fast and reliable and can deploy new infra and/or restore its state from backup. I've used it for years without any problems.
Let me interrupt that in the middle and undo some steps by hand and it will probably fall on its nose. Nix can handle interruption in the middle of all build steps and can cleanly recover from them.
(fleek author here) Fleek is a simplified wrapper around Nix Home-Manager, which is used to install and configure the apps that make up your $HOME and shell environment.
This is what I love about Hacker News: post about a random tool I found, and the author shows up! :-)
I came across your blog (and from that, Fleek) as I'm rebuilding my home WSL setup, and I just want a simple way to destroy and rebuild my distros and move my installed apps easily. Would Fleek be a good use-case for this?
15 minutes to configure a local VM for development with Ansible? That feels very off. It used to take me about that long to set up a Kubernetes cluster on remote VMs from scratch (with etcd, load balancing, etc.). I haven't used it for a while though.
This is a bit like saying a 15-minute commute sounds "very off" because it only takes you 10 minutes to drive to your office.
Ansible run times are going to vary wildly depending on what's in your playbooks and how fast your nodes are.
In my case, a lot of my VMs share a set of common roles I've been developing over 8 years of using Ansible. The roles span several different OSes and versions, and every play adds time to the run, even if Ansible just skips it or has no action to take. And then third-party roles from Ansible Galaxy typically take even longer to run because they're not optimized for speed either, and they're targeting an even larger set of possible OSes.
Great writeup of your first impressions! Looks like your questions are mostly answered, or at least you have starting points for most of them, so I'll leave those aside.
Thanks for giving such a detailed writeup of your early experiences with Nix!
It seems to me that this "hash all the things" setup is absolutely made for IPFS integration, which AIUI would mean that everyone who chooses to seed those nix stores would incrementally reduce the bandwidth burden on the "main" nix store. I'm not sure if nix hashes take $(uname -sm) into consideration, though, so that could be a hurdle, too.
I can very easily imagine that it would need to be opt-in, to keep security peeps from losing their minds, but my suspicion is that a non-zero number of actual nix users would choose to turn it on in support of the community
You probably have to pull all your sources from GitHub and compile them yourself once, since there is no remote store that provides prebuilt artifacts? (I don't know what the S3 bucket(s) are used for.)
As far as I remember, there was enough money to keep them alive on the current setup for about a year. Worst case, they'll switch to an alternative provider, which _might_ be a bit slower?
Binary cache for old builds will be lost, they move to some other storage solution, but they can't transfer the binary caches due to AWS's egress costs.
Nix reminds me of that xkcd comic about standards. It seems to me to be solving a problem solved in a much better way by other alternatives, with the mindset of IT from a bygone era.
I may just not really be the target demo, or maybe am just a huge idiot, but I struggle to see the appeal, especially when you hear about the occasional horror stories about complex and/or broken environments, or the vim-like overhead to learn it properly.
No package manager before Nix solved the whole problem at all. Nix is the first of its kind.
A good litmus test: install the gnome and the kde desktop envs on your linux system with your preferred package manager. Now remove both. Will you get back to a fresh install state? And it’s not even a hard problem yet.
Now how would it solve installing a second chromium browser that uses a patched libc beneath?
In my opinion, it's somewhat simpler than learning how to do Debian packaging properly (with emphasis on "properly", following all the modern best practices).
Quite a lot of folks that run Nix or NixOS write themselves decent derivations that could be (and frequently are) contributed to nixpkgs (of course, there are a lot of quirks/hacks as well). But I don't think many folks who run Debian make themselves high-quality packages -- e.g. why bother setting up cowbuilder and doing a proper repo for gbp, with all the pristine-tar branch oddities, when checkinstall does the trick?
That's funny, I've gone through almost exactly the same path trying to use NixOS. Tried on VirtualBox and failed. Tried on a Raspberry Pi and failed again. Didn't have spare bare metal, so I've ended up in a "failed" state for now.
Go with Ansible unless you want to fully commit to the Nix ecosystem and spend a lot of time learning it (and it's not easy, for example because of the lacking documentation). Also, Ansible is something you'll more often find in actual projects/workplaces.
well, yes, true, but ansible sucks, especially for small and chaotic projects where people just change things on the machines without reflecting it in ansible. it's a nightmare
If you can adopt NixOS, then it's worth considering. But if you have to use a standard distro then Nix on its own won't solve your configuration-management problems.
To sum it up succinctly, I would say that Nix helps you manage $PATH while NixOS helps you manage /etc.
If you're planning on using Ansible for your job or to administer Linux systems, then stick with Ansible. Nix is very niche and you aren't going to see much return on investment until adoption is significantly higher, and I'm not sure that will happen in its current state.
That being said if you want to learn Nix for its own sake or because it is awesome, which it is, then have fun!
Hard disagree. In the context of a team, just about any mainstream tool will be more manageable in the long run. There aren't that many people willing to grok Nix and its massive learning curve.
I got into Ansible around 8 months ago and wrote up a blog post with detailed steps, and a companion repository, about how to get started with using it. The blog post goes into how to setup everything needed for Android development via Ansible.
Your link refers to the 23.05 stable release channel, so it'd likely stay at 2.11. Package updates don't get backported to stable channels except for fixes. Additionally, the package search page probably isn't updated in real time so the version might be slightly out of sync.
Nixpkgs is generally quick to update packages because Nix encourages automation.
nix: automating running scripts from random readme.md as root.
the number of JavaScript devs just learning SE in this thread defending the maybe-good-enough-for-your-dev-box nix is so amusing.
it's like seeing second-year CS students thinking they've mastered systems programming because they wrote one toy compiler that optimizes the one loop they were looking at at the time. not saying it's bad. it's a very essential first step and everyone will step on it when starting their journey, but the amount of misplaced self-confidence is too funny from a more experienced vantage point.
This is a pointless comment, nix is more than robust enough for servers, gaming machines, general purpose desktops and developer machines. If you don't understand the tech don't disparage the users or the tech.