I'm giving it a go now in the easily undoable single-user "no-daemon" mode.
Super excited about how approachable this makes Nix; I'd put it off for many years due to what turned out to be an incorrect assumption about the level of commitment!
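For anyone else curious, the single-user mode is a documented flag on the official installer. A sketch of what that looks like (shown as comments since it actually modifies your system):

```shell
# Single-user ("no-daemon") install, per the official install script:
#
#   sh <(curl -L https://nixos.org/nix/install) --no-daemon
#
# What you get: a /nix store owned by your user, no nix-daemon, no extra
# build users, and one line sourced from your shell profile. Undoing it
# is mostly:
#
#   rm -rf /nix        # the store itself
#   # ...plus removing the added line from ~/.profile / ~/.zshrc
```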
How is it logic-defying? It seems straightforward to apply to any compositional entity: I could make one with "Apple Executive", "Tim", "Phil", and "Eddy".
While that breaks the Apple executive example, it's still just as easy to explain from a programming perspective: each of the sides could be a trait/interface which is implemented by, and part of, the same entity/singleton.
Yes, but real world entities don't stop being a thing just because you refer to their one specific trait. The god example only works if you assume it works. That's why it's not logical.
Anyway, traits are an "is a" descriptive thing, not a referential-equality "is". The Trinity relations are not "is a" to begin with, or we'd have many gods, and Christian priests are not into that.
Given that it is a discussion of an entity already defined by properties not shared by any other real-world entity, it seems not just logical but reasonable to assume that it differs from them in other ways too.
I tend to agree traits is not an accurate description, but it is a reasonable analogy.
> The Trinity relations are not "is a" to begin with, or we'd have many gods, and Christian priests are not into that.
Saying "priests" seems an odd choice of wording. Why not just "Christians", or even "Christian theologians"?
Just because someone comes up with one illogical idea, it isn't reasonable to say it likely "differs in other ways too". Instead maybe establish one unusual thing to be verifiably true before stacking on more.
> Saying "priests" seems an odd choice of wording.
Meh. Priests set the norm for the regional communities.
Honestly I think you need to divide "Nix" up into two:[1] a language interpreter and the build system/sandbox. First, Nix interprets the expressions. It does that in your local context - so, if it's configured to, it might read your .aws/credentials file to get access to resources stored in your S3 bucket. Once it's established what it needs to build, the build system takes over, sets up a sandbox, and runs bash scripts and other commands to produce output. I had conflated the two for some time, and it caused me a variety of confusions that are resolved by observing the distinction.
(Also, I tend to think it's better to view NixOS as a tool for producing a Linux distribution, which in many cases just happens to have an install base of 1 - but that's not necessarily the case: you could build your images with NixOS and deploy a dozen of them to AWS.)
[1]: this was meant to be a joke, but it occurs to me that a person might think I missed that they had already divided "Nix" up into two, the build system and the language. But a language is not a command and doesn't do anything, so it can't possibly be an interpreter as well - and there could be, and to some degree are, multiple Nix interpreters. So it is the first of those divisions of Nix that I sliced in two, and you end up with three Nixes: the build system/sandbox, the main language interpreter, and the language.
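The two phases are visible in the classic CLI, which splits them into separate commands (the derivation below is illustrative, not anyone's real config):

```shell
# A tiny illustrative derivation to make the two phases concrete.
cat > default.nix <<'EOF'
with import <nixpkgs> {};
runCommand "hello-file" {} ''
  echo hello > $out
''
EOF

# Phase 1: evaluation. The interpreter runs in *your* context and only
# produces a .drv file describing what to build -- nothing is built yet.
nix-instantiate default.nix

# Phase 2: building. The build system reads the .drv, sets up a sandbox,
# and runs the builder script in isolation to produce the store output.
nix-build default.nix
```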
I use it on macOS for all of my dev environments. So instead of a README that tells me to install tools X, Y, and Z, direnv notices that I'm now in the project directory and makes those tools available; when I switch to a different project I end up with a different set of tools (pinned to the same versions on all my machines, tracked by git).
It's a bit more involved than homebrew, but you end up with a very precise definition of what your project depends on. This prevents all kinds of headaches re: Linux and Mac users depending on subtly different versions of grep or sed. Plus you can recreate the same environment in CI and Prod without having to define those dependencies separately.
Or you could, if your team was bought in. That's the hard part.
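The setup described above can be sketched with a one-line `.envrc` containing `use flake` (via nix-direnv) plus a flake like this - the input pin and tool list here are illustrative, not anyone's actual project:

```nix
# flake.nix (illustrative): the devShell is what direnv loads on cd.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-darwin;
    in {
      devShells.x86_64-darwin.default = pkgs.mkShell {
        # The "tools X, Y, and Z" from the README, pinned by flake.lock
        packages = [ pkgs.jq pkgs.gnumake pkgs.nodejs ];
      };
    };
}
```

The generated flake.lock is what gets committed, so every machine (and CI) resolves exactly the same versions.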
Just for giggles, what projects do you work on that require such variation in tooling that something like this becomes worthwhile?
I always see these types of arguments for why Nix is so great, but it’s never been a pain point for me in 10+ languages and 20 years of development experience. I see your example of bash scripts, but this can’t all just be for writing scripts.
Not the OP, but I work in consulting. When I was still hands on keyboard, this would have been very helpful for the clients who don’t provide their own hardware or environment for us to use. I also do work for extremely large organizations who have literally dozens of different stacks accumulated over the decades.
In addition, I play with all sorts of open source tools and they often come with their own tool chains and expectations. Python version management in particular benefits a lot from this level of isolation. Instead of figuring out the different version management tools for each stack I use a higher order environment management tool in Nix.
Some others are solving these issues with containers, and that’s a part of the nix strategy as well.
I've previously used Nix to manage C/C++ projects and ended up with a really nice flow, so I really want to use Nix for Python, since I've had so many issues with conda. However, every time I've tried, I've run into enough issues getting the ML packages I use to work (dealing with transitive dependencies on esoteric packages, mostly) that I couldn't justify continuing, rather than just hacking my way to a working conda environment with random pip packages, pinned versions, etc.
I've been considering an AI project for consuming a conda build recipe and digging into the codebase to extract extra info about the project and make it into a nix flake--which would be a bit more stable. I figure you could test for equivalence in a few ways and feed the outputs of that test back into the model. Hopefully there's enough context in the conda recipe, the project codebase, and whatever errors pop out to get some of them converted with minimal handholding.
Because regardless of what the cool kids are doing, important work is being done in conda, and usually by people whose expertise isn't software packaging.
Yeah, I get the idea, but I'm asking the OP for concrete examples. Python has its own environment management options that work well. I've read on this site over and over what it can do - I'm wondering if anyone has hard examples of tooling they switch between enough to make it worthwhile.
Scripts are the main place where it matters tbh - most language ecosystems have their own way of doing this stuff, if you can stay within the language you're fine. But if you (or your client) have a culture where people throw in awk/grep/sed then there's just no real alternative. Or if it's a polyglot project where you have three different languages (including shell) then you may not be able to use a single language package manager.
Agreed, but people have a tendency to do otherwise. At the very least you still have to install the right version of the language. And there are probably a few other tools, linters and such... Next thing you know you've got quite a pile that's not covered by your language's package manager.
I find different languages/ecosystems have different cultures around this. In Java/Maven land it's fairly common to have a self-contained project where all the helpers like linters etc. are set up in Maven so all you need is a vaguely recent JVM and vaguely recent Maven. But there are other ecosystems where people like to throw a bunch of shell scripts etc. in.
When Python comes up as an example of a problematic packaging ecosystem, Java often comes up as an example of it being done right. I think the key is the cultural difference you're pointing to. JVM folk are not tempted to stray from the JVM. Python folk think of Python as a convenient harness for that cumbersome bit of FORTRAN that they can't live without.
I only worked in a Java shop once, but I remember that they looked at me like I was an alien when I proposed that we involve a subprocess written in a different language.
At the time I thought they were insane for writing everything themselves but I've since seen how gnarly packaging can get and now think that they're... less insane.
The most recent offenders were nodejs and kustomize used as part of a test flow orchestrated by a Makefile, run both locally and in CircleCI.
People will just install the latest version and start hacking away, and now you've got all this code that depends on that version. Backwards compatibility ain't perfect, so maybe several years later the original author doesn't work here anymore, tests are breaking in subtle ways when you install what's now the latest version, and there's nobody to ask what the "right" version is.
But since we're a culture that uses these tools (though I wish we weren't), this story has played out several times so different projects need different versions--you can't just discover the right one once and leave it installed in your system, you have to install the right one for your project and change it when you switch to a new project.
For the most part these are go projects, so even though there is language-specific dependency locking via go.mod and such, dependencies which aren't go libraries but which are nonetheless needed to work with the project (e.g. make) are left as an exercise to the reader. Make is pretty well behaved, I haven't had to do much version antics with it, but I wouldn't say that's the norm.
When I find one of these repos I put my archaeologist hat on and write a flake.nix to provide whatever the dependency is, and then I walk the version backwards until it starts working. That way next time I'm in that project I don't have to go through that exercise again.
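The "archaeologist" flake usually amounts to pinning nixpkgs to an older revision that still carries the version the project expects. A sketch (the pin and tool list are placeholders, found by walking backwards until the tests pass):

```nix
# flake.nix sketch for an old repo: pin whichever nixpkgs revision
# still has the tool versions the project actually works with.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-22.11";
  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.nodejs pkgs.kustomize pkgs.gnumake ];
      };
    };
}
```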
To make matters worse, people often try to help by adding entries to the makefiles which download the correct version for that project, but some people have the newer arm chips and others are still on x86 so confusion of a new kind ensues. Of course it's easy to fix these scripts to detect the local architecture, but that's a whole extra step.
And then maybe you're trying to make this stuff work in CircleCI or somesuch, and you don't want the workflow to just be reaching out via https and blindly running whatever comes across the wire, because who knows if it'll be the same thing tomorrow. So you add hash checks; but once you've got the hash checks and the architecture checks, and you're checking the right hash for your architecture... well, you've basically got a poor man's flake.lock at that point. You might as well use the same nix config in both places, rather than use homebrew or apt or whatever locally and then figure out how to do the same thing via CircleCI orbs in YAML - and god forbid you have to do it in prod too, so now there's a Dockerfile... Having a single source of truth for dependencies and using it everywhere is super handy.
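That "poor man's flake.lock" tends to look something like this in a Makefile helper script - tool name, URLs, and hashes below are made up for illustration:

```shell
# Ad-hoc version of what flake.lock gives you for free:
# per-architecture URLs with pinned hashes.
arch="$(uname -m)"
case "$arch" in
  x86_64)        url="https://example.com/mytool-x86_64.tar.gz"
                 sha256="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" ;;
  aarch64|arm64) url="https://example.com/mytool-arm64.tar.gz"
                 sha256="bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb" ;;
  *)             echo "unsupported architecture: $arch" >&2; exit 1 ;;
esac
echo "would download $url and verify it against sha256 $sha256"
# The real script would then do:
#   curl -fsSL "$url" -o mytool.tar.gz
#   echo "$sha256  mytool.tar.gz" | sha256sum -c -
```

Every repo grows its own slightly different copy of this, which is exactly the duplication a shared nix config removes.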
That's work. Another example is in my personal projects. I use helix, so there's a .helix/languages.toml in my repo which defines the language servers that I'd like it to use for that project. But merely pointing helix at the language server isn't enough, it also has to be on the PATH. My older projects are using mypy, and my newer ones are using pyright (python type checkers).
Sure I guess I could just install both at the system level everywhere I go, but when I clone the project on a new machine I want to have everything work right away--I don't want to start coding and then wonder why my editor sucks and then go discover and install the right LSP and then resume coding. I'd end up with a smattering of different versions installed across all of my devices, even for the same project. If I find a bug which happens on this machine but not that one, I'd have a much harder time knowing where to start re: debugging it.
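Concretely, the per-project piece is tiny - a shell.nix (or flake devShell) that puts the right checker on PATH. An illustrative sketch:

```nix
# shell.nix sketch: provides the LSP that this project's
# .helix/languages.toml expects, so the editor works on first clone.
with import <nixpkgs> {};
mkShell {
  # newer projects list pyright here; older ones would list mypy instead
  packages = [ python3 pyright ];
}
```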
Finally there's this idea that maybe you don't even have to clone the repo, you can just reference the app you want to run from anywhere. I invoke some of my tools like:
nix run git+ssh://git@github.com/myorg/myrepo -- mycli --best-arg
Nix knows how to set up the environment for running the app (the one I have in mind is written in python, so a certain version of poetry is involved etc...) so I really don't need the caller to think about that environment at all. I like this because it decouples the orchestrator from the executor. So if I can manage to get something working locally with one of these commands, then I can go put the command someplace weird like in an Airflow DAG (kubernetes pod operator, NixOS image) and I have a pretty strong assurance that it'll work the same in the remote environment as it does locally.
From the perspective of a nix user, these problems are all the same problem and they're everywhere and nix is the only thing that solves all of them at once. My feeling is that from the outside, they look like separate problems, and it's not clear just how many of them are solved by nix--so the juice doesn't appear to be worth the squeeze.
In fact, you can just download a statically built Nix binary and start using it. The installer is IMO basically unnecessary for single-user mode without root on Linux.
For context, Determinate is a startup made of the Nix guy and some of the senior community members. They explicitly support the Steam Deck (and used it as a test case to create their installer).
Not just senior community members: DetSys is the company of the creator of Nix.
This installer and the relationship between DetSys and Nix has also been subject to major criticism about conflict of interest between community interests and DetSys, since everyone agrees the official Nix installer has major issues. "Determinate Nix" (as DetSys calls the nix configured with their installer) also enables features that are the de-facto way to use Nix these days, but are disabled in the default distribution because of... let's say... commitment issues.
If you want a community run alternative, try Lix. They have a version of the DetSys installer too - and they actually cut releases of nix instead of building moats around it.
There should (hopefully) be an ~official-nix version of the detsys installer in the ~near future. (That said, one slice of the reasons this has taken a while is that the upstream Nix variant is obliged to stick to official features for now.)
It's basically at the point where it just needs a redirect from the nixos.org domain, and for the project/community to work through how to manage its development/relationship to the Nix repo.
Lix is excellent. It is already faster (parsing), safer (better defaults, removed footguns), and easier to use (better errors, etc.) than Nix. If anyone wants to get started using Nix, then I highly recommend you install Lix from the link in the parent comment.
Lix is a fork of Nix (the program, "CppNix") and also an alternative community. They provide a few ways to install, like nixpkgs itself, but also a DetSys installer fork.
It's generally a response to corporate interests (including Anduril, who put Nix on autonomous weapons) and elbow-shoving so-called meritocracy (including a very infamous concern troll) becoming dominant inside the Nix community and Nix leadership, by people who didn't want to work from inside the system to reform, but also know they can't maintain all of Nixpkgs by themselves either (which already bled a bunch of maintainers).
Just out of curiosity, have you checked out Spack (https://github.com/spack/spack), which has a lot of HPC users? Support for mixing and matching both system and from-source dependencies has been extremely useful in my work.
I'm playing with Guix and even made a dedicated installation. I find it more attractive than Nix (more sane language, shepherd service manager), but it lacks in diversity of packages and many of the ones I require are very stale.
I was told you don't need to subscribe to the mailing list, but I've sent mailing list messages with patches and discussions of technical issues that were never accepted.
Last time I checked (a couple of years ago), the version of nix in the Arch repos shipped a subtly modified default configuration that rendered it broken, so yeah, a quintessential Linux experience.
Agreed. I use a flavor called EndeavourOS. Very happy with it. I have had bad experiences with other flavors in the past like Manjaro. I think yay is peak for me so far.
I've been using it for development, but for non-headless stuff IIRC there are a lot of issues with OpenGL drivers and hardcoded X11 paths and such. I sank a good chunk of time into it before giving up.
It's really 50 years of Unix's dumb practice of installing everything at absolute paths in `/usr` that has come home to roost. I wonder if Nix will finally make people write relocatable apps.
It's designed like that because of this issue. They obviously don't want to do that, but if they don't install everything in global absolute paths then too much badly written software breaks.
I'm currently experiencing this with a Tauri-based app. Nix has been great for us for local dev and service builds, but building an application inside of Nix that needs to run outside of it has been challenging to say the least.
Yeah, I just went through this process myself on the Steam Deck, and the fact that it can't access OpenGL or Vulkan natively makes anything that isn't terminal-based a massive pain in the ass.
There is nixgl which you can wrap around any executable to make it work and it's got support in home-manager (https://nix-community.github.io/home-manager/options.xhtml#o...) but it's a huge annoyance having to wrap `config.lib.nixGL.wrap <package>` around everything you want to install.
It really depends on how you start your programs. If you wrap your compositor with nixGL, then everything started through your compositor inherits the relevant env vars (LD_LIBRARY_PATH etc.) and you don't have to wrap the programs separately.
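A home-manager sketch of that approach, assuming the nixGL module linked above (option names per recent home-manager docs; sway is just an example compositor):

```nix
# Wrap only the compositor; its children inherit the GL env vars.
{ config, pkgs, ... }: {
  nixGL.packages = import <nixgl> { inherit pkgs; };
  wayland.windowManager.sway = {
    enable = true;
    package = config.lib.nixGL.wrap pkgs.sway;
  };
}
```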
They also have a simple uninstall command now. It used to be kind of a pain in the butt, but now you can install and uninstall it in a couple of minutes.
The Determinate Systems installer for Nix is honestly a great example of a well-designed command line tool. It very transparently tells you "this is what we're going to be doing; will you allow me to do all of these things?", and then it tells you what it's doing as it does it. And they bundle the uninstaller.
Last I checked, there was a major caveat to this - it mucks with your PATH in intrusive ways. So it's not "just try it", it's "switch to a frankenstein system and don't turn back"
I'm not sure about using Nix (the package manager) on other distros/macOS, but I remember that when I tried out NixOS years ago, one of the things that struck me was just how little PATH stuff was needed. Literally the only thing in /usr/bin was `env`; everything else was specific to each given package. My naive expectation would be for most of the stuff to be self-contained: maybe a single script to `source` per package, set up as an invocation at the end of a shell config, and worst case, switching back would be as simple as removing that invocation.
- Normally, just deleting /nix gets you 95% of the way back to normal.
- If you want a bulletproof removal, the determinate systems Nix installer saves a file with a list of all the operations it made during installation so that it can be cleanly removed.
- If you want to stop home-manager from hooking into your shell without completely removing Nix, there's a separate uninstall command for that.
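Roughly, the removal paths look like this (shown as comments since they're destructive; double-check against the current docs before running anything):

```shell
# Determinate installer: it keeps a receipt of everything it did and
# can undo itself:
#
#   /nix/nix-installer uninstall
#
# Plain single-user install: the store plus the profile hook are all
# there is to remove:
#
#   rm -rf /nix
#   # ...and delete the nix line from ~/.profile or ~/.zshrc
```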
Hmm, it didn't seem that way to me. I specifically did it the way I describe in the article to keep it easily reversible, though I must admit that's just theoretical: I haven't tried uninstalling it.
https://forum.elivelinux.org/t/how-to-install-nix-packages-o...