They do, but the people who spend time on Nix issues typically don't notice or count the time saved, because saved time is much harder to notice than time spent.
When reproducibility issues cost 16 hours every three to four months and Nix maintenance like updating pins costs 8 hours every month, most people will feel like the first option is less work.
Imagine you had data showing that:
- with Nix, your build is 99% likely to work
- without Nix, your build is 90% likely to work, but takes 16 hours to fix when it breaks
- the non-Nix build's 10% chance of breaking can also hit at random, at any time
- the Nix build takes 8 hours per month to maintain for the first 6 months, 4 hours per month for the next 6 months, then 1 hour per month thereafter
Which would you pick? What I describe above is roughly what the situation has been in my experience.
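To make the trade-off concrete, here's the same scenario as back-of-the-envelope arithmetic. This is purely illustrative, using only the hypothetical numbers above, written as a tiny Nix expression since that's the topic (the filename is just an example):

```nix
# costs.nix -- evaluate with `nix-instantiate --eval --strict costs.nix`
{
  hoursWithoutNix = 4 * 16;         # ~4 breakages a year at 16 h each  = 64 h, every year
  hoursNixYearOne = 6 * 8 + 6 * 4;  # 8 h/month, then 4 h/month         = 72 h
  hoursNixSteady  = 12 * 1;         # 1 h/month from the second year on = 12 h/year
}
```

Under those assumptions, Nix costs slightly more in the first year (72 vs 64 hours) and roughly a fifth as much every year after (12 vs 64 hours), which is exactly why it feels worse up front.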
I can say from personal experience that I've seen many days devoted exclusively to Nix upkeep and maintenance. That was true of everyone from junior people to people who had spent half a decade or so deep in the community and running Nix as their daily driver.
I've never had to do much for Nix itself, but packaging something to build from source can often require quite a bit of effort. Applications that use a pretty unconstrained build/install process upstream may expect to do a lot of things that are not allowed in the Nix build sandbox, like unrestricted network access and/or overwriting files in existing packages on the system. To deal with that you really have to dive in, learn how the sausage gets made in the upstream package, make some choices about if/where to compromise, and then spend some time tweaking and debugging. That can be a pain and can definitely take a day or two.
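For readers who haven't hit this: the usual compromise for "the upstream build wants the network" looks roughly like the sketch below. The artifact is pre-fetched as a fixed-output derivation (Nix allows network access there because the hash is pinned) and handed to the sandboxed build. The URL, filenames, and package name are placeholders, not a real package.

```nix
{ pkgs ? import <nixpkgs> { } }:

let
  # Pre-fetched artifact; hash is pinned, so this step may use the network.
  prefetchedAssets = pkgs.fetchurl {
    url = "https://example.org/assets-1.2.3.tar.gz";  # placeholder URL
    hash = pkgs.lib.fakeHash;                         # replace with the hash Nix reports
  };
in
pkgs.stdenv.mkDerivation {
  pname = "upstream-app";  # hypothetical
  version = "1.2.3";
  src = ./.;

  # Point the upstream build at the pre-fetched file instead of letting it
  # download at build time (the part the sandbox would block).
  postPatch = ''
    cp ${prefetchedAssets} vendored-assets.tar.gz
  '';

  # A real package would run its normal build here; this sketch just shows
  # the vendored artifact ending up in the output.
  dontBuild = true;
  installPhase = ''
    mkdir -p $out
    cp vendored-assets.tar.gz $out/
  '';
}
```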
I've only had 'maintenance' issues with Nix itself on macOS, where OS upgrades routinely nuke Nix's hooks into the OS or add restrictions that break things. (But they do that to other package managers as well.)
I'm taking that approach with the package I've been working on, which has a somewhat pathological (by Nix standards) Gradle build that does things like:
- manually download a copy of Elasticsearch outside the normal Java dependency scheme
- run NPM to fetch remote libraries to build web assets at build time
- *also* run Yarn, for some reason
- use Git at build time
The ways it does all of these things are actually fairly thoughtful (for example, it does checksum the artifacts it manually grabs at build time to verify their contents), but they don't play nicely with running builds in offline mode or under a user that has no $HOME. On top of that, it's one of those freeform 'my build tool configuration is a weird DSL in an imperative, general-purpose, Turing-complete language' situations, and I'm not very familiar with either the language (Groovy) or the DSL. So it's a lot of quirks to cope with.
I've made quite a bit of progress in building it from source by making a few small patches and eventually disabling the sandbox for now, but it's still dying on a weird test failure for reasons I don't yet understand. At this point I'm just back to munging the binaries provided by upstream because I was mostly building from source to learn about the project and how it's distributed/deployed anyway.
I messed around a bit with gradle2nix, hoping for a better-behaved, old-school FOD-based build with Gradle in offline mode, but that turned out to be pretty brittle: gradle2nix is unmaintained, and due to some design limitations it couldn't actually capture all of the dependencies. I'm kinda interested in working out something better, but on the other hand this is a third-party package and I don't use Gradle or Groovy for any kind of development myself, so mastering Gradle's quirks and wrangling it into the Nix sandbox for this package is more of a yak shave than a practical skills investment for me.
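For anyone unfamiliar with the FOD pattern I'm referring to, the rough shape is below. The package name, Gradle task, and paths are purely illustrative, not the actual package from this comment. One fixed-output derivation is allowed network access because its output hash is pinned, so it runs Gradle once to populate a dependency cache; the real build then points GRADLE_USER_HOME at that cache and runs offline. The brittleness lives exactly here: the fetched cache has to hash identically every time, and Gradle caches are full of lock files and metadata that often don't.

```nix
{ pkgs ? import <nixpkgs> { } }:

pkgs.stdenv.mkDerivation {
  pname = "myapp-gradle-deps";  # hypothetical
  version = "0.0.0";
  src = ./.;                    # the upstream source with its Gradle build
  nativeBuildInputs = [ pkgs.gradle ];

  buildPhase = ''
    # Keep Gradle's cache inside the build directory so we can copy it out.
    export GRADLE_USER_HOME=$PWD/gradle-home
    gradle --no-daemon assemble  # whatever task actually pulls everything
  '';

  installPhase = ''
    # The module cache is what the offline build will be pointed at later.
    cp -r gradle-home/caches/modules-2 $out
  '';

  # Fixed-output: network is allowed, but the result must hash to this value.
  outputHashMode = "recursive";
  outputHashAlgo = "sha256";
  outputHash = pkgs.lib.fakeHash;  # replace with the real hash after a first run
}
```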
It does, but some people are good at numbing themselves to it.
So they block losing a day or half a day to a lack of reproducibility out of their memory, or recall it as "no big deal".