
There's another interpretation available: kubernetes is not a mature system, and that's why it's too much work to package properly.

200 library dependencies is not too many for Debian to handle, as long as they don't change very often and the work is useful for other packages. But Kubernetes' dependencies do change often, and there are not enough other Go systems around to share the work. This is not a stable system, therefore Debian can't ship it in a stable release.



I think there’s a difference between “stable” and “evolving”. Software with a much lower release cadence than k8s works well with Debian’s distribution model, but it falls down with frequent releases and a large dependency tree. I guess you’d see that with NodeJS applications as well - I don’t think it’s practical for Debian to re-invent NPM and try to distribute and ship the large and changing dependency trees you find in those projects. However that doesn’t mean the software is not _stable_, for the typical definition of stable.


> I think there’s a difference between “stable” and “evolving”.

No, that's pretty much the Debian definition of "stable": software that doesn't need changing for multiple years. The Debian term "stable" has nothing to do with whether or not a program crashes regularly, but with how often it requires maintenance. By that definition, "evolving" software indeed isn't "stable" (yet).


But by that logic no web browser can be in Debian stable.


From the article:

>Beyond Kubernetes, web browsers clearly fall into this category. Distributors have generally given up on trying to backport patches to older browser releases; they just move their users forward to new releases when they happen. The resources to do things any other way just do not exist.

Exceptions are already made for browsers, but they're browsers. They're practically essential to 99% of graphical Debian installs and don't expose the really nasty unstable bits (like V8's API surface) to the world. I doubt the Debian TC will make that exception for devops software with much less mindshare and a public API surface that is the software, at least not on the stable channel.


Debian just makes an exception for the Firefox and Chromium packages.

You can tell that is what the current maintainer was hoping for here. Instead, the previous maintainer-- who literally wrote that it would probably take two full-time devs to properly package this and maintain the package-- goes full Vogon and summons the great Debian bureaucracy to solve this with their poetry.


Note that there is no "firefox" package in Debian stable -- only "firefox-esr".


It kind of feels like the same problem OpenStack had, and ultimately failed because of.

That being said, OpenStack is technically still around, so it's not completely "dead". ;)


Also I don't get it: what's the problem with shipping a statically linked binary? Yes, it goes a bit against the Debian way, but it would facilitate things like out-of-the-box init scripts.


There's no problem with shipping a static binary, except that it means that the binary must be updated specifically every time a security issue or major bug is fixed in a dependency. For a normal major Debian package, a fix to openssl or libc or libgtk-something automatically fixes that bug in the major package as well. For a static binary, Debian has to notice the change, rebuild the static binary, and ship it out.
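
To make the contrast concrete, here's a rough sketch (binary paths illustrative, output abbreviated): a dynamically linked daemon picks up the fixed library on its next restart, while a static binary carries its own copy and has nothing on disk to swap out.

    # Dynamically linked: the binary resolves libssl at load time,
    # so upgrading the libssl package delivers the fix (after a restart).
    $ ldd /usr/sbin/nginx | grep libssl
        libssl.so.1.1 => /usr/lib/x86_64-linux-gnu/libssl.so.1.1 (...)

    # Statically linked: no shared libraries to patch; the fix means
    # rebuilding and reshipping the binary itself.
    $ ldd /usr/local/bin/some-static-go-binary
        not a dynamic executable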

The benefit of a distro is shared, trustable work. Increasing the workload on the distro maintainers is not great, and Debian is an all-volunteer organization, especially sensitive to that workload.


> a fix to openssl or libc [..] automatically fixes that bug in the major package as well. For a static binary, Debian has to notice the change

Because this argument is often made, I think it's worth pointing out:

Just shipping a fixed dependency like `libssl.so` isn't enough to make the fix effective on the end users' machines. You also have to restart all the running executables that link in that dependency.

As far as I'm aware, Debian does not handle that for you. So even if it ships that small, nice dependency-only package update via manual or unattended upgrades, your long-running nginx will still be vulnerable until you manually restart it.
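
You can see the window yourself (a sketch; the PID is illustrative, and inspecting other users' processes needs root): after the library package is upgraded, processes still mapping the old, now-deleted file are still running the vulnerable code.

    # Long-running processes still mapping a replaced (deleted)
    # shared object haven't picked up the fix yet.
    $ sudo grep -l 'libssl.*(deleted)' /proc/[0-9]*/maps
    /proc/1234/maps    # e.g. an nginx worker started before the upgrade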


And that is actually how it's supposed to work. The last thing you want is unattended-upgrades silently restarting services behind your back (a restart that can fail).

If the upgrade fails to restart nginx properly, your customers won't be seeing the pages they need to. If the upgrade fails to start sshd, you have just lost access to the system(s) you would need to fix. Plus my personal favourite: if the upgrade-and-restart breaks your message broker, EVERYTHING is on fire.

In most non-orchestrated, non-cloud-native environments the right way for security upgrades is to have them available, preinstalled and configured, but not yet active. What you do need is monitoring to tell you these things are waiting so you can apply them as soon as feasible.
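
As a sketch of that "available but not yet active" posture (the option and flag file are from a stock unattended-upgrades setup; wiring it into monitoring is up to you):

    # Let unattended-upgrades install fixes, but never reboot on its own:
    $ grep Automatic-Reboot /etc/apt/apt.conf.d/50unattended-upgrades
    Unattended-Upgrade::Automatic-Reboot "false";

    # Then have monitoring watch for the pending-restart signal:
    $ cat /var/run/reboot-required
    *** System restart required ***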

Although to be fair, once you have orchestration, robust zero-downtime rollouts and a good CI to rebuild new versions as upgrades become available... that's a different story.


> The last thing you want is unattended-upgrades silently restarting services

I don't buy this argument: It makes an arbitrary distinction between software in which the fix is reflected, and software where it isn't.

For example, if said upgrade breaks an on-demand job which isn't a permanently running process but is invoked by your customers through nginx, your customers also won't be seeing the results they need.

If you don't want unattended-upgrades' changes to take effect in the running system, you might as well configure it into the mode where it only notifies you, instead of installing the new version automatically.


Needrestart handles that for you: https://tracker.debian.org/pkg/needrestart
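
For example (output illustrative), it can report without touching anything, and its batch mode is easy to feed into monitoring:

    # List services running outdated libraries, restarting nothing:
    $ sudo needrestart -r l

    # Machine-readable batch mode for scripts/monitoring:
    $ sudo needrestart -b
    NEEDRESTART-VER: 3.5
    NEEDRESTART-SVC: nginx.service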


Apt occasionally prompts to restart the ssh server, so it does have some knowledge of this.


Correct.

Furthermore, rebuilding and distributing a large number of large binaries every time a vulnerability is fixed is harmful!

- It encourages users to delay security updates. Hundreds of millions of people have slow, expensive, or capped Internet connectivity.

- It makes the distribution unsuitable for many embedded/IoT/industrial/old devices with very limited storage.

- It gives the impression that the distribution is bloated.


Debian has an escape valve from its fundamental spirit-defining policies even for non-free software. And if Debian didn't have that policy wrt non-free, I could easily write a similarly general post to yours listing the very real dangers of shipping blobs, which probably carry more weight than the dangers of vendoring you outline.

So, why can't there be a repo for all the vendored things, and make a policy for the maintainers there?

By not having that escape valve, the pressure lands on the maintainers-- in this example the maintainer's work is made impossible, as evidenced by the previous maintainer's statement that it would take two full-time devs to package this according to the current policies. So you encourage burnout in your volunteers.

This also encourages passive aggressive software design criticisms. Look at the very first comment here that shifts to talking about the "maturity" of the software under discussion. I'd be willing to bet I'd see similar flip judgments on the list discussion of this-- all of which completely ignore the monstrous build systems of the two browsers that Debian ships. So apparently there's an escape valve for exactly two packages, but no policy to generalize that solution for other packages that are the most complex and therefore most likely to burn out your volunteer packagers.

Keep in mind you are already on maintainer #2 for this package that still does not ship with Debian because shipping it per current policy is too burdensome. Also notice that you've got a previous maintainer on the list-- who already said this is a two-person job at least-- calling out the current maintainer for being lazy. It seems like a pretty lousy culture if the policy guidelines put policy adherence above respect for your volunteer maintainers.


> the very real dangers of shipping blobs, which probably carries more weight than the dangers of vendoring you outline.

This is a false dichotomy.

> By not having that escape valve

Please do your research before posting. Building packages with bundled dependencies is allowed, actually.

Having a handful of small files from 3rd parties bundled in a few packages is relatively harmless (if they are not security-critical) and allowed.

Having 200 dependencies with hundreds of thousands of SLOC creates a significant burden for security updates.

Put security-critical code in some of the dependencies and the burden becomes big. Make the dependencies unstable and it gets worse.

Now create a similar issue for many other packages doing the same and the burden becomes huge, for the whole community.

> This also encourages passive aggressive software design criticisms.

I would call it outspoken criticism of bad software design.


Static linking means that a vulnerability in one of the libraries requires every statically linked binary using that library to be patched.

In case of shared libraries, only the affected library package has to be patched.


Tracking security incidents. If every piece of software uses the system-wide copy of a vulnerable library, you only need to patch that library and the entire system is safe. For every embedded copy of that library, a separate patch and release must be made. That's a lot of management overhead.


You are confusing static linking with embedding library sources.


Both static linking and embedded copies have similar issues.

With static linking you have to track where the static library ended up (recursively) and then issue rebuilds in the correct order for all packages that contain the static library (directly or indirectly).

With embedded copies you have to search the source code archive for the copies and then patch and rebuild each copy.

In some ways static linking is more complicated to deal with, because a static library can end up inside some other static library.
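
A rough sketch of that bookkeeping on the Debian side (using build-rdeps from devscripts; the package name is hypothetical):

    # Find packages that build-depend on the static library; each one
    # must be rebuilt after the fix lands.
    $ build-rdeps libfoo-dev

    # Because a static library can be absorbed into another static
    # library, the search repeats for every hit, and the rebuilds
    # have to be issued in dependency order.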


> There's another interpretation available: kubernetes is not a mature system, and that's why it's too much work to package properly.

If that were true, one would expect the packagers at Arch to be leveling similar complaints about the "maturity" of the software or the "awful" build system used by Kubernetes.

I'm going to rankly speculate that:

1. That is not the case.

2. Kubernetes was packaged in Arch in less time than it's taken for Debian to discuss the policy disagreement between the previous and current package maintainers, neither of which have been able to deliver a maintainable Kubernetes package in Debian yet.

3. This says more about the efficacy of Debian's packaging policies and package dev UX than it does about Kubernetes.


It's funny. I was discussing looking into packaging kubernetes with another Arch developer yesterday, and noticed this discussion today.

I'm the package maintainer for go and maintain a number of go packages in Arch Linux, along with writing the current guidelines and looking into packaging strategies for golang.

I'll offer a more cynical view than the one presented: golang is not a mature packaging ecosystem.

So 1) is partially correct; go just sucks from a packaging perspective. It's not really an issue with kubernetes itself.

We can probably package up kubernetes within the next hour and drop it into `[community]`, as Arch has less structure and QA around the packaging process. However, our largest hurdle is that we package the latest version. Is kubernetes going to work properly on go version `1.15.3`? In my experience, container software brings out the worst of the go runtime, and any changes to syscall handling, goroutines, or memory management are a cause for concern.
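
One concrete check (binary path and output illustrative): Go binaries embed the toolchain and module versions they were built with, so you can see exactly what a rebuild against the system Go would change.

    # Show the Go toolchain and module versions baked into a binary:
    $ go version -m /usr/bin/kubectl
    /usr/bin/kubectl: go1.15.3
        path    k8s.io/kubernetes/cmd/kubectl
        dep     github.com/spf13/cobra  v1.0.0
        ...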

The other hurdle is the cadence of when kubernetes decides to support which container runtime. Docker is wonky at best, but I haven't seen a lot of details on cri-o or containerd.

So frankly our problems are not really related to packaging, but the challenges of providing recent versions of packages and making sure they work.


Also-- I'm willing to bet there are similar charges somewhere in a Debian discussion list wrt Virtualbox and its packaging. And I'm willing to bet that there's currently a virtualbox package in Arch and little to no discussion about how "unstable" or whatever virtualbox is.



