Hacker News: jcgl's comments

Here’s his original blog post from 2016: https://signal.org/blog/the-ecosystem-is-moving/

I go back and re-read this pretty much every time a decentralized thing has problems. It rings true.


> Try installing kde and gnome, then uninstalling both and check how many packages remain.

Sounds like a package manager or packaging problem. Competent package managers (e.g. dnf) remove unneeded packages and all the files they own. With apt, though, you have to run a manual apt autoremove to clean up orphaned packages.

Not suggesting that they're equivalently powerful compared to nix, but this specific thing shouldn't be a problem with traditional package managers.


A common issue with most package managers: if you have A installed, and then you install B which depends on C, and C happens to also be an optional dependency of A, then uninstalling B will not uninstall C, since C isn't orphaned (A still weakly references it).
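To make that concrete, here's a toy mark-and-sweep over a dependency graph. This is a simplified model for illustration, not any real package manager's algorithm; all the class and package names are made up:

```python
# Toy model of package garbage collection with optional ("weak") dependencies.
# Hypothetical sketch; real package managers (dnf, apt, pacman) differ in detail.

class Repo:
    def __init__(self):
        self.installed = set()   # everything currently on disk
        self.explicit = set()    # user-requested packages
        self.hard_deps = {}      # pkg -> set of required deps
        self.weak_deps = {}      # pkg -> set of optional deps

    def install(self, pkg, hard=(), weak=()):
        self.installed.add(pkg)
        self.hard_deps[pkg] = set(hard)
        self.weak_deps[pkg] = set(weak)
        for d in hard:           # hard deps get pulled in automatically
            self.installed.add(d)

    def autoremove(self):
        # Keep everything reachable from explicit packages via hard OR weak
        # deps -- the weak edge is exactly what keeps C alive below.
        keep, stack = set(), list(self.explicit)
        while stack:
            p = stack.pop()
            if p in keep or p not in self.installed:
                continue
            keep.add(p)
            stack.extend(self.hard_deps.get(p, set()) | self.weak_deps.get(p, set()))
        self.installed &= keep

repo = Repo()
repo.explicit = {"A", "B"}
repo.install("A", weak=("C",))   # A optionally uses C
repo.install("B", hard=("C",))   # B requires C -> C gets installed
repo.explicit.discard("B")       # user uninstalls B...
repo.installed.discard("B")
repo.autoremove()
print(sorted(repo.installed))    # C survives because A weakly references it
```

Running this leaves both A and C installed: the sweep can't tell "C was only ever here for B" from "the user wants A's optional feature."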

That's interesting; I'm surprised. Some cursory web searching didn't turn up a solution for this, at least for DNF. Funky! Seems like there should be a way to deal with it.

Those additional programs can be freely chosen by distros and/or users. So each of them has to stand on its own merits. Though of course they get some built-in credibility by coming from the systemd project. But for the most part, I think the systemd project just tends to ship competitive offerings with nice interfaces.

I'm annoyed at it replacing resolvconf. At reboot. At date. At logging. At cron. At ntpd. At network configuration scripts.

Some of these I'm sure make life easier for maintainers. Others just feel like change for the sake of change. Breaking workflows because someone wanted to design a better wheel.


Other than logging (journald is one of the few truly core systemd components), these are all basically independent programs chosen by your distro. As such, each is best evaluated independently. Let's give it an honest shot:

- resolv.conf: systemd-resolved is not unique in providing a stub resolver rather than just NSS functionality (it's been years, but isn't unbound often used the same way?). And if you want systemd-resolved without its stub in resolv.conf, you're free to do that: remove the /etc/resolv.conf symlink and replace it with a plain file containing whatever you choose.

- cron: systemd timers provide an alternative to cron. You're still free to create cron jobs and use cronie (or whatever traditional cron implementation you like).

- ntpd: leaving aside that most distros (I think?) nowadays use chrony rather than ntpd or systemd-timesyncd, you're likely free to switch to chrony or ntpd depending on your distro. Afaik this isn't a daemon with deep system integration, so you should be able to swap implementations without much issue.

- network configuration scripts: What're you comparing systemd-networkd to? NetworkManager? Debian's ifupdown scripts? RH-family's network-scripts? In any case, network management systems tend to be pretty pluggable (much like in the case of your cron daemon). You can even have them live side-by-side, managing different interfaces, e.g. have NetworkManager do WLAN, while systemd-networkd does Ethernet interfaces.
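That side-by-side split is mostly just two config files. A hedged sketch (the interface name and exact paths are assumptions; check ip link for yours):

```ini
# /etc/systemd/network/20-wired.network -- systemd-networkd manages Ethernet
# ("enp3s0" is a placeholder interface name)
[Match]
Name=enp3s0

[Network]
DHCP=yes

# /etc/NetworkManager/conf.d/unmanaged.conf -- tell NetworkManager to leave
# that interface alone and keep handling WLAN
[keyfile]
unmanaged-devices=interface-name:enp3s0
```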

I don't know any of the story behind timedatectl, so I'll avoid opining on that one.

But generally, it really seems like each of these components is as pluggable and freely-choosable by a distro as one could reasonably hope for. And, like you acknowledge, they end up likely getting chosen because it's easier for distro maintainers. Which is kind of a big deal, imo. But if you don't like your distro's choice, it makes sense to complain to your distro.

In general, if you're suggesting that these new-ish components (most of which are no longer very new) were just made for the hell of it, I'd encourage you to look a little deeper into what they offer compared to the incumbents. For starters, they generally work together pretty cohesively; e.g. systemd-networkd and systemd-resolved do some mutual coordination that's pretty nice. Systemd timers have numerous nice properties compared to cron. Etc.
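One concrete example of those nice properties: a timer with Persistent= catches up on runs missed while the machine was off, which classic cron needs anacron for. A hedged sketch of a user-level unit pair; the names and script path here are made up:

```ini
# ~/.config/systemd/user/backup.service (hypothetical)
[Unit]
Description=Nightly backup (example)

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# ~/.config/systemd/user/backup.timer
[Timer]
OnCalendar=daily
Persistent=true   ; run immediately at boot if the scheduled time was missed

[Install]
WantedBy=timers.target
```

Enable it with systemctl --user enable --now backup.timer; systemctl --user list-timers shows the schedule and last/next run.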

Again, you (or your distro) are free to take or leave these components, since they can be picked on their own. But an analysis of "these new components from the systemd project 1) are forced on me, 2) exist primarily for the sake of change" seems both incorrect and uncharitable.


What makes ss different?

In any case, interesting to think of shared libraries (specifically shared libc) as a risk here. Makes sense, but I hadn't thought about it before.

That said, I'm having a hard time constructing a threat model where you worry about an attacker setting LD_PRELOAD but not modifying PATH. The latter is more general and can screw you with all programs (it doesn't cover shell builtins, but covering those would just be one more step).
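For illustration, a tiny sketch of the PATH version of the attack: a lookalike binary earlier in PATH shadows the real one. Everything here is staged in temp directories with a fake "ss"; nothing touches the real tool:

```python
# Demonstrate PATH shadowing: whichever directory appears first in PATH wins.
import os
import shutil
import stat
import tempfile

realdir = tempfile.mkdtemp()
evildir = tempfile.mkdtemp()

# Create two executable scripts with the same name in different directories.
for d, banner in [(realdir, "real ss"), (evildir, "trojaned ss")]:
    path = os.path.join(d, "ss")
    with open(path, "w") as f:
        f.write(f"#!/bin/sh\necho {banner}\n")
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)

# The "attacker" prepends their directory to PATH:
env_path = os.pathsep.join([evildir, realdir])
found = shutil.which("ss", path=env_path)
print(found == os.path.join(evildir, "ss"))  # the trojaned copy shadows the real one
```

The same resolution order applies to every external command the victim runs, which is why PATH is the more general lever.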


ss obtains connection information via netlink directly from the kernel (besides parsing /proc):

https://manpages.debian.org/bookworm/manpages/sock_diag.7.en...

https://github.com/vishvananda/netlink/blob/main/inet_diag.g...

Not many rootkits tamper with the netlink channel, so in most cases it's a bit more reliable.
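For contrast, here's a minimal sketch of the /proc side that netstat-style tools rely on, with a sample /proc/net/tcp line embedded so it runs anywhere (the line is abbreviated, not a full real entry). A preloaded libc can lie to exactly this kind of parser by hooking the read path, whereas a sock_diag netlink request is answered by the kernel directly:

```python
# How /proc-based tools decode /proc/net/tcp entries: addresses are
# little-endian hex IPv4, ports are big-endian hex, state is a hex code.
import socket
import struct

sample = " 0: 0100007F:1F90 00000000:0000 0A ..."  # 127.0.0.1:8080, LISTEN

def decode(addr_port):
    addr_hex, port_hex = addr_port.split(":")
    # IPv4 addresses in /proc/net/tcp are little-endian hex words
    ip = socket.inet_ntoa(struct.pack("<I", int(addr_hex, 16)))
    return ip, int(port_hex, 16)

fields = sample.split()
local = decode(fields[1])                         # local address:port
state = {"0A": "LISTEN"}.get(fields[3], fields[3])
print(local, state)  # ('127.0.0.1', 8080) LISTEN
```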


Nowadays, there's only one rootkit that can hide itself so perfectly: the Singularity rootkit. It also hides from auditd by using netlink_unicast hooking and other evasive functionalities. Analyzing a machine compromised with Singularity loaded is a real headache, since it prevents memory dumps for analysis.

https://github.com/MatheuZSecurity/Singularity


Okay yeah, sure. So it's not intrinsically more reliable or anything, it's just not specifically vulnerable to LD_PRELOAD. And it's not clear to me why LD_PRELOAD would be a particularly interesting attack vector, but maybe that's just my ignorance.

How are TUI tools just as accessible as the terminal? Take a visually simple program like neomutt or vim. How does a vision-impaired user understand the TUI's layout? E.g. splits and the statusbar in vim, or the q:Quit d:Del... labels at the top of neomutt. It seems to me that, because a TUI only provides the abstraction of raw glyphs, any accessibility is built on hopes and dreams. More complicated TUIs like htop or glances seem like they would be utterly hopeless.

When it comes to GUIs, you have a higher level of abstraction than grid-of-glyphs. By using a GUI toolkit with these abstractions, you can get accessibility (relatively) for free.

Open to having my mind changed though.


Completely agreed. Not sure what the historical reasons for lsof and ss are, but unix tools are structurally in a hard place when it comes to having sensible defaults over the long term.

Generally speaking, you can only have sensible defaults over time if you're able to change the defaults over time. New users and new use-cases come with time, and so what constitutes a "sensible default" changes.

However (and this is a drum I like to bang[0]), because unix tools deal only in usually-text bytestreams without any higher level of abstraction, consumers of those tools end up tightly coupled to how output is presented. Without any separation between data and its representation, the (default) representation is the tool's API. To change the default representation is to make a backwards-incompatible API change. A good example is how ps aux truncates usernames longer than about 7 characters (replacing the tail with a +).

[0] https://www.cgl.sh/blog/posts/sh.html
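To make the coupling concrete, here's a toy parse of a ps-aux-shaped line (the sample line is made up, not real ps output): consumers split columns on whitespace, so the parse rule is welded to the column layout, and changing the layout breaks every consumer.

```python
# Why the default representation becomes the API: column-splitting consumers
# depend on the exact layout of ps output.
sample = "root       123  0.0  0.1  1234  567 ?  Ss  10:01  0:00 /usr/bin/foo --flag one two"

cols = sample.split()
pid, command = cols[1], cols[10]
print(command)  # only "/usr/bin/foo" -- the arguments were split away

# The rule consumers actually need: COMMAND is "everything after column 10",
# i.e. the parser encodes the column layout itself.
command_full = sample.split(None, 10)[10]
print(command_full)  # "/usr/bin/foo --flag one two"
```

If ps ever reordered or widened a column by default, every script built on splits like these would silently misparse; hence defaults ossify.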


Hah yes, I've come to unashamedly - by muscle memory since the 1990's - find myself always typing 'ps auxw[w...]', where [w...] is some arbitrary number of w's depending on how heavy my index finger feels at the moment of typing.

> change the defaults over time

however this breaks backward compatibility, as you noted. in the golden age of unix it was critical to maintain backward compatibility so that local tooling didn't magically break.

HP-UX seems to have an env var UNIX95 that affects XPG4 compliance in operation/output. Solaris always had a /usr/xpg[46] path (and /usr/ucb). GNU tools have POSIXLY_CORRECT. and so on.

I never liked using any of those, because then you're on some other system, or in a break-glass situation, and none of the tooling works as you expect. In today's world of a near-monoculture of Linux, it's fine I guess. And there's no reason today that complex commands like ss shouldn't be controllable via env var.

love your blog, thanks for the link.


> love your blog, thanks for the link.

Thank you!

Controlling configuration via env var is a good historical example. I think that works especially nicely when you Buy An Operating System. You know, one that is created and provided by A Vendor. In principle, the vendor can architect a unified metaconfiguration system, e.g. one or several env vars that align behavior to a standard.

But I dunno if that tactic would work so well applied to a modern bazaar-based OS like Linux. Distros do amazing, valuable work to unify things, but modern Linux is basically a zillion software packages in a trench coat. So either the distro carries a zillion patches to have a few env vars, or the distro carries no patches and there are a zillion env vars. Either way, total cost of maintenance explodes.

Maybe when people say "text is the universal interface," they really mean that once you've released a textual interface, the interface becomes universal, unchanging for all time.


What network effects? Like a sibling comment already pointed out, privacy addresses come standard on all consumer OSes.

What do you mean? Apps for iOS and macOS have had perfect v6 support for a long time because of this. Linux has unified address families for netfilter and internet sockets that abstract the details. Various programming languages have perfectly fleshed out standard library data structures and functions, etc etc.

Those naughty incoming packets can hit your private devices even with NAT, absent a stateful firewall. The details depend on how your NAT actually implements the translation, but it's perfectly possible for $randomHighPort to send all its incoming traffic straight to some device. Said another way, a NAT is not guaranteed to match entries on the full layer-4 4-tuple.
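A toy model of that endpoint-independent ("full cone") behavior; all addresses and names here are made up for illustration, and a symmetric NAT or a stateful firewall would additionally match on the remote endpoint:

```python
# Toy full-cone NAT: the translation table is keyed on the external port
# alone, not the full 4-tuple, so ANY remote host can reach the internal
# device once a mapping exists.

class FullConeNAT:
    def __init__(self):
        self.table = {}       # external_port -> (internal_ip, internal_port)
        self.next_port = 40000

    def outbound(self, src, dst):
        # Allocate (or reuse) one external port per internal endpoint,
        # regardless of destination.
        for ext, internal in self.table.items():
            if internal == src:
                return ext
        ext, self.next_port = self.next_port, self.next_port + 1
        self.table[ext] = src
        return ext

    def inbound(self, remote, ext_port):
        # No check of who `remote` is -- the naughty packet sails through.
        return self.table.get(ext_port)

nat = FullConeNAT()
ext = nat.outbound(("192.168.1.10", 5000), ("203.0.113.5", 443))  # talk to a server
attacker = ("198.51.100.9", 31337)                                # someone else entirely
print(nat.inbound(attacker, ext))  # ('192.168.1.10', 5000) -- reached anyway
```

A 4-tuple-matching (symmetric) NAT would return nothing here, because the attacker's address/port never appeared in an outbound flow.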

Providers can do v6-only in their core while still providing public v4 to users. SIIT if they can still afford a public IP per customer, and MAP-T if they can’t.

Misspoke: more like a CLAT thing/464XLAT, rather than SIIT, I think
