I don't like the Unix filesystem structure in general. What's the point of having directories like /usr or /lib in the root directory, when they could all be under, for example, /ubuntu24? And the user could keep their files in the root directory instead of in /home, next to a lot of system files.
Also, I don't like that some distributions suggest partitioning a drive. This is inconvenient, because you can run out of space on one partition while having a lot of free space on another. It simply doesn't make sense. And if you have swap as a partition, you get slightly faster access, but you cannot change its size!
> you can run out of space at one partition, but have lot of free space at another
That's exactly the point: you can run out of space in your /home without it affecting, for example, /var. Or vice versa: a log explosion in /var is contained within its own partition and does not clog the entire filesystem.
Very important for /var/log: it's pretty easy for a log-spamming app to fill the drive, and you don't want runaway logs pushing your database into an out-of-disk-space state.
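Concretely, the containment comes from mount points. A sketch of an /etc/fstab with /var and /home split out (the device names here are placeholders, not anything from a real system):

```
# Placeholder devices; the point is that each mount has its own space budget.
/dev/sda2  /      ext4  defaults  0 1
/dev/sda3  /var   ext4  defaults  0 2
/dev/sda4  /home  ext4  defaults  0 2
```

If a log spammer fills the /var device, `df` shows 100% usage on that one mount while / and /home still have their free space.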
There are a lot of reasons. Just three from the top of my head:
1. The way Unix works, a directory is a file: if you can write to a directory, you can also move the directories inside it around (and thus completely break the structure you mentioned).
2. Doesn't make sense for multi-user. Yes, I understand most people have their own computers, but (1) why design it in a way that breaks multi-user unnecessarily? (2) there are a lot of utility users, and having them get access to user files because of the way this is structured is silly.
3. `grep -r` is going to be a pain in the ass when searching your own files, because it'll search all the system subdirectories too.
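To make point 3 concrete, a quick sketch (the layout under /tmp is made up for illustration):

```shell
# Simulate a flat layout where user files sit next to system dirs.
mkdir -p /tmp/flatroot/usr/share/doc /tmp/flatroot/mydocs
echo "TODO: finish draft" > /tmp/flatroot/mydocs/notes.txt
echo "system documentation" > /tmp/flatroot/usr/share/doc/README

# A recursive grep for your own note now has to crawl usr/ as well:
grep -r "TODO" /tmp/flatroot
```

With a separate /home you'd just `grep -r TODO ~` and never touch the system tree.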
It’s just historical. I believe the large number of top-level directories was a result of Ken not having enough space on a single disk on his PDP, back when disk space was precious.
For years I’ve been putting all user data into a separate /data partition and keeping the OS partition small (~30 GB). But you have to set this up when the system is first installed. When I still used Windows I had the same c:/d: split.
More recently I’ve started putting kernels into a bigger ESP (EFI) partition with sd-boot or UKIs.
With terabyte system disks, running out of space mostly doesn’t happen anymore unless you made the system partition(s) small. Don’t do that; give them plenty of GB, each of which is now only a thousandth of the disk.
Not at all. Having your data on a separate partition makes system upgrades and backups a breeze, the opposite of what you said. Only one percent of my disk is currently "wasted," and I can live with that.
I’m honestly having issues deciding if this is bait or not. Surely you understand that UNIX is a multi-user operating system and that partitioning drives exactly for the reason you describe is critical to ensure that, for example, runaway log growth doesn’t cause a database to shut down?
Today, in 2025, neither is a safe assumption to make. Much in line with the Internet memes of "new college freshmen in 2025 have never known a world without cell phones" and the like, there is now some rather large subset of the computer-using population who have never known of nor used a "multi-user computer" and have only ever seen and used "single-user computers" (even if the OS on their computer is inherently multi-user, the overall 'computer' is 'single-user' from their viewpoint).
And if they have never seen nor used a "multi-user computer," they also have not encountered "runaway log growth" or the like -- or if they did, it was from their own process that they immediately killed, not from some other user on the same computer filling /var/log/ in the background.
AI startup idea: A plugin that scores HN posts on likelihood of bait. ChatGPT when prompted "Give [the post] a score from 1 to 10, where 1 is complete sincerity and 10 is low effort bait" thinks this is 7.
Logs should be limited by size. One could also use filesystem quotas. Also, what if some other application, like the npm cache, uses up the space a database needs? Do you suggest allocating a partition for every program?
Also, databases usually store their data in /var, so it won't even help. And MySQL simply hangs instead of shutting down in this case.
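For reference, capping logs by size doesn't need a dedicated partition; a logrotate drop-in along these lines does it ("myapp" is a hypothetical service name):

```
# /etc/logrotate.d/myapp  ("myapp" is a made-up example)
# Rotate once a log exceeds 100 MB, keep at most 5 compressed copies.
/var/log/myapp/*.log {
    size 100M
    rotate 5
    compress
    missingok
}
```

Worst case the app's logs occupy ~600 MB, no matter how hard it spams.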