Kernel Hardening: Protect Linux user accounts against brute force attacks (github.com/kicksecure)
153 points by CHEF-KOCH on March 10, 2024 | hide | past | favorite | 104 comments


> Kexec is disabled as it can be used to load a malicious kernel and gain arbitrary code execution in kernel mode.

Only root can kexec anyway. "Hardening" guidance that suggests limiting the root user feels like an attempt to boil the frog of widespread Linux DRM.


The intention is for it to be accessible by root only. But if you get a situation where either that check fails, or you can convince some root process to kexec without further checks, you have a problem. If you know you're not using kexec, you can get rid of that whole issue.


Disabling kexec would prevent unauthorized operation in the event of a zero-day root compromise. It would require a reboot to re-enable, which should generate a security event in a SIEM or central monitoring for unauthorized change in operating state.
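For reference, mainline exposes exactly this as a one-way sysctl: once `kernel.kexec_load_disabled` is set to 1 it cannot be cleared without a reboot. A minimal sketch of staging the drop-in (the file name is illustrative; installing it under /etc/sysctl.d/ requires root):

```shell
# Stage a sysctl drop-in that disables kexec loading.
# kernel.kexec_load_disabled is a one-way switch: once set to 1
# it stays set until the next reboot.
cat > 99-disable-kexec.conf <<'EOF'
kernel.kexec_load_disabled = 1
EOF
cat 99-disable-kexec.conf
```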


How would it require a reboot to re-enable? The root user can write directly to kernel memory by a number of means, patching the kernel live to re-enable kexec.

Closing those holes requires using a pretty extensive MAC policy restricting root in all sorts of ways.


Closing all means for root to access kernel memory is part of hardening. Modern mainstream distros don’t expose /dev/kmem at all, /dev/mem can be limited to less dangerous pages like MMIO, /proc/kcore can be disabled, etc. It goes beyond MAC policies: some functionality is just omitted from the kernel via compile-time config.
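The compile-time knobs alluded to here, as named in mainline Kconfig (a sketch, not an exhaustive hardening config):

```
CONFIG_STRICT_DEVMEM=y     # /dev/mem limited to MMIO and a few low pages, not RAM
CONFIG_IO_STRICT_DEVMEM=y  # also block MMIO regions claimed by active drivers
# CONFIG_PROC_KCORE is not set    -> no /proc/kcore view of kernel memory
# CONFIG_DEVKMEM: /dev/kmem was removed from mainline entirely in 5.13
```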


But... root can load kernel modules, and kernel modules can write to arbitrary kernel memory, no?


Typically 'hardened' builds disable loading kernel modules shortly after boot. Compromising root at some later point doesn't guarantee the ability to load additional kernel modules.
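This too is a one-way sysctl in mainline: `kernel.modules_disabled`. It is usually flipped late in boot (e.g. from a oneshot unit) after everything needed has loaded, since it cannot be undone without a reboot. A staging sketch (file name illustrative):

```shell
# Stage the drop-in; apply it only after boot-time module loading is
# finished, e.g. from a late oneshot service, because
# kernel.modules_disabled=1 cannot be reverted until reboot.
cat > 99-disable-modules.conf <<'EOF'
kernel.modules_disabled = 1
EOF
cat 99-disable-modules.conf
```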


You usually can't run unsigned kernel modules if you have EFI secure boot turned on, depending on the way your distribution's kernel is configured.


Right but then we're back to the "it's not my computer any more" problem. Disallowing me from running my own ring 0 code on my own computer isn't an acceptable trade-off.


You can opt in or opt out as you wish. You get to decide whether the added security of only allowing signed kernel modules is the right trade-off for you. Am I missing something here?


That's what everyone said about TPM and Secure Boot, and now you can't, e.g., play Valorant without enabling both.


In cases where you can genuinely just flip a switch and get back proper access to the machine I don't mind it that much, but these "we must restrict the user for 'security' reasons" things usually don't stay like that forever.


But this is Linux so you can use a different distribution if your current distro is daft like that.

Or do you mean that certain programs (like one mentioned by sibling) require the switch to be on?


Yeah, that's one of the ways these "optional" "security measures" often become non-optional. Android is a prime example of a system where this has already happened: sure, you can root your device (if your manufacturer allows you to), but then much of the software you need refuses to run.

Another way is simply for the manufacturer to lock down the bootloader/BIOS and not let you disable "secure" boot, as is also common in the Android world.


If it’s me who disallowed “future me” (or someone pretending to be future me because my account was compromised) loading unsigned modules, I don’t see it as a problem.


True, it's assumed you'd also be running SELinux with confined users as well. So far the only players that care to publish contexts for the OS are doing so as part of DoD security postures, for security-relevant objects in a STIG.

Disabling kexec still feels relevant, though, as part of a defense-in-depth approach to stop threat actors that aren't state-sponsored (e.g., Metasploit folk).

Other tools like AIDE may help detect the event as well. fapolicyd could also help this effort.


This sounds like it’s doing something similar to System Integrity Protection on macOS and iOS, where the security model is “root cannot write to kernel memory”. Because otherwise all you have to do is get root access to control a system, which is pretty easy given that getting root generally just means tricking the user into entering their password (because they’re generally in the sudoers group), or spying on other user processes (like xterm or whatever). Gaining root on a normal user machine is not hard if that’s your goal, and given a security model that depends entirely on not getting root, it’s the obvious first step.


This reminds me of https://xkcd.com/1200/ - what would someone who gets into a normal user machine like that care about kernel access for? This type of "protection" feels like it only hurts legitimate users without even slightly inconveniencing actual bad guys.


It's to prevent the user from dumping the DRM keys.


Even on a single-user system, if an attacker can't elevate to root then at least you don't have to reinstall the whole computer from scratch. You can just wipe the home directory, log in to a fresh account, and start putting your digital life back together.

If they get root, you can't trust anything on the system any more. You have to do 10x the work just to get to the point of having a system you can even start to use. (Do you have a bootable install ISO handy anywhere? Is it current?)


Not to argue, but the reinstalling part isn’t so difficult these days.

I don’t have a current bootable ISO, but I have a bunch of old 8–16 GB USB drives just for that. Also, there is Ventoy, which makes it even easier. So making an installer is a trivial task these days. For me, it’s either downloading the Arch Linux media, which is 400 MB or something, or downloading a Fedora installer, which is 3–4 GB and takes minutes. If I go the Fedora way (which I would, if I need a working computer right away), it’ll take up to an hour to get back to work.

I understand that for an average user that could be a serious issue, but if that user is managed by someone competent, it’s really a non-issue. The only thing that needs to be done is a proper backup system for that user’s data.

Recently, I had a case with a distant friend whom I help with his computer. He needs just ‘Word’ and a browser, so he uses Debian and LibreOffice. I hadn’t visited him for years, so his Debian was 10 or even 9, while the current one is 12. He wanted to install something, and as the system was very obsolete, I insisted on updating it first. I did it the proper way (from version to version), but at some point the upgrade just broke for no reason. I tried to fix it, but even the help of ChatGPT didn’t let me do the job quickly. So I just nuked everything on his SSD (his data was on a separate HDD), went all-in on Fedora, and spent the rest of the visit talking and drinking tea with him. I was surprised how easy the whole reinstallation was.

I’m talking Linux, which is very easy when you know what you’re doing. But macOS or Windows aren’t that difficult these days either, when all you need to do is reinstall. Again, assuming the data is saved separately. Preferably beforehand.


How do you accurately reinstall Arch to the same operational point as before? I’ve wanted to reinstall but it feels like a large long tail to reinstall all the same applications and fix up all the config files to the right state that I want things (e.g. is there some way to list all the packages installed & feed it back during the installer?).


Yes, there is a way of restoring your packages.

Arch has some decent wiki entries for this:

- https://wiki.archlinux.org/title/migrate_installation_to_new...

- https://wiki.archlinux.org/title/Install_Arch_Linux_from_exi...

Although, my way of working is quite different. I keep a lot of notes about what I do. And when I need to reinstall a system (e.g. on a new laptop), I reconsider what I need. My Arch systems are very minimalistic; I use Arch on my laptops and servers.

I use Fedora on my desktops, so it’s different from Arch. Usually, I don’t mess with Fedora, and treat it as I treat macOS. My way of working with Fedora or macOS is just to note what I need to change, in plain text with screenshots (when needed). As most of the changes are either in Settings app or just a couple of terminal strings.

Also, I keep all my custom configs in a git repository. Usually, that system is called dotfiles.
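For the specific question upthread (listing all installed packages and feeding the list back in), pacman can do this directly; a sketch:

```shell
# On the old system: dump the names of explicitly installed packages.
pacman -Qqe > pkglist.txt
# On the fresh install: replay the list (- reads package names from stdin).
pacman -S --needed - < pkglist.txt
```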


Why not setup your system the way you like it and take regular Clonezilla snapshots, so you can just restore from a known good ISO with all your programs and settings?


Personally, I don’t know. Although, a friend of mine does exactly that with his Windows install. The first time he described this to me was like 15 years ago! I never went that way, maybe because I don’t need the exact same system most of the times. When I reinstall, that happens voluntarily these days. And I just reconsider what I need.

But overall that’s a great idea.


You do make some good points, but...

> For me, it’s either downloading Arch Linux media, which is 400 MB or something.

...that's fine, if you have a separate non-compromised computer handy to download the installer from. Which not everyone will.

...and, is the only software you use included in that default 400 MB base system? How much else do you have to install? Did you keep a list of the extra packages you used, or are you going to have to keep going back to `pacman -S ...` every 5 minutes for the first couple of hours as you run into that thing you use frequently suddenly not being there?


Software installation is trivial on Linux, and periodic reinstallations aren't a bad idea anyway - that seems extremely preferable to, as GP said, "boiling the frog of Linux DRM". These seem like weird justifications IMO.


Some people have NixOS configs with all their settings and generate a live ISO from those configs. Then it's just a matter of booting that ISO and you have a clean system with your configs. Add some impermanence setup and you get a clear separation between irreplaceable user and OS data on one side and replaceable data on the other. Regularly back up the irreplaceable data, boot the ISO, mount that data (or an assumed-uncompromised version of it), and you effectively have your system back.


> If they get root, you can't trust anything on the system any more.

... and that includes the firmware of all components involved.


The existence of the root user is bad security. Any distro serious about security should disable it and strip it of as much power as possible. It is such a juicy target for attackers due to how much power it has, and most distros protect it poorly by having programs such as sudo installed, allowing an attacker to easily escalate their privileges.


Nah. The root user's actual capabilities can be limited in a number of ways.

On RHEL this is done by default via SELinux.

If you define a policy that prevents anyone from doing kexec, and enable the boolean that prevents root from disabling SELinux enforcing mode, there you go: done, with tools that have been available for 15-20 years.
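Concretely (on an SELinux system, as root), the boolean in question is `secure_mode_policyload`; once enabled it cannot be turned back off without a reboot:

```shell
# Forbid loading new policy or leaving enforcing mode until next reboot.
setsebool -P secure_mode_policyload on
# Verify:
getsebool secure_mode_policyload
```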


Has SELinux become configurable on the command-line with well documented tools? Does it still occasionally decide to "relabel files" for 10 hours on a random reboot?

It's a great mechanism in theory, but the practice leaves much to be desired.


Yes and no. But the thing is... SELinux does complex stuff. You can simplify that, but only to a certain point.

If you dumb it down too much it becomes useless. Simple tools usually get their simplicity by making some choices for you. You don't always want that.

SELinux is one of those things where you don't want that.

That being said, without diving in too deep: I have configured many custom services to run under SELinux by just running them in a dev/staging environment, collecting violations, and creating policies via the audit2allow tool. So yeah, things have improved, I guess.
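That workflow, roughly (as root on the dev/staging box, after exercising the service; the module name is illustrative):

```shell
# Turn the collected AVC denials into a local policy module...
ausearch -m avc -ts boot | audit2allow -M myservice
# ...and install it. Review myservice.te before doing this for real.
semodule -i myservice.pp
```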


In my experience, SELinux is the sort of thing you avoid unless you're forced to use it, then when you are forced to use it, you're encouraged to stumble around until you get it out of your way, then forget about it until it gets in your way again.

Good documentation and good tools would encourage more people to use it and understand it willingly and effectively.


It's a great mechanism in practice, it just has a bad rep because too many were scared by the complexity.


The complexity is a self-inflicted problem. This is a rake that many computer security people refuse to stop stepping on. If it's not usable then it doesn't work.

FWIW it does look like Gentoo stepped up and filled in the gaps since I last looked. Their Wiki has good docs starting from https://wiki.gentoo.org/wiki/SELinux.


Can SELinux be configured to prevent root from loading a kernel module?


yes.


Well TIL. I had no idea SELinux was so user hostile.


Yes, it is hostile to malicious users, that’s by design and it’s a feature.

That being said, it can be configured that way but (of course, duh) it’s not the default setting.

I’ll stop replying here because this branch of the discussion doesn’t seem to be constructive on your side (or even conducted in good faith, to be honest).


The problem is, there's a very, very fine line between restricting what users can do on their system for their own benefit and restricting what they can do for your benefit as the manufacturer/whatever. That's what makes the idea of restricting the root user so uncomfortable.


Now you’re being dishonest and posting actual bs.

There is no such line. Everything is foss software, you can change any policy.

You can still do everything you want.


This is literally what people said about Android, though, until it wasn't true anymore.

We're fundamentally talking about building up pieces in a "chain of trust" which allows the entity in charge of the UEFI to decide what can and can't be done with the system. That's a scary thing.


How do you propose that the owner of a system should be able to administer it?


The distro maintainers should make sure the operating system works well on its own. For what needs to be customized there can be a settings application and individual applications can expose ways for themselves to be customized.


I don’t think anybody would use or develop a system like that. It would put too much responsibility in the hands of the (volunteer) maintainers. Trying to guess what use case everybody will have is a horrible tooth-pulling and bikeshedding exercise that nobody would do for fun.

That said there’s nothing stopping you from trying it. It just seems unlikely to succeed.


It's no more responsibility than what already exists. Telling end users to go and fix the operating system themselves was never acceptable to do.

>Trying to guess what use-case everybody will have is a horrible tooth-pulling and bikesheding exercise that nobody would do for fun.

You seem to be misunderstanding what I am saying. People could still install new applications, or uninstall preexisting applications just like how it works now.


I guess I don’t know what you are suggesting


You want distro maintainers to have final control instead of hardware owners? Isn't that arguing for the iOS-ification of Linux?


I want distros to make an operating system that actually works and doesn't need to be fiddled with. Hardware owners shouldn't have to be the ones fixing the operating system. Most people just want their computer to work and don't want to learn about the internals of how it works. If a user really cares about changing the internals he can make his own operating system.

>Isn't that arguing for the iOS-ification of Linux?

Linux distros could learn a thing or two (or 100) from how iOS handles security.


> I want distros to make an operating system that actually works and doesn't need to be fiddled with.

Doesn't need to be fiddled with? Sure. Can't be fiddled with? No way.

> Hardware owners shouldn't have to be the ones fixing the operating system.

Ditto. Owners shouldn't have to fix it, but they should be able to.

> Most people just want their computer to work and don't want to learn about the internals of how it works.

Most people, sure. So don't make them learn. But don't prevent people from learning if they want to.

> If a user really cares about changing the internals he can make his own operating system.

Wouldn't this mean that incremental improvements wouldn't be possible anymore?

> Linux distros could learn a thing or two (or 100) from how iOS handles security.

How so specifically?


>Wouldn't this mean that incremental improvements wouldn't be possible anymore?

One can make a fork of the distro and then make the change you want to incrementally improve it.

>How so specifically?

Reading the documentation and reading information about its internals.


Why though?

How is that better than what we have now, where a system can be easily modified in-place without every distro needing dozens of forks for each slight preference change?

What exactly is it you're advocating for?


>Why though?

Because most Linux distros have terrible security.

>How is that better than what we have now, where a system can be easily modified in-place without every distro needing dozens of forks for each slight preference change?

Preferences can be done via configuration. Most people do not actually want or need to modify the operating system.

>What exactly is it you're advocating for?

The root user to be effectively removed.


> Linux distros could learn a thing or two (or 100) from how iOS handles security.

Sorry, no, I don't want my Linux to become like iOS. iOS's security model of "all power to the manufacturer" is broken and doesn't work.


> If a user really cares about changing the internals he can make his own operating system.

Don't we already have that, and it's called... Linux?


iOS handles security by having a completely integrated ecosystem. Linux has to work on a wide range of hardware, basically none of which is under the control of anyone involved in Linux (kernel, GNU, distro, etc). Even Microsoft can't exercise Apple levels of control over the PC platform, and they have 100 times the revenue of the largest Linux distro.


> individual applications can expose ways for themselves to be customized.

They do. That way is exposing files in /etc that the administrator account can edit.


Unfortunately these files in /etc usually can only be edited by root. What could be done is allow a user from the administrator group to edit such files.


A user from the administrator group has all the same power that the root user does.

Or, labeled differently, the root user is the administrator group.


No, for example the root user can read any file on the system. If someone gains access to the root user they shouldn't also gain the ability to read and write to every file on the system.


How should I be able to read any file on the system on a device I own, if not with the root user?


I don't think this makes much sense to have to avoid empowering stealer malware. It would be better to find what the actual user need is and make a proper API to expose that functionality.


How do I install a custom kernel module in your model?


You should install it from the distro's package manager. If it's not cryptographically signed by the distro, you have to fork the operating system to add another key to trust. A custom kernel module has immense security implications, so it shouldn't be easy to do, since an attacker shouldn't be able to trick someone into installing one for free V-Bucks.


Have you ever used anything that uses DKMS? Should you have had to make your own operating system kernel instead?


DKMS already supports signed kernel modules.


It supports them by creating its own private key and signing them on your system. That's inherently incompatible with only trusting modules from your distro vendor.


You are right; only the distro's build servers would be able to use DKMS, as the distro couldn't ship the private key.


That's fundamentally not how DKMS works.


My distro's repositories don't contain my newly written custom kernel module


Most users aren't writing their own custom kernel modules and using them on their current install instead of in QEMU. This is a niche use case that the average user's security shouldn't be compromised for.


It's an important use case that the system shouldn't lock the user out of.


Using something like sudo. Parent is not saying remove root entirely but that it should be super locked down and maybe broken up into different accounts.


I think they are saying to remove root entirely?

If the solution is just "use sudo" then what's the difference compared to today's model?


Instead of sudoing to the root account, you would sudo to the network account that could only do network admin, or sudo to the software-install account that could only install packages, etc.


I think sudo done well could be exactly what you're advocating for: configure all possible commands that a user should be allowed to execute as superuser for ordinary and extra-ordinary maintenance and prohibit everything else (especially sudo -i).
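A sketch of what that looks like in sudoers syntax (the group name and command list are illustrative; install with `visudo -f` so a syntax error can't lock you out):

```shell
# Stage a sudoers fragment granting the netadmin group only specific
# network commands instead of a full root shell.
cat > netadmin.sudoers <<'EOF'
Cmnd_Alias NETCMDS = /usr/sbin/ip, /usr/bin/nmcli
%netadmin ALL=(root) NETCMDS
EOF
cat netadmin.sudoers
```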


I make liberal use of sudo -i on my personal Linux machines, because otherwise I have to prefix sudo to 99% of my commands and be arsed to enter passwords if I dilly dally too long between commands.


If you have to prefix almost all of your commands with sudo, either your usage is highly irregular or your problem is mostly caused by poor file permissions resulting from overuse of sudo.


I don't, as doing that is as bad as suid binaries, except now the binaries aren't even hardened. suid binaries shouldn't exist either.


What would you replace suid binaries with? Just getting rid of suid without replacement would totally break your system.


Normal binaries that talk to services running as other users with the minimal possible permissions. For example, instead of having users run ping as root via suid, you have a ping client talk to a ping service on the system. The ping service runs as a user with the raw sockets capability and nothing else.


How is that an improvement over just giving the ping binary on disk that capability? And how would "su" and "sudo" work in your model?
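For the record, "giving the ping binary on disk that capability" is a one-liner on current systems (as root; the path may vary by distro):

```shell
# File capability instead of suid root: ping gets raw sockets, nothing else.
setcap cap_net_raw+ep /usr/bin/ping
# Inspect the result:
getcap /usr/bin/ping
```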


>How is that an improvement over just giving the ping binary on disk that capability?

That would be okay too, but not all suid usages can be replaced like that.

>And how would "su" and "sudo" work in your model?

They wouldn't exist in my model.


Parts of this seem dated. I'm pretty sure the suggested entropy changes don't really add anything after the recent (~2 years ago) rework of `/dev/random` in the kernel.

It doesn't mention `systemd-homed`, which might be useful in some cases, and the kernel module signing portion is just... wrong or misguided.


The title is actually shortened; the project offers much more. Make sure you read the README.


This list is pretty awesome and the work is impressive.

I wonder what percentage of applications break or have such slow performance that they become unusable, when this set of mitigations is enabled.


I generally dislike such lists because nobody goes through them and analyzes which mitigations actually work and which just seem like they improve security without doing anything at all.


Is there some sort of way to keep a browser profile inaccessible to anything run by the user that isn't the browser, while still having the browser itself be usable by the user?


Yes, you can run Firefox as its own user. I wrote about this about a decade ago: https://www.openwall.com/lists/oss-security/2023/10/24/2
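A minimal sketch of that approach under X11 (the account name is illustrative; depending on configuration you may need to pass DISPLAY/XAUTHORITY through sudo):

```shell
# Create a dedicated account whose home holds the browser profile.
sudo useradd -m browse
# Allow that local account to talk to your X server.
xhost +SI:localuser:browse
# Run the browser as it; your own files stay out of reach.
sudo -u browse firefox
```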


The Gentoo wiki also has some information about this:

https://wiki.gentoo.org/wiki/Simple_sandbox#larry_runs_firef...


QubesOS? I would love to know the answer, but I think it is going to be a long time before we have proper application sandboxing on Linux.

The browser is likely the biggest threat vector running on my machine, yet there are no easy ways to lock it down to just reading/writing Downloads and its own cache.


That's part of the problem, but remember your browser has all your cookies and such - the sites that keep you logged in even when you close the window (like email).

So if anything else can read the browser's data, it can get that.


AppArmor is fairly easy.


I use Firejail. At least the browser cannot see arbitrary files in my home directory, like SSH or GPG keys, my password manager, and everything else. To share files between the browser and the rest of the system, ~/Downloads can be used.

Disclaimer: critics say Firejail is too complicated to be audited and adds too much attack surface of its own.
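For the curious, the zero-effort version (Firejail's stock firefox profile already hides most of $HOME; `--private` goes further with a throwaway home):

```shell
firejail firefox            # stock profile: keys, dotfiles etc. hidden
firejail --private firefox  # throwaway home directory, nothing persists
```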


There are lots of ways to protect your system from a browser exploit via containers, another user, a VM, etc., as brought up by others responding. However, protecting the browser from other applications is basically impossible unless you also sandbox everything else you are doing. Even if you run the browser in a VM, some other process running as your user could just automate clicking the UI to do whatever. If you go Qubes-style and isolate everything from everything, then it is fine.


Protecting the browser from other applications is arguably more important now than protecting applications from the browser, as the browser contains all the security information (cookies, etc.) necessary to log in (or stay logged in) on all your important sites.

People have been hacked by malicious Minecraft mods uploading browser data to nefarious places.


Sure; take your pick: Firejail, flatpak/bubblewrap/bubblejail, docker/podman.


Run your browser on a remote machine? Using say BrowserBox: https://github.com/BrowserBox/BrowserBox

Full disclaimer: my company develops it.


You can run Firefox in a docker container. Not sure how much GPU performance you can squeeze out of it but it's doable.


You can allow access to the GPU for CUDA applications in Docker containers, so I'd be surprised if it wasn't possible to get video acceleration working normally.


I prefer to rely on Qubes OS for reasonable security.


If you are using Qubes, you are likely using Kicksecure: Whonix (used by many Qubes users) is based on Kicksecure.


It's true; however, the main security measure of Qubes-Whonix is not Kicksecure but hardware virtualization, which isolates the Tor Browser VM (anon-whonix) from the VM establishing the Tor connection (sys-whonix). Kicksecure is of secondary importance.



