Another example of a vulnerability that is purposefully obfuscated in the commit log. It is an insane practice that needs to die. The Linux kernel maintainers have been doing this for decades and it's now a standard practice for upstream.
This gives attackers an advantage (they are incentivized to read commits and can easily see the vuln) and defenders a huge disadvantage. Now I have to rush to patch whereas attackers have had this entire time to build their POCs and exploit systems.
I've described how we (the kernel security team) handle this type of thing many times, and even summarized it in the past here:
http://www.kroah.com/log/blog/2018/02/05/linux-kernel-releas...
Scroll down to the section entitled "Security" for the details.
If you wish to disagree with how we handle all of this, wonderful, we will be glad to discuss it on the mailing lists. Just don't try to rehash all the same old arguments again, as that's not going to work at all.
Also, this was fixed in a public kernel last week, what prevented you from updating your kernel already? Did you need more time to test the last release?
Edit: It was fixed in a public release 12 days ago.
I don't know what you don't understand: EVERY single kernel release fixes a few vulnerabilities. If you lazily refuse to update because none of those say "hint: there is a vulnerability here", then you are taking the deliberate action of skipping some security fixes. Greg's announcements always say "all users must upgrade". If there were sometimes a different signal such as "all users must really really really upgrade", then for sure you would simply skip all the other ones, as it already seems like you're waiting for a lot of noise before deciding to apply due fixes, and you would remain vulnerable to plenty of other vulns for much longer.
Here the goal was to make sure that all those who correctly do their job are fixed in time. And they were. Those who blatantly ignore fixes... there's nothing that can be done for them.
I obviously am very comfortable disagreeing with people who work on the kernel or adjacent software. Working in those areas does not at all make them correct, or even informed, especially with regards to security.
It's been working semi-well, and gives us a way to deal with longer embargo times (like months instead of weeks and days), but it does not integrate well into the linux-distro-like way of working just yet, which is an issue that hopefully will be resolved sometime in the future if the linux-distro members wish it to be.
> When doing kernel releases, the Linux kernel community almost never declares specific changes as “security fixes”. This is due to the basic problem of the difficulty in determining if a bugfix is a security fix or not at the time of creation. Also, many bugfixes are only determined to be security related after much time has passed, so to keep users from getting a false sense of security by not taking patches, the kernel community strongly recommends always taking all bugfixes that are released.
> Linus summarized the reasoning behind this behavior in an email to the Linux Kernel mailing list in 2008 ...
Since severity can be a moving target, it seems like there is no straightforward solution. With that said, by hiding the known ones, older distros don't have much of a hope in hell of getting all reported CVE fixes back-ported.
Why isn't there a public index mapping known CVE fixes to git commit IDs? This seems totally doable and would make the world a more secure place overall.
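Mechanically, such an index is easy to build for any commit whose message does name a CVE; the hard part is upstream's policy of not naming them. A minimal sketch of the idea, using a throwaway repo with a made-up commit message (against a real linux-stable clone the same `git log --grep` approach applies, when the CVE ID is actually present):

```shell
#!/bin/sh
# Sketch: derive a "CVE -> commit ID" index from commit messages.
# Throwaway repo for illustration; the commit message below is made up.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=a@b -c user.name=t commit -q --allow-empty \
  -m "pipe: fix buffer flag initialization" \
  -m "Fixes CVE-2022-0847 (arbitrary file overwrite via pipe buffers)."
# For every commit mentioning a CVE ID, print "CVE-id <sha>".
index=$(git -C "$repo" log --grep='CVE-' --format='%H' | while read -r sha; do
  git -C "$repo" log -1 --format='%B' "$sha" \
    | grep -o 'CVE-[0-9]\{4\}-[0-9]\{1,\}' | sort -u \
    | sed "s/$/ $sha/"
done)
echo "$index"
```

The index only works if fix commits mention the CVE, which is exactly what the kernel's commit-log policy avoids.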
> older distros don't have much of a hope in hell of getting all reported CVE fixes back-ported
Older distros have always had a ton of privilege escalation bugs and I don’t think that’s ever gonna change. If you can’t keep everything updated, your machines have to be single-tenant.
What should they do instead? You have to rush to patch in any case. If the maintainers start to label commits with "security patch" the logical step is that it doesn't require immediate action when the label is not there. Never mind that the bug might actually be exploitable but undiscovered by white hats.
If you do not want to rush to patch more than you have to, use a LTS kernel and know that updates matter and should be applied asap regardless of the reason for the patch.
When someone submits a patch for a vulnerability, label the commit with that information.
> You have to rush to patch in any case.
The difference is how much of a head start attackers have. Attackers are incentivized to read commits for obfuscated vulns - asking defenders to do that is just adding one more thing to our plates.
That's a huge difference.
> the logical step is that it doesn't require immediate action when the label is not there.
So I can go about my patch cycle as normal.
> Never mind that the bug might actually be exploitable but undiscovered by white hats.
OK? So? First of all, it's usually really obvious when a bug might be exploitable, or at least it would be if we didn't have commits obfuscating the details. Second, I'm not suggesting that you only apply security labeled patches.
Don't know why your other comment got downvoted. Silently patching bugs has left many LTS kernels vulnerable to old bugs, because they weren't tagged as security fixes. It also leads to other issues: https://grsecurity.net/the_life_of_a_bad_security_fix
I've read the post before, I've seen the talk, and frankly it's been addressed a number of times. It's the same silly nonsense that they've been touting for decades, i.e. "a bug is a bug".
They don’t need to label it security even, just an “upgrade now, upgrade soon, upgrade whenever”.
But they clearly don’t want to, nor care about, making that call (and even more clearly they basically expect everyone to run the latest kernel at all times (and if you run into a bug there, no doubt you’ll be told not to run the latest kernels)).
I think you missed my point. Attackers will go through commits regardless of a "Security Patch" tag.
But going about your normal patch cycle as normal for things not labelled "Security Patch", just means if the patch for some reason should have been tagged but wasn't, you're in the same situation.
I do see the value in your approach, but it just does not change anything for applications where security is top priority.
Well Xen for instance includes a reference to the relevant security advisory; either "This is XSA-nnn" or "This is part of XSA-nnn".
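That convention makes it trivial to recover every commit belonging to an advisory with a plain `git log --grep`. A toy sketch (throwaway repo, made-up commit subjects, and XSA-401 used purely as a placeholder advisory number):

```shell
#!/bin/sh
# Sketch: find all commits that belong to one Xen security advisory,
# relying on the "This is (part of) XSA-nnn" message convention.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
commit() { git -C "$repo" -c user.email=a@b -c user.name=t \
             commit -q --allow-empty "$@"; }
commit -m "xen/arm: harden p2m lookup"      -m "This is part of XSA-401"
commit -m "xen/arm: fix race in p2m update" -m "This is part of XSA-401"
commit -m "unrelated cleanup"
# Only the advisory's commits match:
n=$(git -C "$repo" log --grep='XSA-401' --oneline | wc -l | tr -d ' ')
echo "XSA-401 spans $n commit(s)"
```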
> If the maintainers start to label commits with "security patch" the logical step is that it doesn't require immediate action when the label is not there. Never mind that the bug might actually be exploitable but undiscovered by white hats. If you do not want to rush to patch more than you have to, use a LTS kernel and know that updates matter and should be applied asap regardless of the reason for the patch.
So reading between the lines, there are two general approaches one might take:
1. Take the most recent release, and then only security fixes; perhaps only security fixes which are relevant to you.
2. Take all backported fixes, regardless of whether they're relevant to you.
Both Xen and Linux actually recommend #2: when we issue a security advisory, we recommend people build from the most recent stable tip. That's the combination of patches which has actually gotten the most testing; using something else introduces the risk that there are subtle dependencies between the patches that haven't been identified. Additionally, as you say, there's a risk that some bug has been fixed whose security implications have been missed.
Nonetheless, that approach has its downsides. Every time you change anything, you risk breaking something. In Linux in particular, many patches are chosen for backport by a neural network, without any human intervention whatsoever. Several times I've updated a point release of Linux to discover that some backport actually broke some other feature I was using.
In Xen's case, we give downstreams the information to make the decisions themselves: if companies feel the risk of additional churn is higher than the risk of missing potential fixes, we give them the tools to do so. Linux more or less forces you to take the first approach.
Then again, Linux's development velocity is way higher; from a practical perspective it may not be possible to catch the security angle of enough commits; so forcing downstreams to update may be the only reasonable solution.
Not OP, but please do try to influence this policy if you can:
1. The commit message [1] does not mention any security implication. This is reasonable, because the patch is usually released to the public earlier and it makes sense to do some obfuscation, to deter patch-gappers. But note that this approach is not a controversy-free one.
2. But there is also no security announcement in stable release notes or any similar stuff. I don't know how to provide evidence of "something simply does not exist".
3. Check the timeline in the blog post. The bug being fixed in a stable release (5.16.11 on 2022-02-23) marks the end of upstream's handling of this bug. Max then had to send the bug details to the linux-distros list to kick off (as a separate process) the distro maintainers' response. If what you are maintaining is not a distro, good luck.
#1 is intentional, for better or for worse. It’s certainly well-intentioned too, although the intentions may be based on wrong assumptions.
#2: upstream makes no general effort to identify security bugs as such. Obviously this one was known to be a security bug, but the general policy (see #1) is to avoid announcing it.
#3: In any embargo situation, if you’re not on the distribution list, you don’t get notified. This is unavoidable. oss-security nominally handles everyone else, but it’s very spotty.
Sometimes I wish there was a Linux kernel security advisory process, but this would need funding or a dedicated volunteer.
As far as I know, this doesn’t get information from upstream maintainers. For this to work well, I think we would want actual advisories generated around commit time, embargoed early notification, and a process for publication.
TBH the thing that annoyed me most in this story is the "Someone had to start the disclosure process on linux-distros again and if they didn't no one would know" part. There are certainly silent bug fixes where the author intentionally (or not) does not post to linux-distros or any other mailing lists even after the stable release. It would take an hour to dig up a good example tho. (Okay, maybe 10 minutes if I'm going to read Brad Spengler's rants)
I guess a Linux kernel security advisory process is needed to fix this, but yeah :(
This is about the commit that fixed the bug, not the commit that introduced the bug. The accusation is not that linux developers intentionally introduced a vulnerability. Instead it is that linux developers hid that a commit fixed a vulnerability. Linux does this to prevent people from learning that the vulnerability exists.
> Linux does this to prevent people from learning that the vulnerability exists
No, not at all, just to give users time to deploy the fix before everyone jumps on exploits. This is important because every single backported patch is a candidate for an exploit already, and it's only a matter of time before any of them is exploited. That's why embargoes have to stay short. It takes some time to figure out whether a bug may have security impacts. It takes much less time, once this is figured out, to develop an exploit.
By the way it could have really happened that the fix for data corruption would have been merged first, and only later the author figured there was a security impact. And the patch wouldn't have been any different. That's why leaving 1-2 weeks for the fix to flow via distros to users, and having the author post a complete article is by far the best solution for everyone.
Nobody is arguing that users having a 1-2 week patch window is a bad thing. However, this frankly seems incompatible with open-source projects. Silently patching issues does not work in practice; it frequently leads to missed fixes, misapplied patches and other incompatibility woes. The situation with backports and LTS releases showcases this well— the only truly well-supported kernel is latest. Everything else is a patchwork of best-effort fixes, not all of which may have been applied correctly. Brad Spengler of grsecurity fame talks frequently about this (primarily via Twitter): https://twitter.com/spendergrsec
Not really. As you say, there's an extremely difficult balance between being open source and not exposing everyone at once. You can't get a fix deployed everywhere without it being public first, or it ends up in a totally unfixable mess. But if the fix is public and gives away too much info (the exploit procedure), then you put everyone in danger until the fix flows to users.
Thus the only solution is to have a public fix describing the bug but not necessarily all the details, while distros prepare their update, and everyone discloses the trouble at the same time. Those who need early notification MUST ABSOLUTELY BE on linux-distros. There's no other way around it. As soon as the patch is published, the risk is non-zero and a race is started between those who look for candidate fixes and those who have to distribute fixes to end users.
This is not about silently patching or hiding bugs, quite the opposite: it's about making them public as quickly as possible so that the fix can be picked up, but without the unneeded elements that help vandals damage shared systems before these systems have a chance to be updated. Then it is useful that the reporter communicates about their finding; this often helps improve general security by documenting how certain classes of bugs turn into security issues (Max did an awesome job here, probably the best such bug report in the last few years). And distros need to publish more details as well in their advisories, so details are not "hidden", they're just delayed during the embargo. Those who are not notified AND who do not follow stable are simply irresponsible. But I don't think there are that many doing that nowadays, possibly just a few hero admins in small companies trying to impress their boss with their collection of carefully selected patches (that render their machines even more vulnerable and tend to make them vocal when such issues happen).
In addition it's important to keep in mind that some bugs are discovered to be exploitable long after being fixed. That's why one MUST ABSOLUTELY NOT rely on the commit message alone to decide whether they are vulnerable or not, since it's quite common not to know upfront. I remember a year or two ago someone from Google's security team reported a bug on haproxy that could cause a crash in the HPACK decoder. That was extremely embarrassing as it could allow anyone to remotely crash haproxy. We had to release the fix indicating that the bug was critical and that the risk of crashing when facing bad data was real, without explaining how to crash it (since, like a kernel, it's a component many people tend to forget to upgrade). Then after the fix was merged, I was still discussing with the reporter and asked "do you think it could further be abused for RCE?". He said "let me check". A week later he came back saying "good news, I succeeded". No way to get that info into the commit message even if we wanted to, since that was too late. Yet the issue was important.
Speaking of Brad, I personally think that grsec ought to be on linux-distros, but maybe they prefer not to appear "tainted" by early notifications, or maybe they're having some fun finding other issues themselves. We even proposed that Brad be on the security list, because he has the skills to help a lot and improve security there. He could write interesting writeups for some of the bugs, and it would probably change his perception of what happens there. Maybe one day he'll accept (still keeping hope :-)).
Can you say what you're hoping to do? LK devs tag security fixes with "[SECURITY]" and then what? You would merge individual [SECURITY] commits into your tree?
Currently the situation is that you can just follow development/stable trees right (e.g. [0])? Why would you only want the security patches (of which there look to be a lot just in the last couple weeks). Are you looking to not apply a patch because LK devs haven't marked it as a security patch?
Assume I patch my Linux boxes once a month. I see a commit where an attacker has a trivial privesc. I read the commit, see if it's relevant to me, and potentially decide to do an out of cycle patch. As in, instead of updating next month I'll just update now.
Gotcha. Yeah it does seem like there's some space between the overpromising "I am a Linux Kernel Dev and I proclaim this patch is/is not a security patch" and the underpromising "I am a Linux Kernel Dev and have no knowledge of whether or not this is a security patch". It doesn't seem unreasonable to mark it somehow when you know.
On the other hand, just on that page I linked, there's... a lot of issues in there I would consider patching for security reasons. I don't know how reasonable it is, given the existing kernel development model, to tag this stuff in the commit. The LTS branches pull in from a lot of other branches, so like, which ones do you follow? When Vijayanand Jitta patches a UAF bug in their tree, it might be hanging out on the internet for a while for hackers to see before it ever gets into a kernel tree you might consider merging from.
I guess what I'm saying here is that it seems like a lot to ask that if I find a bug, I:
- don't discuss it publicly in any way
- perform independent research to determine whether there are security implications
- if there are, ask everyone else to keep the fix secret until it lands in the release trees with a [SECURITY] tag
- accept all the blame if I'm ever wrong, even once
That too is a lot of overhead and responsibility. So I'm sympathetic to their argument of "honestly, you should just assume these are all security vulns".
So maybe this is just a perspective thing? Like, there are a lot of commits, they can't all be security issues right? Well of course they can be! This is C after all.
Like in that list, there's dozens of things I think should probably have a SECURITY tag. Over 14 days, let's just call that 2 patches a day. I'm not patching twice a day; it's hard for me to imagine anyone would, or would want to devote mental bandwidth to getting that down to a manageable rate ("I don't run that ethernet card", etc.)
So for me, I actually kind of like the weekly batching? It feels pragmatic and a pretty good balancing of kernel dev/sysadmin needs. Can I envision a system that gave end-users more information? Yeah definitely, but not one that wouldn't ask LK devs to do a lot more work. Which I guess is a drawn out way of saying "feel free to write your own OS" or "consider OpenBSD" or "get involved in Rust in the kernel" or "try to move safer/microkernel designs forward" :).
I think some important context here is that the people who want commits obfuscated are never the ones making a decision about the security label. The people writing the commit already know it's a security issue.
> The people writing the commit already know it's a security issue.
For this special case, yes. But for the vast majority of bugs it's the opposite: existing bugs get exploited later, because some people decide that certain patches are not security-related and do not apply the fixes.
Then please just consider that every single stable kernel contains 1 or 2 fixes for similar vulnerabilities that nobody took the effort to try to exploit. THIS is the reality.
Are you saying that you are able to read all incoming linux patches, and easily identify changes which fixes a security problem, so that you can come up with a POC by the time the security issue is announced?
If the patch was flagged as a security problem from the beginning, it would give advantage to attackers, since they would know that the particular patch is worth investigating, while the defenders would have to wait for the patch to be finalized and tested anyway.
> Are you saying that you are able to read all incoming linux patches, and easily identify changes which fixes a security problem, so that you can come up with a POC by the time the security issue is announced?
Their point is that a full-time attacker (and there's enough money in it to do it as a full-time job these days) can look for obfuscated commits and take the time to deobfuscate them, whereas a defender doesn't have that kind of time.
I agree, that is definitely possible. That said, it requires a lot of work, since there are a lot of incoming patches. I wonder how many people would have to review every proposed patch, how to select the subset of incoming patches for human review, and how much one would have to pay a team doing all this, to get reasonable results and a return on investment.
My point was that if security patches are flagged as such from the start, it saves attackers a lot of time (and money), as they will no longer have to go through (almost) every patch and evaluate whether it could be fixing a security problem. This means that such a scenario gets a lot cheaper, while the defenders won't gain much from it, as one still needs to wait for the fix to be finalized and tested before deploying it in a production environment.
Security researchers already know that they're submitting a patch for a security flaw - there is 0 additional overhead.
> My point was that if security patches are flagged as such from the start, it saves attackers lot of time (and money), as they will no longer have to go through (almost) every patch and evaluate whether it could be fixing a security problem.
Not really.
1. They can just check to see who made the commit - if it's a security researcher, it's obviously a vuln patch
2. The commits are obfuscated in hilariously obvious ways if you know what to look for
3. It's not that hard to look at a commit, it's kinda what they're paid for
> while the defenders won't gain much from that,
When the vuln is found a race begins between attacker and defender. The difference is that attackers know they're in a race and defenders find out two weeks later.
Your 3 points above are true, but this is a perfect example where they didn't apply. A perfectly regular bug, found by someone affected by it, who then started to wonder whether or not it could lead to more interesting effects. Also, the "attackers" you're talking about are more interested in the bugs that are not yet fixed, as those are more durable. The goal here is mostly to protect against vandals who do not have such skills but find it fun to sabotage systems. Multiply the max lifetime of critical bugs by the number that are found every year and you'll figure out the number of such permanent issues that affect every system and that some people are paid to look for and exploit. This is where their business is. These ones will at best try to sell their exploits when seeing the fix reach stable, as they know that within two weeks it won't work anymore, so better to get a last opportunity to make money out of it.
You have it the wrong way around. Tagging the release as security allows nation-state level attackers with large budgets to investigate the fixes, while normal people have to wait for patches. This gives nation-state level attackers with large budgets a heads-up, making it worse for everyone else. Furthermore, nation-state level attackers with large budgets are more focused on offense than defense.
Attackers with the resources and patience to read and deeply analyze all the commits, over time... those guys were fairly likely to notice the bug back when it was introduced. Plain vs. obscure comments on the patch don't much matter to them. Low-resource and lower-skill attackers - "/* fix vuln. introduced in prior commit 123456789 */" could be quite useful to them.
What is your threat model / situation where you care about attackers who reverse engineer patches, but are not in the small circle of people who would be informed beforehand?
To me, it seems like the average corporate security team is not going to worry about these kinds of attackers. Security for state secrets might, but they seem likely to be clued in early by Linux developers.
> What is your threat model / situation that you care about attackers who reverse engineer patches, but are not in the small circle of people who would be informed before hand.
Virtually every single Linux user. I think what you're missing is how commonplace and straightforward it is for attackers to review these commits and how uncommon it is for someone to be on the receiving end of an embargo.
Most exploits are for N-days, meaning vulnerabilities that already have a patch out for them. Knowing that there's a patch is universally critical for all defenders.
For context, my company will be posting about a kernel (then) 0day one of our security researchers discovered. You can read other Linux kernel exploitation work we've done here: https://www.graplsecurity.com/blog
By threat model I mean, who are you worried about attacking you.
I get that every linux user could be attacked. But why would someone with the relevant knowledge that could pull this off attack a given linux user? Why are you worried about it? (Not trying to be sarcastic, trying to get a sense of what threats you are worried about).
My point is that this is basically just how exploits work for Linux, so it's pretty universal unless your main concern is 0days. As for me personally, I run a company that uses Linux in production. We happen to explicitly do research into Linux kernel security (we'll be publishing tomorrow on a 0day we had reported) https://www.graplsecurity.com/blog
This is why stable branches are a thing. I don't know the branching scheme that the Linux kernel uses, but the idea is that for the oldest (most stable) branch, everything is a (sometimes backported) bugfix with security implications.
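Concretely, following a stable branch means every point release is a small, reviewable batch of fixes, and the delta between two tags tells you exactly what you're taking. A sketch (throwaway repo with made-up tags and subjects; against a real linux-stable clone, the same `git log --oneline vA..vB` range syntax applies):

```shell
#!/bin/sh
# Sketch: list everything that landed between two stable point releases.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
commit() { git -C "$repo" -c user.email=a@b -c user.name=t \
             commit -q --allow-empty -m "$1"; }
commit "Linux 5.15.24"
git -C "$repo" tag v5.15.24
commit "mm: backport use-after-free fix"
commit "net: backport refcount fix"
git -C "$repo" tag v5.15.25
# Everything the new point release carries over the previous one:
n=$(git -C "$repo" log --oneline v5.15.24..v5.15.25 | wc -l | tr -d ' ')
echo "v5.15.25 carries $n backported fixes over v5.15.24"
```

Since upstream treats every backported fix as potentially security-relevant, the safe policy is to take the whole batch rather than cherry-pick from it.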
End this ridiculous practice.