This effort to harden the Linux kernel is not new. Kees made a very similar presentation at the Kernel Summit last year[1] and it was quite well received. After his presentation, the kernel-hardening mailing list was revived, and new features adding additional hardening to the kernel have been rolling into Linux for the last couple of releases.
There are engineers from multiple vendors collaborating on this project, and they have been using the kernel-hardening list for traffic control and for an initial review of the patches before asking that they be pulled. The progress section of this LWN article[3] shows that they've been quite successful so far --- or perhaps, it would be more accurate to say they've made a good start.
Here is Kees Cook's talk from the Linux Security Summit, "The State of the Kernel Self Protection Project"; he makes the same analogy to car safety in his talk:
It's not like car safety. There's an enemy. The enemy is not random.
Patch and release only works against inept opponents, and there are more and more non-inept opponents. It won't work against opponents who can develop or buy their own exploits. Patch and release gives the illusion of working because it stops vast numbers of inept attacks from clowns who just want to send spam emails.
Someone in military base security once said that you have to avoid tying up too much of your resources chasing kids throwing rocks at the perimeter fence. The real threat is the janitor who has access to the spare parts stores. Patch and release, and pattern-matching virus scanners, are chasing kids throwing rocks.
I think you're also missing the point that adding mitigations to a platform is about reducing the number of exploitable bugs and, ideally, making the ones that are still exploitable take more time, be less reliable, and cost more. I think this fits the car safety analogy pretty well.
If you want to think of this in the car safety analogy, then consider all the possible ways of killing someone in a car before and after the changes in safety standards. In the bad old days, random events could kill you, but a targeted attacker could use the same techniques: just drive a car into you, or force you into a wall. Now, with newer safety standards, the chance that a random event kills you is much reduced, but the targeted attacker (assuming they don't just fire a bunker buster at you) needs to spend time researching the type of car, finding weak points, developing something to exploit those weak points, etc.
So it goes with exploit mitigations. It wasn't that long ago that running a fuzzer on a product (and sadly fuzzers are still useful) would yield a massive number of trivially exploitable bugs. These days, not as much, at least on mature platforms. You could think of fuzzer-discovered "random" bugs as the case exploit mitigations are trying to protect against (analogous to the random collisions that car safety protects against). Even the simplest stuff, like stack cookies, makes exploitation significantly harder. Are there no stack overflow bugs anymore? Nope, they still exist. Are they completely unexploitable? Nope, especially in the hands of a skilled attacker. But are you protected against randomly introduced bugs that cause a stack overflow and have only a low chance of exhibiting some useful behaviour that makes them exploitable? Absolutely.
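To make the stack-cookie point concrete, here is a minimal conceptual sketch in C. This is only an illustration of the check, not real hardening: actual protection is emitted by the compiler (e.g. gcc's -fstack-protector), the guard value is randomized at process start, and the cookie's position relative to the buffer is chosen by the compiler, not by declaration order as naively suggested here.

    /* Conceptual sketch of a compiler-inserted stack cookie; the guard
     * value and the stack layout here are illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static unsigned long stack_guard = 0xdeadbeefcafef00dUL; /* would be random */

    static void copy_input(const char *input)
    {
        unsigned long cookie = stack_guard;  /* "prologue": place the cookie */
        char buf[16];

        strcpy(buf, input);  /* a linear overflow has to clobber the cookie
                              * before it can reach the saved return address */

        if (cookie != stack_guard) {         /* "epilogue": verify the cookie */
            fputs("*** stack smashing detected ***\n", stderr);
            abort();  /* crash deterministically instead of returning
                       * through an attacker-controlled address */
        }
    }

    int main(int argc, char **argv)
    {
        copy_input(argc > 1 ? argv[1] : "short input");
        return 0;
    }

The effect is exactly the one described above: a bug that overflows the buffer turns into a deterministic crash rather than a reliably redirected return address.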
Honestly, if someone _wants_ to _kill_ you, I'm sure they could. If the attacker is willing to spend the time, effort and money to find better, more exploitable bugs, and you're a target worthy of their efforts, then you're screwed no matter what exploit mitigations you put in place (this is why, imo, bug hunting still has some semblance of value). But are you _that_ important? Would the FBI be willing to throw a $1M iOS exploit at you, for example?
His point is that cars used to be built assuming there would never be an accident. They were very comfortable until you crashed into something, and then everyone died. Today cars are built defensively, so that when the unforeseen happens, you are still protected.
That's the idea. Building in defenses that protect you against unknown bugs and exploits.
There should be some limit to the threat model, though. It should not attempt to remain secure when something can already do arbitrary reads and writes (a VM, for example). Even determined adversaries will merely be armed, not equipped with WMDs.
In the car analogy, the car crashing is the equivalent of the programmer introducing a bug.
The adversary is out of the picture here - you want to write your kernel in a way that makes accidental bugs hard/harmless. (i.e. prevent “random crashes”)
The article was surprisingly light on details. They didn't mention specific fixes or techniques they wanted to move upstream. They also didn't mention why they thought Linux was architected to be unfriendly to end users. Ubuntu, Mint and Android are way easier to use than Windows, for example. Or do they think the kernel needs to do something special for an end user who will never see it?
I am not saying they are wrong, but they didn't give me much reason to believe they are right.
I don't really see ease-of-use differences between any polished OS besides OS X, and that's just because Finder is odd.
Maybe they think the terminal is too powerful?
You're both missing the point. They're talking about kernel security -- when they refer to "usability" they mean compatibility and ease of programming for user-mode applications interfacing with the kernel ABI.
I think the deal is that Linux is notoriously open source, such that it is somewhat difficult to think of the one without the other. Those bearing a kind of grudge may be tempted to derive a certain pleasure from the intellectual satisfaction of ascertaining some personal (read: idiosyncratic) knowledge of how to push the 'open-sourcedness' of that mindset to an extreme.
If that is so, then the vulnerability described is likely to be an issue, perhaps later, perhaps sooner, but within some given time frame during which the ubiquity of the 'net will be ever expanding.
Perhaps the solution is to guide the users with some very user-friendly adaptation of a 'man page' that describes with elegant confidence the methodologies germane to creating a keycode that would allow said users to protect themselves from the perchance nefariously driven 'code poets' who would otherwise be able to derive social insecurity from whatever access might be available to them.
Long-winded reply to say we need to not think about stopping using Linux, but instead offer a nice little intro for those who don't know enough (or know very little) about how to code a patch into their own kernel.
Given that the tech will be integrating NLP more and more, I could even imagine a 'talk through' with some 'Ask Jeeves'-esque personal assistant.
The whole thing is essentially a trust issue. Who's writing the code and who's debugging it to give all of the nUbs (yes, I use the old abbrev.) enough insight to give a little polish and shine to their appliances and machines.
A lot of research has been done in this direction by the PaX Team and Brad Spengler (spender) of grsecurity. In fact, many of these techniques were first implemented on modern kernels by that bunch. However, politics has always clouded that -- PaX/grsec is run as a project of passion and not as a day job, and therefore there has always been reluctance to deal with that process (and get involved in sometimes very territorial code review/maintenance sessions).
Unfortunately, by not really crediting or even acknowledging this research (and also ignoring a lot of work that's been happening in academia on hardening commodity kernels and applications), the Linux kernel is a) playing catch-up to other operating systems (instead of adapting newer techniques) and b) alienating researchers even further.
grsecurity offers commercial support, and customer-only stable updates. (Only the latest version of the patchset for the latest linux kernel is available at no cost.) (Of course, if you pay and receive a stable update, you can re-distribute it under the terms of the GPLv2.) grsecurity is not a hobby.
A lot of their work has not been accepted into the mainline linux kernel ... but it's hard for me to see how this is a problematic sort of "not really crediting or even acknowledging". If anything, it pushes people to go to the source, grsecurity itself, for their "hardened" kernels. It seems that grsec/pax do have a lot of recognition as a result.
Yes, the inventors and developers of these hardening techniques want to be praised and have their work immediately used as-is to save the world. But we don't all have to want the same thing, to have the same priorities. We don't all face the same threats, nor performance and feature requirements. If people want to make the various trade-offs in favor of a significantly more "hardened" kernel, they've been free to use grsecurity or openbsd for quite a while. No need to blame linux-mainline maintainers for the people who have not made that choice. There's really no unfair lock-in going on here...
See this other post https://news.ycombinator.com/item?id=12595936 about a botnet of 150k cameras and DVRs which is DDoSing OVH at 1 Tbps. Chances are that they run Linux. The exploit could be as simple as telnetting to them with a default password, but more sophisticated attacks could exploit old kernels left unpatched forever. Actually, some devices might be impossible to patch, just as you can't make that Chevy from '59 any safer. You have to buy a new one.
Besides a bit of lip service to code correctness, what can they do really?
They aren't going to move the kernel to a microkernel type design and besides, using advanced hardware features is out, because that would too closely tie the kernel to a single architecture. You could argue there is a business case for Intel and ARM to produce hardened Linux forks for their respective architectures as a form of competitive advantage, but since it hasn't happened in the past, I wouldn't be optimistic.
Also, historical precedent has shown they aren't going to get tough on the vendor drivers. The kernel is filled with binary blobs (mostly firmware), and as the article points out, vendor drivers are where the lion's share of the bugs come from.
The kernel does use hardware features for this kind of protection. It's done by using lightweight abstraction layers that are implemented per-architecture.
For example, accesses to userspace memory all go through copy_from_user() / copy_to_user() / access_ok() and related functions. Behind this, S390 implements separate kernel/user address spaces; x86_64 implements SMAP; ARM64 implements PAN.
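As a concrete illustration of that pattern, here is a hedged sketch of a kernel ioctl handler. copy_from_user()/copy_to_user() are the real kernel helpers (they return the number of bytes that could not be copied, so nonzero means failure); demo_ioctl and struct demo_args are made-up names for the example.

    #include <linux/fs.h>
    #include <linux/uaccess.h>

    struct demo_args {              /* hypothetical ioctl argument block */
            int op;
            unsigned long value;
    };

    static long demo_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
    {
            struct demo_args args;

            /* Never dereference a __user pointer directly: this helper
             * validates the range and, per architecture, briefly opens
             * the user-access window (STAC/CLAC under x86 SMAP, PAN
             * toggling on ARM64). */
            if (copy_from_user(&args, (void __user *)arg, sizeof(args)))
                    return -EFAULT;

            args.value++;           /* operate only on the kernel-side copy */

            if (copy_to_user((void __user *)arg, &args, sizeof(args)))
                    return -EFAULT;

            return 0;
    }

Because all legitimate user accesses are funneled through these few helpers, an architecture can turn on a feature like SMAP or PAN and have every other kernel access to userspace fault.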
It clearly can be done; there are a finite number of system calls. And while they aren't very well documented in some cases, the code is there as a reference.
But I think Linux is more than the kernel. If you break userland assumptions about filesystem layout, shared libraries, permissions, etc., are you in a better place as a 95% Linux with better security? Experience says people aren't very happy with partial measures to make things Linux-like.
If you use the MS model (I really wish they had taken POSIX more seriously 20 years ago), you inherit all of Linux as a model, with a different implementation.
If you leave the user/kernel boundary intact, have you improved the state of the world? To what extent are the security issues a result of the model and to what extent the implementation?
To answer my own question, I do like the car safety analogy. Since computers might one day be driving cars at scale, I think the obvious strategic decision to prevent huge catastrophes is a simple one: keep the cars un-networked.
True robustness arises from design thinking, not detail obsessing.
I'd like to avoid robo-cars that happily take the left lane on curves whenever they can't hear other cars coming the other way. After all there could be some Luddite tooling around in his ancient 2015 dumb-car. I never need to get where I'm going that fast.
To me, this article sounds more like political propaganda than a discussion of real issues.
Correct me if I'm wrong, but I didn't read any explanation of why the author thinks Linux has huge security issues (whether or not that is true is a different matter).
Unless there is some substantial claim, articles like these are bullshit to me.
You didn't read, but the (original) author did write.
One of the problems (already solved now) was that the kernel had unlimited access to userspace, i.e. once you'd found a kernel bug that made the kernel do something wrong with user-supplied arguments, you could prepare data/code in userspace and then use the bug to make the kernel read or act on your prepared data. Now that most kernel code is blocked from reading most userspace, you have far less ability to direct the kernel's behaviour when you want to exploit a bug.
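For concreteness, here is a hedged userspace-side sketch of that staging step; the address and fill byte are arbitrary placeholders. Before SMAP/PAN-style protections, a kernel bug steered into following a user-controlled pointer would happily consume this page; with them enabled, any direct kernel access to it faults.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Map a page at a known low user address (0x10000 matches the
         * default vm.mmap_min_addr on many systems). */
        void *stage = mmap((void *)0x10000, 4096,
                           PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        if (stage == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Fill it with attacker-chosen bytes standing in for a fake
         * kernel structure the buggy code path would consume. */
        memset(stage, 0x41, 4096);
        printf("staged page at %p\n", stage);
        return 0;
    }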
Eh. A compromised kernel can do all sorts of bad stuff... Which brings us back to the question: do we have substantive reasons to think it's vulnerable?
Yes, that's a question. We can safely assume that it's buggy, though, since most/all prior kernel versions have had bugs of one sort or another. That assumption doesn't depend on the question you raise.
Given that it's buggy and that Linux is widely used, we can also safely assume that someone will try to exploit the bugs, which makes it sensible to limit the likely consequences of a bug. Or, put differently, to minimise the risk that a bug is a vulnerability.
As important as security is, and as much as I think Linux should be secured more proactively, I have to take issue with two things mentioned:
1. Most devices are not administered by IT Professionals
This is true and must be taken into consideration, but while every layman can't be a sysadmin, you also can't, in 2016, simply take the position of "I don't need to understand any of this, and everything should Just Work." I don't think that's realistic. If you want the home automation we see in the Jetsons, if you want smart devices in your home to make your coffee at exactly 5:56 AM, then you need to step up and take a little responsibility for learning how this stuff works and how to set it up. Now, is it realistic for end users to perform kernel updates, test functionality, and swap in new packages as needed? Of course not. But a basic understanding of the underlying technology goes a long way in troubleshooting the inevitable problems that will arise, and in letting you choose, intelligently, which OEMs to do business with.
2. Manufacturers leaving devices un-updated, unmaintained after production
This is asinine, especially for devices that require online connectivity. You already have a more-or-less constant WAN connection, so why on Earth can't these devices update their firmware when required? The answer, of course, is that after-sale support is not profitable, but that too is unacceptable: if you're going to bake computers into your products, you as a company need to step the hell up and support them, not just throw them on the market for a quick buck and then run away with the cash.
Android is probably a good example of the update problem in action, but it's hard to blame Linux directly for this. Rather it is indeed the product cycle that seems to be driving the problem as you point out in #2 there. But then again, isn't it reasonable to expect some sort of deprecation or end of life for most products?
Also, even if manufacturers are held to account more, we will still see severe bugs, IMO, in Linux or even some hypothetically more secure, future OS.
As it's framed in this article at least, I think the automotive manufacturer metaphor falls down at some point (even if most cars now run Linux too). I think software is just vastly more complex and we are just now learning how to cope with that complexity.
> I think software is just vastly more complex and we are just now learning how to cope with that complexity.
If an engineer builds a bridge and it falls down, s/he will get punished for it.
If a doctor or a lawyer makes a similarly serious mistake, s/he will get punished for it.
The same needs to happen in software engineering, especially given that in most countries software engineering is a degree with its respective Engineering Order.
I agree that liability has its place in software engineering. But I don't think, at this point in history at least, that we can apply it broadly across its many subdomains.
For example, we can't hold Linus Torvalds liable when the Linux Kernel has a bug (perhaps not even if it was deliberate!). If we even tried to, the entire project would likely come to a screeching halt.
How is this model of liability compatible with large open source codebases?
Why would Linus be responsible? That would be like holding Sir Henry Bessemer (inventor of the Bessemer process for mass-producing steel) responsible for the Tacoma Narrows bridge. Of course it wasn't his fault: his contribution was used incorrectly, just as Linux is frequently used incorrectly by IoT companies.
And frankly, if your thermostat malfunctions and runs your heat full bore for a weekend while you're out of town, I don't think it's unreasonable to say that, assuming the basic requirements for functionality were present (power, connectivity to the furnace, etc.), the company should be responsible if their system went wrong solely because of poorly designed software. If for nothing else than your outrageous heating bill.
And again, if a company's response is "well we don't want to be responsible for your furnace" then my response is "then don't be making fraking thermostats, because that's what they're FOR!"
I absolutely agree, and that's exactly my point: although Linus shouldn't be held responsible, when your IoT thermostat breaks, the breakage may still be a trickle-down result of some technical decision he made at the kernel layer (if it was a kernel bug that caused it, anyway). But it's an open source project, he makes the rules, and companies have a choice as to whether they want to adopt it.
So I would agree that in this hypothetical case, the IoT company needs a custom kernel patch on their device, and hopefully it gets upstreamed.
But if all this occurs after your furnace is shot, you're left asking the manufacturer, "Why did you use Linux at all in this product?" You would think they'd then accept responsibility but what if they do try to blame the Linux project and have expensive lawyers to back them up?
This all seems to add up to a potential spaghetti mess of blame where the legal model is "doctors and lawyers and bridge builders." In fact I think these problems need to be addressed outside current legal and regulatory systems which frankly can't keep up with the pace of open source software.
> But then again, isn't it reasonable to expect some sort of deprecation or end of life for most products?
In a cell phone? Sure, because the replacement process is a somewhat annoying sales appointment at $carrier. In a thermostat? No, because it should last years. Maybe decades if it's a good one. IoT falls apart in this arena because the hardware is designed to be replaced often, and that's exactly what house hardware should not be.
And if the software is too complex, then don't do it. Do it right or don't bother.
I wonder if the solution for IoT things like the network-connected door lock mentioned in the article is the emergence of third-party companies contracted by the original manufacturer to provide ongoing updates for the expected product life (e.g. 15 years).
The idea being that these patching companies would become recognised brands, and the manufacturer would get to put a "Supported by FooCorp until 2031" badge on their packaging, and consumers would start to expect to see this.
My other thought relating to what you said would be: why on Earth does a door lock need Linux? This is not exactly a massively complicated device; it controls a 2-state mechanism and needs to work at about a five-foot radius. The fact that something like that, in an IoT mindset, needs an entire web server is ridiculous. This is much better suited to a microcontroller with custom firmware and a tiny attack surface.
No, maybe you won't be able to set it up with a smartphone and 2 minutes, but it also might WORK BETTER if it wasn't able to do that.
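For a sense of how small that custom firmware could be, here is a purely hypothetical bare-metal sketch; radio_read_frame(), hmac_verify() and lock_engage() are made-up stand-ins for vendor HAL and crypto routines, not any real API (the stubs below exist only so the sketch compiles).

    #include <stdbool.h>
    #include <stdint.h>

    #define FRAME_LEN 32

    /* Stubs for illustration; real firmware would talk to the hardware. */
    static bool radio_read_frame(uint8_t frame[FRAME_LEN]) { (void)frame; return false; }
    static bool hmac_verify(const uint8_t frame[FRAME_LEN]) { (void)frame; return false; }
    static void lock_engage(bool locked) { (void)locked; }

    int main(void)
    {
        uint8_t frame[FRAME_LEN];

        lock_engage(true);                  /* fail safe: power up locked */
        for (;;) {
            if (!radio_read_frame(frame))
                continue;                   /* nothing received */
            if (!hmac_verify(frame))
                continue;                   /* drop unauthenticated frames */
            lock_engage(frame[0] == 0x00);  /* first byte: 0x00 = lock */
        }
    }

The entire remotely reachable attack surface is one authenticated frame parser, versus a full TCP/IP stack plus web server.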
It's a pity that the easiest solution we have for this security problem (managed languages[1]) is also at odds with the environment: real-time. Maybe a Rust operating system designed for real-time could help here.
Personally, I think what's at odds with doing so is political will, not anything technical.
If there is political support, the research will eventually produce the desired results.
We already had such managed environments in the '60s and '70s; those researchers and companies would be in heaven if they could have even Raspberry Pi-like hardware at their disposal.
Probably not, as Chinese devs will be unable to cheaply write crummy drivers in it. Plus it is a huge cost.
Better to replace the kernel with L4 and a good set of services. Except those platforms have far fewer developers, partly because they are not as easy to develop for.
I agree that the article was very light on details.
I like the philosophy of trying to squash whole classes of bugs and generally trying to reduce the attack surface, versus one-by-one bugfixes.
I am curious about the solutions the Linux developer community will come up with to address this. There are many, many very smart developers and I look forward to some creativity :)
I am optimistically hopeful that proposed solutions will not involve anything resembling a central authority that must be trusted. Something based on Certificate Authorities (or any analog to Microsoft's Trusted Computing model) would be really disappointing.
> I am optimistically hopeful that proposed solutions will not involve anything resembling a central authority that must be trusted
ISTM that most distros use central software repositories, already? And that is very much a good thing? Definitely anything run by M$ or a similar firm could have conflicting goals, but don't paint e.g. the Debian Project with the same brush.
I wasn't painting with any brushes, just expressing optimism and hope for a good solution.
Any Linux user is free to use whatever distro they want. Once you've chosen a flavor of Linux, in theory you could even run your own apt/rpm repository. There is no requirement that a Linux user must trust some third-party or their system won't run. Let's keep it that way :)
Microkernels have won; it's just that the GNU/Linux community seems to be unaware of it and keeps repeating the words of the leader.
Not only do we have millions of embedded devices shipping with microkernel OSes, we also have the transition to type-1 hypervisors, and now the slow adoption of unikernels.
Also Apple, Microsoft and Google are increasing the scope of sandboxing on their OSes.
Interesting, thanks! Where did you find this out? I have only just started reading Windows Internals Part 1 (which covers Windows 7), so I am intrigued as to where the info regarding Windows 10 comes from.
They do mention it, just not by name. Kees Cook and the Kernel Self Protection Project are basically trying to move some of the protections that exist in grsec into the mainline kernel (since grsec as a whole will never get accepted).
Actually, it does seem odd that grsecurity wasn't mentioned. It's the first thing that comes to mind when I think of "killing entire classes of bugs" in the Linux kernel. Many of the "safety" features we now have in most operating systems started there. ASLR, for example: https://en.wikipedia.org/wiki/Address_space_layout_randomiza...
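ASLR is easy to observe from userspace. A minimal sketch: compile as a position-independent executable (e.g. gcc -pie -fPIE) and run it twice; the printed addresses should differ whenever /proc/sys/kernel/randomize_va_space is nonzero, which is exactly what breaks exploits that rely on hardcoded addresses.

    #include <stdio.h>
    #include <stdlib.h>

    int global_var;  /* data segment (randomized only for PIE binaries) */

    int main(void)
    {
        int stack_var;                 /* stack */
        void *heap_ptr = malloc(16);   /* heap */

        printf("stack: %p\n", (void *)&stack_var);
        printf("heap:  %p\n", heap_ptr);
        printf("data:  %p\n", (void *)&global_var);
        printf("code:  %p\n", (void *)&main);

        free(heap_ptr);
        return 0;
    }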
They update Android, but the problem Linux faces is that manufacturers and carriers don't update Android devices. Many Android vulnerabilities come in through the Linux stack, so fixing that would fix a good portion of future Android vulnerabilities.
“For the cases where computers are not well protected in the hands of end-users who are not IT professionals, and who do not have any recourse to IT professional help, we need to design systems that proactively protect them”, Konstantin Ryabitsev of the Linux Foundation
Is this the same Linux Foundation that gave Microsoft Keynote Position at the last LinuxCon?