
How is it possible in 2015 that major kernels are vulnerable to arbitrary code execution via null pointer dereference?

Leave aside memory safety, input validation, actually caring about the quality of your work, whatever. mmap_min_addr stopped a ton of attacks back in like 2007-10 when Linux had a local privilege escalation-of-the-month (and RHEL, which hadn't enabled it yet for backwards-compatibility concerns, got hit more frequently). It's not a particularly aesthetically-pleasing solution, but it's an effective one.

Are there legacy apps on OS X that require mapping the zero page? Executable? That Apple cares about supporting?

I've run into two things on Linux that need to map the zero page: versions of MIT Scheme from before 2009 (because the compiler was doing something super weird), and Wine, when running certain DOS / Windows 3.1 apps. Anything of that era probably stopped working on OS X when they killed Classic. Even Carbon has been dead for three years.

https://wiki.debian.org/mmap_min_addr

https://access.redhat.com/articles/20484



How is it possible in 2015 that major kernels are vulnerable to arbitrary code execution via null pointer dereference?

Because they're monolithic, bloated, and written in C.

(Rust guys, please don't screw up. We need a win there.)


> (Rust guys, please don't screw up. We need a win there.)

While it's true that Rust would help here, it is very unlikely that a Rust kernel project would get as far as e.g. Linux, let alone replace it.

Of course, rewriting an existing kernel stepwise would be interesting, if possible.


> Of course, rewriting an existing kernel stepwise would be interesting, if possible.

Perhaps it's possible to write a new kernel in Rust, and have it be backwards compatible with Linux, by "wrapping" the Linux kernel and drivers in sandboxes?

So a Rust kernel with some kind of built-in environment isolation, in which it can run the real Linux kernel. The running Linux kernel would then access physical hardware through a wrapper in the Rust kernel, while the Rust kernel would access hardware directly.

That's really the only way I see this project gaining widespread adoption: by leveraging Linux. Linux simply has too much momentum to be replaced with something non-compatible.

Of course, a Rust kernel could be useful for all kinds of things other than replacing Linux. Like a MirageOS-type unikernel that writes its drivers in Rust instead of OCaml.


This is vaguely the idea behind Joanna Rutkowska and co.'s Qubes: there are lots of Linux kernels in different Xen domains, one for each class of userspace apps (banking, gaming, etc.) and one for each low-level service (networking, graphics, etc.). For instance, if the Linux kernel is vulnerable to a local privilege escalation, and the DHCP client is vulnerable to arbitrary code execution, on a traditional system, anyone on the other side of your Ethernet cable can root your machine. Under Qubes, all they can do is root the Xen domain that's providing networking -- but that's not significantly more power than they already had by being on the other end of your network cable, since that domain does nothing other than networking.

https://qubes-os.org/

I'm not convinced that Linux is unkillable, though. This thread is about OS X, I'm typing this on an OS X machine, etc. I suspect that if you do a good job of supporting people's hardware (Apple has an advantage there, of course), can run Chrome, and can run anything that's portable between OS X and Linux, you can get pretty far.


I don't see that happening with Linux, though.

Linux, as a UNIX clone, will never use anything other than C.

Replacing C in MirageOS sounds more likely.


Wouldn't that essentially be a hypervisor written in Rust?


That's what I'm thinking. Take a look at http://spin.atomicobject.com/2014/09/27/reimagining-operatin... for an overview of what's going on in this area. If you're running containers on a hypervisor, with files on storage servers elsewhere, most of the Linux kernel is dead weight and can be replaced by a modest glue library. Here's one, written in OCaml: http://anil.recoil.org/papers/2013-asplos-mirage.pdf

As containers catch on, we'll see more systems specialized to run nothing but containers. They will be much simpler than Linux or Windows. System administration will be external, as it is for cloud systems like Amazon AWS now.


Unfortunately, Rust by itself can only go so far in securing a kernel. We need a major change in operating systems to get a safer environment that is not trivial to exploit.

My list goes as follows:

- kernel memory and userspace memory are separated in hardware, as they were on some early architectures

More about it here: http://phrack.org/issues/64/6.html#article

- microkernels

With a formally verified ring-0 core and everything else running in a less privileged space, kernels could protect themselves against bad drivers and the other parts of the operating system that form attack surface today

- safe userland

This is, I guess, where Rust could help the most: a memory-safe low-level language


Several attempts at secure OSes have been made; the major problem is the UNIX monoculture that has come to dominate mainstream desktop and server environments.

UNIX is married to C, so any attempt to replace C means breaking with the UNIX mindset, which has proven very hard to do.

Even successful research projects like Oberon failed to win over the industry, and this was before UNIX became widespread.

Regarding microkernels, most OSes in the embedded space actually use microkernels.

I am following MirageOS, HaLVM and Microsoft's Ironclad as possible safer OS.

This is also why I like what Apple, Google and Microsoft are doing on their mobile OSes to reduce the amount of allowed unsafe code, even though it isn't at the kernel level.


> UNIX is married with C, so any attempt to replace C, means breaking with the UNIX mindset, which has proven very hard to do.

I don't believe this is particularly true, although I haven't explored it very far. UNIX is married with the API/ABI exposed by the so-called "C" library. So far, there have been no languages that offer native interoperability with C and have gained significant popularity, other than C++, and C++ is certainly pretty popular on UNIX (Qt, gcc, etc.). Rust does offer that and is (evidently) picking up a ton of steam, and you can get to the point where stupid tricks that previously required a C derivative can be done in Rust.

http://mainisusuallyafunction.blogspot.com/2015/01/151-byte-...

In fairness, this often also requires things like "C" strings, the "C" locale, etc. But those things can either be done smoothly enough from Rust, or are wrong anyway, so I think there's a chance to break the C stranglehold here.


You are seeing it only from the technical point of view.

UNIX and C were developed together. Like most system programming languages before it, and even those that failed in the market, C's original purpose was to bring its host OS to life.

So to remove C from a UNIX compatible OS, you need to remove C from UNIX culture, which is impossible.

The resulting OS wouldn't be UNIX any longer, it would be Plan9, Inferno, something else.

As for C++, it is pretty popular nowadays because it also came from AT&T, so it has been part of UNIX culture since the mid-80s. But never at the kernel level.

There are OS written in C++, like Symbian, BeOS, IBM i and others. None of them are UNIX compatible OSs.

I cannot imagine any commercial UNIX vendor allowing anything other than C in their kernels, nor do I see it happening in the FOSS world.

Alternative OSes that research new paths in OS architecture, yes. But not OSes that try to clone UNIX culture.

If you look at my comment history, I am not very fond of C, but I just don't see it happening, given the social dynamics of how communities behave.


There's no requirement for Linux to follow UNIX culture. People are already arguing that Fedora, Arch, etc. aren't UNIX in terms of culture and philosophy, and the variances are increasingly Linux-specific.

https://pappp.net/?p=969

(And I'm personally not a fan of UNIX culture, at least in 2015, partly _because_ it's a culture that thinks C is a defensible language to program in, at least in 2015.)


That much is true, but it is still thrown around at every opportunity; just look at the whole systemd discussion.

> And I'm personally not a fan of UNIX culture

Personally, I have learned a lot from UNIX, and it has opened quite a few interesting job opportunities in my career.

But I was also part of Amiga, BeOS, Windows, Demoscene, Oberon cultures, so in the end I enjoy systems that try new paths.

And even though I don't agree with Rob Pike's ideas for Go, I surely agree with his opinion about UNIX.


Big chunks of OS X are written in C++ (namely, IOKit), and it seems to expose a UNIX-like interface pretty successfully.


While true, Mac OS X is not a classical UNIX system in terms of culture.


I think we need hardware support first, before we can think of any language. The support has to be transparent to every other part of the system. On top of that, the language you are using has to be memory safe. This is probably where C falls short.

Yes, microkernels and hybrid kernels too. Yet Linux/FreeBSD/Random Unix Clone are monolithic.

It is the time for a new safer kernel! :)

http://en.wikipedia.org/wiki/Hybrid_kernel#NT_kernel


I'm setting up a system with seL4.

None of the systems you're suggesting have formal proofs of correctness, and this one not only has such proofs, but is written in C.


What is the process for patching seL4? How does e.g. adding a parameter to an existing function affect the proof, and what's the process for proving additional predicates?


The proofs are publicly available, though I can imagine there would be significant... overhead involved in adding functions to the system.


Such naivety.


> Even Carbon has been dead for three years.

With regard to only this point: probably half, if not more, of the top 15 grossing games on the Mac App Store Games page today are either Carbon apps or at minimum require the Carbon framework.


And a bunch of those are using Wine, which requires mapping code at the zero page, so they're compiled with a special load command that allows them to map the zero page.

On Yosemite, 64-bit binaries aren't allowed to do this.


Whoa, seriously? I didn't realize Wine for OS X was real enough to be usable for anything, let alone be admitted to the app store and make it to the top. That's pretty awesome.

Although every time I go "I didn't realize X on Y worked", it seems like games are the rationale, and not very surprisingly, since they exercise a relatively small part of the API surface apart from OpenGL. (Mono, some HTML 5 things as mobile apps, and Humble Bundle's asm.js collection all come to mind.)

Can OS X take a cue from iOS and require a special Apple-signed entitlement to do this (unless root overrides it in a config file), or is it not worth the trouble?

Also, does Wine actually require this in general? My understanding was that Wine needed this for legacy Windows apps that themselves map the zero page, but recent-ish apps designed for XP or later shouldn't do that, right?


Wine on OS X has been stable enough for gaming for years! Sims 3 used it, as does EVE Online, and quite a few other major titles, mostly from EA. They all use TransGaming's Cider fork, though; I'm not sure whether TransGaming ever contributes back to the Wine community these days.

CodeWeavers' CrossOver is quite good as well. It can play Skyrim on my 2012 MBP with decent quality. I mostly use it for older games though, RollerCoaster Tycoon and the like.


https://code.google.com/p/google-security-research/issues/de...

That was a kernel null dereference; it really has nothing to do with creating a Mach-O that puts code in the first page, and there are valid reasons to do that (SheepShaver and Basilisk II come to mind).


Right, the null pointer gets dereferenced in the kernel, but (assuming I'm understanding the report right) the means of exploitation is to create a userspace memory mapping with something mapped at address zero, and cause the kernel to read from that address / run code at that address instead of faulting when it tries to dereference NULL. The PoC code is userspace and does this:

         //map NULL page
         vm_deallocate(mach_task_self(), 0x0, 0x1000);
         addr = 0;
         vm_allocate(mach_task_self(), &addr, 0x1000, 0);
         char* np = 0;
         for (int i = 0; i < 0x1000; i++){
           np[i] = 'A';
         }
Am I misinterpreting this?

Re SheepShaver and Basilisk II, those are non-Mac apps, and at least on Linux, there's no requirement for an emulator (like qemu) to map address zero in the host to offer a usable zero page in the guest; it's a convenience depending on how you write the emulator, but it's by no means needed. I don't think this is true of other host platforms either, but I'm less familiar with those.


Could this be solved by writing an operating system in Rust?


Yes, mostly, although that's a fairly big undertaking, and any code that uses unsafe blocks, or legacy C code (think proprietary drivers), or the like is going to be a weakness. mmap_min_addr is a mitigation measure: it says, OK, maybe we'll dereference NULL sometimes, but we're going to make totally sure that there's nothing mapped there, and certainly nothing that userspace put there. So if we do dereference NULL, it's just a crash, not execution of userspace-provided code in kernelspace.

And it's the sort of thing that you can turn on in an existing kernel. Rewriting the whole kernel in Rust is a major (but worthwhile) project; adding this restriction is, in its simplest form, just adding `if (addr < 4096) return -EFAULT` to the page-allocation routine. As another commenter pointed out, OS X does have this restriction for 64-bit binaries now.



