
How can you achieve any level of security and safety with untrusted cooperative apps? Any app can hold the CPU for an indefinite amount of time, possibly stalling the kernel and other apps. There's a reason we use OSes with preemptive scheduling: any misbehaving app can be interrupted without compromising the rest of the system.


I remember Microsoft Research had a prototype OS written entirely in .NET sometime in the mid-2000s; IIRC it used pre-emptive multitasking but didn't enforce memory protection in hardware. Instead, the compiler was a system service: only executables produced (and signed?) by the system compiler were allowed to run, and the compiler promised to enforce memory protection at build time. This made syscalls/IPC extremely cheap!

I think a slightly fancier compiler could do something similar to "enforce" cooperative multitasking: insert yield calls into the code wherever it deems necessary. While the halting problem is provably unsolvable in the general case, there is still a class of programs for which static analysis can prove that the program terminates (or yields). Only programs that can't be proven to yield/halt need yield calls inserted in this fashion.
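To make the idea concrete, here is a minimal sketch in Python, where a generator's `yield` statements stand in for the yield calls such a compiler would insert, and a trivial round-robin loop stands in for the kernel scheduler (all names here are illustrative, not from any real OS):

```python
def untrusted_sum(n, yield_every=1000):
    """A long-running loop with yield points "inserted" every few iterations."""
    total = 0
    for i in range(n):
        total += i
        if i % yield_every == 0:
            yield  # control returns to the scheduler here
    return total

def run_round_robin(tasks):
    """Trivial cooperative scheduler: resume each task in turn until all finish."""
    results = {}
    while tasks:
        for name, task in list(tasks.items()):
            try:
                next(task)
            except StopIteration as done:
                results[name] = done.value  # generator's return value
                del tasks[name]
    return results

results = run_round_robin({"a": untrusted_sum(10_000), "b": untrusted_sum(5_000)})
```

Each task only runs until its next inserted yield point, so even a task that loops for a long time can't monopolise the scheduler between yields.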

You can also just set up a watchdog timer to automatically interrupt a misbehaving program.
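As a rough, Unix-only sketch of the watchdog idea in Python, using `SIGALRM` as a stand-in for a hardware timer interrupt (all names are illustrative):

```python
import signal

class WatchdogTimeout(Exception):
    pass

def _on_timeout(signum, frame):
    raise WatchdogTimeout("task exceeded its time budget")

def run_with_watchdog(fn, seconds):
    """Run fn(); if it doesn't finish within `seconds`, interrupt it."""
    old_handler = signal.signal(signal.SIGALRM, _on_timeout)
    signal.setitimer(signal.ITIMER_REAL, seconds)
    try:
        return fn()
    except WatchdogTimeout:
        return None  # the misbehaving task was forcibly interrupted
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)  # disarm the timer
        signal.signal(signal.SIGALRM, old_handler)

def misbehaving():
    while True:  # never yields, never returns
        pass

interrupted = run_with_watchdog(misbehaving, 0.05)
well_behaved = run_with_watchdog(lambda: 42, 1.0)
```

The handler fires between bytecode instructions, so even a `while True: pass` loop that never yields gets interrupted.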


You might be thinking of Midori. Joe Duffy has written a lot about it on his blog: <https://joeduffyblog.com/2015/11/03/blogging-about-midori/>

They forked C#/.NET, taking the async concept to its extreme and changing the exception/error model, among other things.

There are several other OS projects based on Rust, relying on the memory-safety of the language for memory-protection. Personally, I think the most interesting of those might be Theseus: <https://github.com/theseus-os/Theseus>


The second thing you describe is common in user-space green-thread implementations in various languages. Using it pervasively across the entire OS is just taking that idea to its logical conclusion.

For performance, though, I don't think halting analysis is needed. Even if the compiler can prove a piece of code terminates, that doesn't help if the inserted yield points occur too infrequently. If a piece of code is calculating the Fibonacci sequence the naïve recursive way, a proof that it terminates is worthless, because it will terminate far too slowly.
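To put numbers on that: naïve recursive Fibonacci provably terminates, yet its call count grows exponentially with n, so "proved to terminate" says nothing about how long you'd wait. A quick illustration in Python (counter instrumentation is mine, purely for demonstration):

```python
def fib_naive(n, counter):
    """Naïve recursive Fibonacci; counter[0] counts every call made."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

calls = [0]
value = fib_naive(20, calls)
# The function provably terminates, but the number of calls grows
# exponentially with n, so a termination proof says nothing about latency.
```

Already at n = 20 the function makes over twenty thousand calls; each increment of n roughly multiplies that by the golden ratio.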


A general-purpose OS has to be designed so you never have scenarios where code hangs or yields too infrequently, and best-effort insertion of yield points probably won't cut it. Cooperative multitasking within applications works because it operates at a smaller scale, with far less untrusted code involved.


Just thinking out loud here: would it be an option to offload the task of enforcing memory access and cooperative multitasking to something equivalent to LLVM? Then multiple different compilers and languages could plug into it.

As long as all of them have to go through an authorised LLVM-like layer, and that layer can enforce memory safety and cooperation, from the outside it could look like a fairly normal user experience with many programming languages available.


On Smalltalk systems like Squeak or Pharo, the user interrupts a hung thread with a keyboard shortcut. And people don't run untrusted code in their "main" image; they run it in a throwaway VM. The same model could be used here with a hypervisor. That said, nobody uses a Smalltalk system exclusively; it still needs some surrounding infrastructure.



