Hacker News
Unikraft Unikernel Project (linuxfoundation.org)
83 points by iou on July 2, 2018 | 14 comments


Huge effort and a lot of time were spent developing multi-user, multi-application infrastructure (i.e. the standard kernel, FS, networking, etc.). The point was to offer reusable functionality so that the applications on top could get the most out of the hardware. Now they're building in exactly the opposite direction, packaging everything into one artifact where most of the content will probably be duplicated (multiplied). Somehow I am failing to see how that improves or advances anything...


There are use cases where users run several applications on a single machine, which may even be shared by several users: the desktop PC, the mobile phone, the old-school Unix server...

Now we see a use case (call it "microservices" if you want) where people want to run a single application as if it was alone on a machine, and where the concept of "user" doesn't even make sense.

This second use case is what unikernels are for. It may make a lot more sense to run a unikernel application on a hypervisor than a whole OS in a VM, and probably more sense than a container too.


It's all a bit theoretical though, isn't it? All those user ids aren't necessarily human.

Look at a non-trivial application such as Postfix. It's still simple enough to be understood by one individual human, but it's complex enough to make use of file system rights and various forms of interprocess communication to do its job (the pattern is sketched after this comment).

Once you've re-implemented this with unikernels, haven't you re-invented access control and authorization, which you wanted to do without? And yet, you've only got your email transport up. Still a long way to go for a modern collaboration server.

Real-world systems, like airline bookings or ERPs, are orders of magnitude more complex than that. I'm not convinced I wouldn't rather have those use standard components for things like ACLs, process accounting and user ids. They were invented for a reason, and they make it easier to reason about data at rest and about things like backups and process scheduling. My suspicion is that you'd want to reinvent them pretty soon once your system grows beyond that single microservice.
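To make the Postfix example concrete, here is a minimal sketch (my own illustration, not Postfix code) of the privilege-separation pattern such a mail system relies on: a privileged parent forks a worker, the worker drops to an unprivileged uid ("nobody" is just an example account), and the two exchange data over a pipe.

    /* Minimal sketch, not Postfix code: privilege separation with fork(),
     * setuid() and a pipe. Start it as root to see the uid actually drop. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <pwd.h>

    int main(void) {
        int pipefd[2];
        if (pipe(pipefd) != 0) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }

        if (pid == 0) {                     /* unprivileged worker */
            close(pipefd[0]);
            struct passwd *pw = getpwnam("nobody");   /* example account */
            const char *msg =
                (pw && setgid(pw->pw_gid) == 0 && setuid(pw->pw_uid) == 0)
                    ? "worker: handling mail as 'nobody'\n"
                    : "worker: privilege drop failed (not started as root?)\n";
            if (write(pipefd[1], msg, strlen(msg)) < 0) _exit(1);
            close(pipefd[1]);
            _exit(0);
        }

        close(pipefd[1]);                   /* privileged parent only reads */
        char buf[128];
        ssize_t n = read(pipefd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }
        close(pipefd[0]);
        waitpid(pid, NULL, 0);
        return 0;
    }

Splitting a system like this across several single-purpose unikernel images means rebuilding that boundary with different machinery, which is the re-invention concern raised above.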


hwvirt is what most people think of when you say "virtualization", so they try to compartmentalize things at that level. i don't see why Xen/KVM/whatever is fundamentally more secure than something like Illumos Zones or FreeBSD Jails. Linux, unfortunately, doesn't think that way, but that doesn't mean it can't.

i think osvirt is a more efficient abstraction, but the industry clearly has other ideas.


It's more secure due to automatic address space layout randomization. In fact, it's both address space layout and address space content randomization: each application/kernel image is unique.


if your kernel is doing ASLR correctly, i'm not sure doing it again in each virtual kernel will meaningfully increase the entropy of the memory layout.
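For reference, conventional kernel ASLR is easy to observe from userspace; here is a minimal sketch (my example, not tied to any unikernel) that prints a few addresses. Built as a position-independent executable and run twice with randomization enabled, the text, data, stack and heap addresses all change between runs; the claim above is that a unikernel additionally makes the contents at those addresses unique per image.

    /* Minimal sketch: observe ASLR by printing addresses from different
     * memory regions. Build with `cc -fPIE -pie aslr.c -o aslr` and run
     * it twice; with kernel ASLR enabled the values differ per run. */
    #include <stdio.h>
    #include <stdlib.h>

    static int in_data;                /* data segment */
    static void in_text(void) {}       /* text segment */

    int main(void) {
        int on_stack;                  /* stack */
        void *on_heap = malloc(16);    /* heap */

        printf("text : %p\n", (void *)in_text);
        printf("data : %p\n", (void *)&in_data);
        printf("stack: %p\n", (void *)&on_stack);
        printf("heap : %p\n", on_heap);

        free(on_heap);
        return 0;
    }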


That's not the point. It's both ASLR and ACLR. ASLR is not enough for certain vulnerabilities when the attackers know which bits/content are present.


Also, there is no kernel but the unikernel itself in such systems, so I have no idea what you mean by "again" here.


that would be true if the unikernel was running alone on bare metal, but the more likely case is that you are deploying multiple unikernels on top of some hypervisor.

in the case of Xen, you will have linux or *bsd in dom0, which will probably have aslr enabled (a quick way to check is sketched below). so when you launch your unikernel and do aslr, you will be randomizing an already-random memory layout.

my point of contention is with the hypervisor+unikernel model. once you try to increase the tenant density of a particular piece of hardware with virtualization, you lose most of the advantages of removing the os and re-introduce the complexity in a different way.
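For the dom0 side of that, Linux exposes its ASLR mode through procfs; here is a small check, sketched under the assumption of a Linux dom0 rather than anything Xen-specific. The value is 0 (randomization off), 1 (stack, mmap and VDSO randomized) or 2 (heap randomized as well).

    /* Minimal sketch: read the host kernel's ASLR setting on Linux.
     * 0 = disabled, 1 = stack/mmap/VDSO randomized, 2 = heap (brk) too. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/proc/sys/kernel/randomize_va_space", "r");
        if (!f) {
            perror("/proc/sys/kernel/randomize_va_space");
            return 1;
        }
        int mode = -1;
        if (fscanf(f, "%d", &mode) == 1)
            printf("kernel ASLR mode: %d\n", mode);
        fclose(f);
        return 0;
    }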


I see your point about a single application. However, don't you think that, for example, a highly stripped, special-purpose kernel and distribution built with, say, buildroot would do the job equally well? The advantage is that it would run on bare metal (without a hypervisor or any other kind of micro-kernel to manage the microservices). If a second application is ever needed, most of the functionality would be shared and the hardware resources would be used far more efficiently. Also, I wonder whether it would be better to spend those resources addressing the existing issues with OSs (such as insufficient security) instead of giving up and hoping that hypervisors/micro-kernels will do it better.


Our needs have changed. Hardware isn't the bottleneck anymore and copying a few dozen megabytes a few thousand times each is trivial. We are now trying to get the most out of humans. Traditionally we've done this by adding layers of abstraction, but this hides complexity and introduces surface area, bringing security and compatibility risks with it. Nowadays there are very few humans who intimately understand more than a fraction of the layers of a modern (read: trendy) tech stack.

Unikernels let you combine the top 3 layers of the (Hardware -> Hypervisor -> Kernel -> Userspace -> Application) stack into one. That's a big win for reducing complexity. And if unikernels take off, I expect better tools for managing them to be built right in to hypervisors to help restore whatever functionality gets lost in the squish.

Funny thing is, if you squint a little, the resulting (Hardware -> Hypervisor -> Unikernel) stack looks a lot like (Hardware -> Operating System -> Application), which we all know and love from traditional pre-virtualization environments. In a sense, we're not so much rejecting the multi-user model as we are reinventing it for a cloud-based age.

Maybe this is indeed a big waste of time, but I don't think so. When it comes to multi-tenant computing, we operate under radically different assumptions than we did 30 years ago.


A lot of teams have decided avoiding all dependency management is better than mastering it. I'm trying to accept it, as another instance of "worse is better".


In the general case, avoiding a problem is better than solving a problem. No code is better than no code.


When A uses C, and B also uses C but another copy of a different version, that can create problems that the planning work would have avoided. Not only are you exposed to two sets of bugs in C (including bugs already fixed in the newer copy), but C may not be fully forward and backward compatible with its own output (especially in-process).
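A toy illustration of that in-process skew (the struct and field names are made up, standing in for two bundled copies of the same dependency C): component A serializes a record against version 1's layout, component B deserializes it against version 2's layout, and the field added in v2 comes out as garbage.

    /* Minimal sketch of in-process version skew between two copies of the
     * same dependency. record_v1/record_v2 are hypothetical layouts. */
    #include <stdio.h>
    #include <string.h>
    #include <inttypes.h>

    /* dependency C, version 1: the layout component A was built against */
    struct record_v1 {
        uint32_t id;
        uint32_t flags;
    };

    /* dependency C, version 2: the layout component B was built against */
    struct record_v2 {
        uint32_t id;
        uint32_t flags;
        uint64_t created_at;    /* field added in v2 */
    };

    int main(void) {
        unsigned char buf[sizeof(struct record_v2)];
        memset(buf, 0xAA, sizeof buf);   /* whatever was in memory before */

        /* component A writes a record using v1's layout */
        struct record_v1 out = { .id = 42, .flags = 1 };
        memcpy(buf, &out, sizeof out);

        /* component B reads it back assuming v2's layout */
        struct record_v2 in;
        memcpy(&in, buf, sizeof in);

        printf("id=%" PRIu32 " flags=%" PRIu32 " created_at=%" PRIu64
               " (garbage: v1 never wrote this field)\n",
               in.id, in.flags, in.created_at);
        return 0;
    }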



