
A lot -- maybe not a majority, but a sizeable number of people -- think that dynamic linking is potentially exploitable due to its complexity, that it has real-life deployment problems (e.g. due to applications depending on a specific version of a library, or simply due to complacency in building and distribution), and that the advantages it offered twenty years ago are offset by larger hard drives, better and faster network connectivity, and better updating systems.

In other words, that its cost is no longer as easy to justify as it was back in the 1990s.

I have not studied the problem enough to be able to comment on it in its entirety, but there is at least some merit to a few of these claims:

* Static linking was a huge nuisance back when Unices introduced dynamic linking because keeping a system up to date was a very different affair. There was no apt-get update, apt-get upgrade. The OS vendor (often the hardware manufacturer) would keep its system up to date. For third-party software, you often depended on building programs from source (oh, yes: no autotools/cmake/whatever to deal with junk Makefiles, either, although this was partly offset by the fact that a lot of developers still knew how to write a makefile). An update to a single library could mean a bunch of manual rebuilds, sometimes followed by manual deployment. This is no longer the case, really. It is a little inconsistent that we do daily builds for continuous integration, but insist that there's no frickin' way we're going to be able to rebuild packages that depend on a library and distribute them on time. The problem is certainly tractable, albeit at higher resource expense (imagine what an update to glibc would entail: rebuilding and redistributing nearly every binary on the system).

* On the other hand, it's not like dynamic libraries have fulfilled the promise of never using an out-of-date version ever again. It's not at all uncommon for programs to bundle their own version of shared libraries. A while back, when I first came across material discussing this problem, it turned out that a lot of programs on my system did it -- OpenOffice is the one I most distinctly remember, but there were others, too. As for other operating systems where package managers are less common (cough Windows cough), this situation is pretty much the norm when it comes to any library that Windows Update doesn't take care of.

* The linking process is extremely complex, and it has been found to be vulnerable. The vulnerabilities were patched, however. There is always a degree of uncertainty in affirming that vulnerability is inherent to complexity. Plus, if it is, then we have much bigger things to worry about, like that huge pile of code in the kernel, which is orders of magnitude more complex than a dynamic loader.

Edit: I guess the best way to sum up my (current) understanding of the matter is that the case for static linking isn't as weak as it was a long time ago, but I don't think the case against dynamic linking is spectacular enough to be worth a full migration of everything. That dynamic linking's usefulness is diminishing, at least in some fields, is sufficiently demonstrated by e.g. Go's adoption of static linking. But I doubt that going back to static linking is the universal solution that it is sometimes advertised to be.



>offset by larger hard drives, better and faster network connectivity, and better updating systems.

Static linking puts more pressure on RAM and on each level of cache than dynamic linking does: if two running statically linked programs use the same library, there are now two copies of its code contending for those resources, instead of one shared copy.

That strikes me as more important than increased use of the resources you list.
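The sharing is observable on Linux (a quick sketch; it relies on /proc, so it is Linux-specific):

```shell
# Start two dynamically linked processes, then compare their libc mappings.
# Both map the same libc file, and the kernel backs the read-only text
# pages of that file with a single physical copy shared by both.
sleep 30 & P1=$!
sleep 30 & P2=$!
grep -h 'libc' "/proc/$P1/maps" "/proc/$P2/maps" | awk '{print $6}' | sort -u
kill "$P1" "$P2"
```

With static linking, each binary would instead carry its own embedded copy of the same code, so the kernel could not merge those pages.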


That's not always true: you don't need to have two full copies of the library at all times. See Geoff Collyer's explanation here: http://harmful.cat-v.org/software/dynamic-linking/ (but note that the memory use that he cites may not be that relevant).
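One part of that explanation is easy to check directly: a static linker pulls in only the archive members a program actually references, not the whole library. A minimal sketch (assuming gcc and ar; all file and symbol names here are made up):

```shell
# Two objects go into the archive, but the program references only one.
cat > a.c <<'EOF'
int used(void) { return 42; }
EOF
cat > b.c <<'EOF'
int unused(void) { return 7; }
EOF
gcc -c a.c b.c
ar rcs libdemo.a a.o b.o

cat > main.c <<'EOF'
int used(void);
int main(void) { return used() == 42 ? 0 : 1; }
EOF
gcc main.c -L. -ldemo -o main

nm main | grep -q ' used'   && echo "used is linked in"      # a.o was pulled in
nm main | grep -q ' unused' || echo "unused is absent"       # b.o never made it in
```

So a statically linked binary only duplicates the routines it calls, not the entire library.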

I am also not convinced that this is a significant impediment for all workloads. For a lot of applications, the time wasted due to inefficient cache use is a fraction of the time spent waiting for stuff to be delivered over the network.



