
I am sure this is true. But I seem to have had good results building static executables and libraries for C/C++ with cmake (which presumably passes -static to clang/gcc). golang also seems to be able to create static executables for my use cases.
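For what it's worth, CMake does not pass `-static` by default; you usually have to request it per target. A minimal sketch (the target name `app` is a placeholder, and fully static linking requires static versions of libc etc. to be installed, e.g. glibc-static or musl):

```cmake
cmake_minimum_required(VERSION 3.13)
project(app C)

add_executable(app main.c)

# Ask the compiler driver for a fully static link.
target_link_options(app PRIVATE -static)
```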

Unless static linking/relinking is extremely costly, it seems unnecessary to use shared libraries in a top-level docker image (for example), since you have to rebuild the image anyway if anything changes.
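As a sketch of that top-level image idea: with a statically linked binary the final stage can be empty, so there are no shared libraries in the image to rebuild against (the binary name `server` is a placeholder):

```dockerfile
# Build stage: CGO_ENABLED=0 keeps the Go binary from linking
# against the base image's libc.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /server .

# Final stage: nothing but the static binary.
FROM scratch
COPY --from=build /server /server
ENTRYPOINT ["/server"]
```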

Of course if you have a static executable, then you might be able to simplify - or avoid - things like docker images or various kinds of complicated application packaging.



> I am sure this is true. But I seem to have had good results building static executables and libraries for C/C++ with cmake (which presumably passes -static to clang/gcc). golang also seems to be able to create static executables for my use cases.

Depends on what you link with and what those applications do; I would also check the end result. Go in a Docker container is the best case as far as compatibility goes. Docker means you don't need to depend on the base distro, and Go skips libc and provides its own network stack: it even parses resolv.conf and runs its own DNS client. At that point, if you replaced the Linux kernel with FreeBSD you would lose almost nothing in terms of functionality. So it is a terrible comparison for an end-user app.

If you compile all GUI apps statically, you'll end up with a monstrous distro that takes hundreds of gigabytes of disk space. I say that as someone who uses Rust to ship binaries, and my team already had to use quite a few nasty hacks that walk the edge of rustc's ABI incompatibility to reduce binary size. It is doable, but would you be willing to wait hours for every single update?
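I can't speak to the specific hacks the author's team used, but the usual (non-hacky) first steps for shrinking Rust binaries are release-profile settings along these lines:

```toml
# Cargo.toml: common size-reduction settings for release builds.
[profile.release]
opt-level = "z"     # optimize for size rather than speed
lto = true          # link-time optimization across crates
codegen-units = 1   # better optimization, slower compiles
strip = true        # strip symbols from the final binary
panic = "abort"     # drop the unwinding machinery
```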

Leaving that hypothetical aside, the reality is that for games and other end-user applications, binary compatibility is essential if Linux (or even any single distro) is to be a viable platform where people can confidently distribute closed-source programs. Otherwise it is a ticking time bomb, and it explodes regularly: https://steamcommunity.com/app/1129310/discussions/0/6041473...

The incentives to create a reliable binary ecosystem on Linux are not there. In fact, I think the Linux ecosystem creates the perfect environment for the opposite:

- The main economic incentive comes from server providers and some embedded systems. Both of those cases build everything from source and/or rely on a limited set of virtualized hardware.

- The cultural incentive is not there, since many core system developers believe that binary-only software doesn't belong on Linux.

- The technical incentives are not there, since a Linux desktop system is composed of independent libraries, written by semi-independent developers, that are only guaranteed compatible with the other libraries released in the same narrow slice of time.

Nobody makes Qt3 or GTK2 apps anymore, nor are they supported. On the Windows side, Rufus, Notepad++, etc. are all built on the most basic Win32 functions, and they get access to the latest features of Windows without requiring huge rewrites. It would be cursed, but you can still make a single Windows app that uses Win32, WPF, and WinUI (three UI libraries from three decades) and you don't need to bundle any of them with the app. At most you ask the user to install the latest .NET.


> If you compile all GUI apps statically, you'll end up with a monstrous distro that takes hundreds of gigabytes of disk space

And yet the original Macintosh Toolbox was 64 kilobytes. Black and white, though, and no themes out of the box.

Even a 1MB GUI library (enough for a full Smalltalk-80, or perhaps a compact modern GUI) would be in the noise for most apps.



