
Something that I haven't seen brought up yet is the "weird C++ vtable layout." This is the "relative vtable layout" first described here: https://bugs.llvm.org/show_bug.cgi?id=26723, and usable in Clang via the -fexperimental-relative-c++-abi-vtables flag.

The basic idea is that you don't need to waste a whole 64 bits per vtable entry, especially since you can usually assume that code within the same DSO will be within a 32-bit offset of itself. So instead, each entry stores a 32-bit offset from a known address (the vtable's address), and dispatch adds the offset back to recover the function pointer. In the rare case where you need a cross-DSO entry, you just emit a thunk for the symbol inside the same DSO, which gives you an address within 32 bits.



Space for vtables is almost always negligible, especially so on 64-bit targets. So the main effect of inflated vtables is cache footprint. But where that matters most, you probably shouldn't be doing virtual calls anyway.

Compilers don't get to say what you compile. People care about the speed of bad code almost as much as good code, and sometimes more: what bad code wastes, the compiler might be able to give some of it back.

Code that has a preponderance of vtables is usually bad code written by Java transplants who haven't learned the right way to code C++. But that code has to run, too.


Almost, but not always. Fuchsia saw a 1% memory saving (~20 MiB) by enabling it:

https://youtu.be/9HGKlDiJy8E


> Space for vtables is almost always negligible, especially so on 64-bit targets.

From the link: "I can report that a prototype of this was able to shrink Chromium's code size by 9%."


I bet that was pre-linked size. But that corroborates my expectation about Chrome code quality.


Before Java came into the world I remember Turbo Vision, Powerplant, CSet++, OWL, MFC, Motif++, VCL, Tools.h++,...



