Hacker News

Generally speaking many C++ game engines avoid the STL stuff and reimplement their own more predictable containers, often with custom allocation schemes. The engine at the last game company i worked at, for example, had its own containers and memory allocator and allowed you to define the allocation category and pool per allocator and per object class (so, e.g., dynamic strings would be isolated to their own pool to avoid fragmenting the heap).
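The per-category pooling described above can be sketched as a minimal fixed-size block pool in C - a hedged illustration only, with all names invented here rather than taken from any real engine:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Illustrative sketch: a fixed-size block pool. Giving each object class
 * (or allocation category) its own pool keeps that class's churn from
 * fragmenting the rest of the heap. */
typedef struct Pool {
    unsigned char *memory;   /* backing storage: block_size * capacity bytes */
    size_t block_size;
    size_t capacity;
    void *free_list;         /* singly linked list threaded through free blocks */
} Pool;

void pool_init(Pool *p, void *memory, size_t block_size, size_t capacity) {
    assert(block_size >= sizeof(void *)); /* need room for the free-list link */
    p->memory = memory;
    p->block_size = block_size;
    p->capacity = capacity;
    p->free_list = NULL;
    /* thread every block onto the free list */
    for (size_t i = 0; i < capacity; i++) {
        void *block = p->memory + i * block_size;
        *(void **)block = p->free_list;
        p->free_list = block;
    }
}

void *pool_alloc(Pool *p) {
    void *block = p->free_list;        /* O(1): pop the head block */
    if (block)
        p->free_list = *(void **)block;
    return block;                      /* NULL when the pool is exhausted */
}

void pool_free(Pool *p, void *block) {
    *(void **)block = p->free_list;    /* O(1): push back onto the list */
    p->free_list = block;
}
```

A real engine would layer allocation categories, alignment handling and debug tracking on top, but the core idea is just this: same-sized blocks recycled in place.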

Related, Andrei Alexandrescu had a great talk about allocators in C++ a couple of years ago: https://www.youtube.com/watch?v=LIb3L4vKZ7U

Also related to Orthodox C++: the same engine dropped exceptions and RTTI. I'm not sure about the reason for exceptions, but C++'s RTTI was simply inadequate, so it was replaced with a custom one made using macros (similar to wxWidgets and MFC) that allowed automatic object serialization and reflection. This was used for all saving and loading, for exposing objects to the editor automatically with a common UI, and for exposing classes and objects to the (custom) scripting language with very little setup.
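A macro-based RTTI scheme in the wxWidgets/MFC style might look roughly like this - a minimal sketch with made-up names (the real systems also carry serialization and reflection hooks per class):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Each class gets a static type descriptor holding its name and a pointer
 * to its parent's descriptor, giving dynamic-cast-style checks without
 * compiler RTTI. */
typedef struct TypeInfo {
    const char *name;
    const struct TypeInfo *parent;
} TypeInfo;

/* One macro pair per class: declaration (header) and definition (source). */
#define DECLARE_TYPE(cls) extern const TypeInfo cls##_type;
#define IMPLEMENT_TYPE(cls, parent_ti) \
    const TypeInfo cls##_type = { #cls, parent_ti };

/* walk up the parent chain to answer "is a?" */
static int type_is_a(const TypeInfo *t, const TypeInfo *base) {
    for (; t; t = t->parent)
        if (t == base)
            return 1;
    return 0;
}

/* example hierarchy: Player derives from Entity */
DECLARE_TYPE(Entity)
DECLARE_TYPE(Player)
IMPLEMENT_TYPE(Entity, NULL)
IMPLEMENT_TYPE(Player, &Entity_type)
```

Because the descriptor is plain data, a serializer or script binding layer can walk it at runtime; the stringified class name (`#cls`) is what makes name-based lookup and editor display possible.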

Interestingly, most of the stuff the engine had to reinvent seems to be a first-class citizen in the D language. Andrei's allocators also seem to be available (experimentally) there.

Personally i prefer plain old C (C89 even, although with a few commonly available or easily reproducible extras like stdint) because i see C++ as too complex for what it is worth. However D seems to provide more power with less complexity, and it increasingly makes me want to try it, especially the new "better C" mode that DMD has got (which i think is somewhat the D equivalent of the Orthodox C++ linked from the page).



I felt the same way dipping my toes in C++ for a few years. C99 is definitely my preferred language. But when in Rome...


I'm kind of curious about doing this more often, but it's the lack of clean collections that puts me off.

What do you do regarding collections? (Dynamic arrays, hashmaps)?


> What do you do regarding collections? (Dynamic arrays, hash maps)?

I don't do anything!

I use the most appropriate data structure and the minimal transformation necessary to get the work done. I use maths and higher-level tools to verify my designs, but the implementation, when required to be soft-realtime, needs to exploit as much mechanical sympathy from my target platform as possible.

You might be interested in https://dataorientedprogramming.wordpress.com

Cheers!


Indeed collections are a stinky bit, but fortunately they are not the majority of the code. Generally i either do a "list of pointers" (for example http://runtimeterror.com/rep/engine/artifact/0a8bb29493c782f... from my own C engine) or i define `DECLARE_LIST(type)` and `IMPLEMENT_LIST(type)` macros which declare the types and functions for handling those (note that with the word "list" in both cases i mean a conceptual list of items, not the data structure - in practice it isn't a linked list but a vector). The latter can be faster, more flexible (e.g. you can specify how comparisons are done, so you can use == for simple stuff, strcmp for strings, memcmp for structs or custom calls for more complex structures) and more type safe, but on the other hand it can be very annoying to write, debug and extend, which is why i rarely do it. Another way is to use an include trick where you do something like

    #define TYPE int
    #include "list_template.h"
    #undef TYPE
with `list_template.h` using TYPE wherever a data type would be needed and defining inline (C99) and/or static (C89) functions so that they can be redefined in multiple files (or have a dedicated C file that includes the above header with all data types and an additional macro that enables the implementation). This is basically a way of implementing templates in C.
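The body of such a `list_template.h` could look roughly like this - a minimal guess at the pattern, with TYPE defined inline here so the example is one self-contained file (in real use the includer defines it, and the macro names are my own invention):

```c
#include <assert.h>
#include <stdlib.h>

#define TYPE int   /* normally set by the includer before #include */

/* --- what would live in list_template.h --- */
/* two-level concat so TYPE is expanded before token pasting */
#define LIST_CAT2_(a, b) a##_##b
#define LIST_CAT2(a, b) LIST_CAT2_(a, b)
#define LIST_CAT3_(a, b, c) a##_##b##_##c
#define LIST_CAT3(a, b, c) LIST_CAT3_(a, b, c)
#define LIST_T LIST_CAT2(list, TYPE)              /* expands to list_int */
#define LIST_FN(name) LIST_CAT3(list, TYPE, name) /* e.g. list_int_push */

typedef struct {
    TYPE *items;
    size_t count, capacity;
} LIST_T;

/* static so the same header can be included in multiple C files */
static void LIST_FN(push)(LIST_T *l, TYPE value) {
    if (l->count == l->capacity) {
        /* grow geometrically; error handling omitted for brevity */
        l->capacity = l->capacity ? l->capacity * 2 : 8;
        l->items = realloc(l->items, l->capacity * sizeof(TYPE));
    }
    l->items[l->count++] = value;
}

#undef LIST_FN
#undef LIST_T
#undef LIST_CAT3
#undef LIST_CAT3_
#undef LIST_CAT2
#undef LIST_CAT2_
/* --- end of template body --- */
#undef TYPE
```

After expansion you get a concrete `list_int` type and `list_int_push` function; repeating the include with `#define TYPE float` would stamp out `list_float` and friends the same way.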

The void pointer approach is the simplest and most macro free (despite me using macros here - i'm a bit macro happy sometimes :-P), but at the same time you are limited to pointers. In practice i've found that most of the time this is enough, which is why i still haven't replaced that code. But there are cases where i'd prefer to have a list of structs instead of pointers to structs, both because it is simpler (no need to define a custom free function) and faster (fewer indirections), so i'll most likely replace that code with another approach (probably the macro that defines the types, not the include header).
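The "list of pointers" approach can be sketched as one container of `void *` plus an optional destructor callback - a hedged illustration with invented names, not taken from the linked engine code:

```c
#include <assert.h>
#include <stdlib.h>

/* One generic container works for any heap object, at the cost of an
 * extra indirection and of needing a custom free function for owned items. */
typedef struct {
    void **items;
    size_t count, capacity;
    void (*free_item)(void *);   /* NULL if the list does not own its items */
} PtrList;

void ptrlist_init(PtrList *l, void (*free_item)(void *)) {
    l->items = NULL;
    l->count = l->capacity = 0;
    l->free_item = free_item;
}

void ptrlist_add(PtrList *l, void *item) {
    if (l->count == l->capacity) {
        /* grow geometrically; error handling omitted for brevity */
        l->capacity = l->capacity ? l->capacity * 2 : 8;
        l->items = realloc(l->items, l->capacity * sizeof(void *));
    }
    l->items[l->count++] = item;
}

void ptrlist_destroy(PtrList *l) {
    if (l->free_item)
        for (size_t i = 0; i < l->count; i++)
            l->free_item(l->items[i]);
    free(l->items);
    l->items = NULL;
    l->count = l->capacity = 0;
}
```

The limitation mentioned above is visible here: the elements are always pointers, so a list of small structs pays a pointer chase per access that a typed list would avoid.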

But if there is a single feature i'd like to see brought from C++ to C, it would be templates, even if they are single depth. I don't even care about classes or the other stuff (classes are nice to have, but not necessary as long as the compiler can figure out that template parameters to structs and functions with the same name refer to the same type when used together).


> dynamic strings would be isolated to their own pool to avoid fragmenting the heap

Strings have arbitrary sizes. How does pooling them together reduce fragmentation? Do they always come and go in groups?


I think it reduces fragmentation in the other pools, not the string pool.


Bingo. This helps restrict the fragmentation from dynamic strings to one pool of memory, allowing the others to remain nicely packed, aligned, etc (whatever your ideal for them is).


> Generally speaking many C++ game engines avoid the STL stuff and reimplement their own more predictable containers

This seems a little like cargo cultism. I wonder if any of these shops regularly measure the performance of their custom containers and compare with the standard library on a modern optimizing compiler and make a reasoned judgment that it's still currently worth the trade-offs to stick with their own stuff.


IME most of it is due to how bad C++'s builtin allocator support is. C++11 helps a little, but a lot of stuff you might want to do seemed to be impossible last time I looked. This is awful, since games often make heavy use of pools and arenas.

Part of it is also for cross-platform consistency: no chance for bugs caused by using a different stdlib, etc. This is worth it when a lot of the toolchains for consoles are arcane and have the chicken/egg-ish problem of often having poor STL implementations, since they expect everybody to implement their own.

I've been out of game dev for a while though. These days I'd expect using the C++ stdlib to be more common, and mallocs are faster now (although even if you're bundling e.g. jemalloc, I imagine you still get a substantial benefit from using pools or arenas in many cases).
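An arena of the kind games lean on can be sketched as a linear (bump) allocator - a minimal illustration in C, assuming the backing memory is suitably aligned; not a production allocator:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Allocation is a pointer bump; everything is freed at once by resetting
 * the offset (e.g. at the end of a frame). */
typedef struct {
    unsigned char *base;
    size_t size;
    size_t used;
} Arena;

void arena_init(Arena *a, void *memory, size_t size) {
    a->base = memory;
    a->size = size;
    a->used = 0;
}

void *arena_alloc(Arena *a, size_t n) {
    /* round the offset up to 16 bytes; assumes base itself is aligned */
    size_t aligned = (a->used + 15) & ~(size_t)15;
    if (aligned + n > a->size)
        return NULL;                 /* out of space */
    a->used = aligned + n;
    return a->base + aligned;
}

void arena_reset(Arena *a) { a->used = 0; }  /* "free" everything at once */
```

There is no per-allocation free at all, which is exactly why arenas are hard to express through the stdlib's allocator interfaces: containers expect to deallocate individual objects.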


Older implementations of STL had a lot of issues, EA wrote their own version way back when to address them with a list of whys here: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n227...


Right, but that was a long long time ago, 64 bits long. Even calling it "STL" nowadays kind of dates someone. I wonder how regularly EA currently does bake-offs between their stuff and the standard library. Not that it would matter at this point--they probably have so much legacy code depending on it that it would be painful to switch.


You can clone the repo and run the benchmarks yourself. When I last did it it was quite mixed (compared to libc++ on a MBP) - a lot of things were the same or slightly faster in EASTL, occasionally some things were an order of magnitude faster, some things an order of magnitude slower. Those were in what I would consider "edge cases" rather than general usage, for which the two were fairly similar.

As other comments have said though, the main reason it's used (and the reason I was interested even though I don't do games) is because the allocation story in the stdlib sucks.


My recollection (which could be wrong, it's been a long time) is that the white papers Microsoft freely distributed for XBox 360 development said specifically to not use any STL containers except contiguous memory ones and probably not those either without custom allocators. The penalties for cache misses/pipeline flushes were very high and naive STL usage made both those things happen a lot in real games (Microsoft would step in and help developers and would do post mortems on things they did to improve performance).


You might also do this to improve debuggability and unoptimised build performance. The VC++ stdlib is particularly bad, as its authors have gone down the ultra-DRY rabbit hole even for simple stuff - a pain to step through, and it relies entirely on the optimizer doing its thing.

(libstdc++'s vector looks sensible in this respect - a good decision on their part. Haven't looked at any other aspects of it though.)

At one point the contents of vectors were often inconvenient to examine in just about every debugger, because you'd have to type out some infeasibly long expression to get at them, "vec._Mybase._Myval._Myptr[0]", that kind of thing, which you could also fix by writing your own container and simply calling your pointer field something like "p". (Same goes for smart pointers.) Luckily this is much improved in the latest Visual Studio but it may still be an issue elsewhere.


> I wonder if any of these shops regularly measure the performance of their custom containers

Yes, while it wasn't often, we sometimes did benchmark tests to improve performance when bottlenecks were found, especially towards the game's release when we were focusing on optimizations. IIRC we did some minor changes in the dynamic array container and we rewrote the hashmap and hashset implementations. One of the programmers wrote a performance test comparing several algorithms both with synthetic and real data (from the case that created the bottleneck).

> compare with the standard library on a modern optimizing compiler and make a reasoned judgment that it's still currently worth the trade-offs to stick with their own stuff.

There are other reasons to use a custom container than just the pure performance of the container itself. One is using a different allocation scheme, as in the example i gave in the grandparent post; another is a friendlier API (see `find` and friends) and more features. An important one in our engine was support for the custom RTTI that was used for object serialization and by the scripting language, which worked with and exposed those containers directly - the container, the RTTI implementation and the scripting runtime had to have intimate knowledge of each other to work transparently (especially once the editor entered the picture, where you could create new entries - often objects, but sometimes structs or other data types - by editing the array directly in a property editor).

Of course not all engines do that and TBH most of the performance and memory related bits are more relevant to consoles than (desktop) PCs (the API friendliness and RTTI stuff are platform agnostic though :-P). At the previous gaming company i worked at, the engine used standard containers. Also AFAIK the engine used by the Two Worlds games also uses standard containers (based on some of their developers' comments).

Personally when i write C++ i implement my own containers not because of performance but simply because i dislike the STL API - for example i want to have "Find", "IndexOf", "Swap", etc methods in the container itself :-P. Sadly it seems that i'll also need to do the same if i decide to start working with D seriously, since D's standard library seems to more or less copy the STL API style.


> I wonder if any of these shops regularly measure the performance of their custom containers

You don't know? Maybe you should find out before slinging around accusations of cargo cultism.


Of course I don't know--that's why I said "I wonder". I have, however, worked in non-game-developing companies who eschewed the standard library, and when pressed the argument boiled down to: someone who worked here long ago said the STL was slow and therefore we don't use it. I am _wondering_ if it's the same story at other companies.



