Why doesn't UE utilize STL containers?

I don’t have a lot of experience with UE containers, but I was wondering why every container within STL was re-implemented by UE rather than just using what’s there.
It would’ve been a ‘2birds1stone’ situation if they’d stuck with what’s available. I’m assuming they made some optimisations, but what are they?

First: You mean the stdlib, not the STL (see Google)

Performance optimizations might be one reason, but that is unlikely, as today’s standard library implementations are so well optimized that it’s very hard to beat them with your own general-purpose containers unless they are highly specialized for a single use case. With EASTL there is even an open-source standard library implementation specifically optimized for game development, so UE4 could simply have used that one.

Two reasons come to mind why UE4 should not use the standard library, at least in its APIs:

a) binary compatibility:
Although the Unreal Engine source is available, many people prefer the binary releases for simplicity, as those work out of the box without having to compile the engine first. Unfortunately, closed-source 3rd-party libraries can’t use the standard library anywhere in their APIs without forcing the game to use the exact same standard library implementation that the 3rd-party library used. So if Unreal Engine was linked against one standard library implementation and another 3rd-party library against a different one, and you want to use both UE and that other library, you have a problem: your application can’t conform to the stdlib choices of both libraries at once.

It is critical that your app uses the same standard library (or at least a 100% binary-compatible one) as any 3rd-party lib that exposes standard library containers in its API. Otherwise, the compiler assumes one layout for those data types when compiling the lib and a different one later when compiling the app, so a std::vector created/expected by your app would look different in memory (though not in code) from one created/expected by UE. When UE then tries to access one of your app’s std::vectors, or your app tries to access one of UE’s, that is undefined behavior and will likely end in a runtime crash.

Incompatible standard library implementations are far from a theoretical scenario. For example, each major release of Visual Studio breaks standard library binary compatibility with libraries that were built with older VS versions, GCC has also broken stdlib binary compatibility with older compiler versions several times, and Xcode relatively recently switched its default from GCC’s standard library implementation to Clang’s binary-incompatible one for newer releases of iOS and OS X.

Also, if UE used the standard library in its interface, that binary compatibility problem would prevent you from using any standard library implementation other than the compiler’s default one, so your game could not use, for example, the EASTL. The Android NDK ships with a whole bunch of different standard library implementations, and your choice of one must match the choices of all 3rd-party vendors that use the standard library in their APIs.

b) platform independence:
Usage of the standard library restricts your code to platforms on which a complete standard library is available. Non-embedded platforms suitable for gaming that do not ship with a complete version of the standard library still keep popping up, even nowadays. For example, early versions of Brew (a J2ME competitor that was quite big in the US and Japan right up until the rise of iOS and Android) did not support all of the standard library, and Android up to 2.3 didn’t either, so code that used the standard library was likely not to compile on those platforms.
It makes little sense to invest the effort to replace working containers, which one had to implement for those platforms in the past, with standard library ones now, because who knows when the next such platform will pop up.

Thanks for replying, those seem like really good arguments. I’m sort of confused though, because I thought the whole point of the C++ standard was to make sure that any compiler which follows the specification will produce portable code (if it works in one, it works in the others). I suppose this is true for C++ syntax, but the memory layout of many data types is left up to the implementation (not sure why they would do that).

I don’t think I’m going to get any better replies than that, so thanks!


Well, that is the difference between API compatibility and ABI compatibility.
The first means that you can use the same source code of your application with every compiler, as the interface of the standard library is well defined; if an implementation doesn’t honor that interface, it simply isn’t a proper implementation of the standard library.
The second means that binaries compiled against one version of one implementation are compatible with binaries compiled against another version of that implementation, or even against a completely different implementation. That is not always the case with the C++ standard library, as it is up to an implementation how to achieve the functionality the standard promises, as long as the result is correct as far as the standard is concerned.

Many things are deliberately undefined in the standard, not only in the standard library but even in the language itself. For example, the size of the built-in integer and floating point data types is mostly up to the implementation. The standard only guarantees sizeof(long long) >= sizeof(long) >= sizeof(int) >= sizeof(short) >= sizeof(char), sizeof(long double) >= sizeof(double) >= sizeof(float) and sizeof(unsigned type) == sizeof(signed type), but not the exact sizes. So in the wild, short is normally 16 bit but 64 bit on UNICOS; int is sometimes 16 bit (when compiling for a 16-bit OS), sometimes 32 bit (32-bit OSes, and 64-bit Windows and Unix), sometimes 64 bit (Sparc64, UNICOS); long is sometimes 32 bit (32-bit OSes and 64-bit Windows), sometimes 64 bit (64-bit Unix, Sparc64, UNICOS); there have even been systems around where char was 7 bit or 12 bit.

So why does the standard deliberately leave such important things up to the implementation?
The reasons are performance and flexibility. This way, C++ programs can achieve optimal performance even for extremely low-level code on every hardware and OS, no matter the memory model or the processor. Keep in mind that C++ is a language that can be used for pretty much everything: not only desktop and mobile, but also mainframes and embedded systems, and there are situations where a hardware manufacturer has the choice between an implementation that behaves in a more standard way and one that saves incredible amounts of money through non-standard optimizations. For example, modern cars have literally thousands of embedded computers in their electronic systems, with price and size limitations that don’t always allow implementing things the usual way.

The Unreal Engine’s avoidance of the C++ standard library’s containers is primarily historical. C++ was standardized in 1998, the same year we shipped the first Unreal Engine game.

Back then, the standard libraries were controversial, poorly implemented, and very inconsistent between platforms. They made very aggressive use of new C++ features that hadn’t yet been production-proven, and had some significant problems interoperating with C libraries. For example, std::vector wasn’t guaranteed to be contiguous, so passing an array to a C library like DirectX could potentially involve creating a temporary copy of it.

The standard containers are very stable now, and I don’t think it would be too hard to compatibly evolve UE4 to use them if we chose to, through some #defines and implementations of the existing TArray and TMap operations that could be deprecated over time.


Are there any actual plans to convert to the C++ standard library at some point? I guess there’s no real benefit in transferring, as the current TContainers seem to work fine. This makes the conversion a low-priority item, as it doesn’t really matter?

You called it. There is a general (though not universal) desire to transition to the standard containers and algorithms, but there would be a bunch of work needed to make that work, as well as concerns about fragmenting the codebase by mixing TContainer and std::container styles (and all the necessary interop code that comes with that). Specifically, there are no actual plans in place… at the moment it’s mostly regarded as a ‘UE5’ line of thinking.

I don’t think there’s any desire to support the entire standard library though (iostreams, locales etc.).



If at any point Epic does decide to merge/replace parts of their library with the stdlib, then I, and many others, would greatly appreciate it, even if it takes until UE5.
It often feels like I’m trying to learn two massive libraries at once, since game development is more of a hobby for me and I’m more likely to end up in a regular programming position.

However, I’m glad that I got the official reasoning for why Epic does things their own way.

I, for one, wouldn’t want to have the stdlib inside Unreal.

I completely agree.

My problem with the stdlib currently is that, first, without a custom allocator design you’re going to be in trouble, since the stdlib does not support explicit alignment. Also, in large projects the stdlib makes compile times even worse than what we have currently. And compilers are really bad at inlining the stdlib’s deeply nested function calls.

Something like EASTL would be much more appropriate imo.

BUT really, let’s be honest with ourselves: if it’s not broke, don’t fix it. :slight_smile:

There are pros and cons with the STD Lib, like anything else.


Pros:

  • The STD Library is battle-hardened and used in a wide variety of applications, from games to mobile apps to just about any software in use today.
  • Documentation and examples are widespread.
  • New containers and utilities are continually added (unordered_set, unique_ptr, etc.).
  • Most platforms support the STD Lib natively (PS4, XBOne, etc.).


Cons:

  • Internal memory allocation can be tricky (as Tim pointed out), although it’s far better than it used to be, especially with std::move semantics.
  • Good luck debugging them if you run into an issue.
  • New updates could use C++ features that you currently can’t support due to platform compiler limitations or the like (C++11 adoption was pretty quick, but still took some time).

Overall, I think there is a compelling reason to at least offer some support of the STD Lib in UE5.

All good points. Even if it’s not convenient to use stdlib containers/classes, UE4’s classes could use names similar to those in the stdlib:

TArray::Num could be TArray::size.

Little things like that can also increase productivity, as you won’t need to spend time finding UE4’s way of doing the same thing.
Familiarity should not be underestimated.

This is something I really would like to see.

And can we please have a way to initialize UE4 containers with a specified size via the constructor, like std::vector? That means I could knock over that step in an initializer list and then memcpy into it; doing so has greatly improved execution time when dealing with large data sets (tens of thousands of elements), and it’s part of why I end up with std::vector and TArray mixed in the same project :frowning:

I feel like the UE4 implementations are lacking some of these useful little features.

These sorts of operations are supported via TArray&lt;T&gt;::AddUninitialized or TArray&lt;T&gt;::AddDefaulted (the first doesn’t run the constructor and relies on you knowing what you are doing; the latter runs the default constructor for T). You can also call Empty(Size) to reserve enough room for a specific number of elements if you want to add a few at a time later on but don’t want to reallocate as you go.

Michael Noland

Ahh, I didn’t realize I could use Empty(Size) for that, of course! Thanks. It would be nice to have a constructor variant of the AddUninitialized or AddDefaulted methods (preferably the latter), so that containers can be constructed to size (with their elements constructed) at the point of initialization of the class in which they’re member variables. It’s not the end of the world, but I miss that from std::vector.

Actually, at one point TArray had such a constructor, and it did the equivalent of AddUninitialized. It got removed because it was deemed unsafe not to explicitly mention ‘Uninitialized’ when doing an uninitialized resize. Also, it caused confusion about whether the constructor was resizing or reserving. This was also before AddDefaulted existed.

Familiarity and compatibility are among the motivating factors behind a move to standard containers/algorithms. For some of the reasons already touched upon (like problems with the standard allocator model), it may be the case that we end up with our own ‘UESTL’ which, like EASTL, will be our implementation but which ‘quacks’ like the standard.

In any case, it’s unlikely to happen any time soon.


How easy would it be to make things blueprintable if they’re not using the internal implementations?

It seems like TArray, for example, is built to be blueprintable, but then again other types like TMap and TSet aren’t, so for those it’s not even an issue. Not sure why they aren’t, or if there are plans to make them blueprintable later. Would be nice if they were :slight_smile:

Since you guys brought up UE5: will it be free like UE4? Plus another question: is the STD slower than UE4’s lib?

Making C++ implementations blueprintable (exposing them to the editor) is about as easy as it could be for the most part, since all you need is a UPROPERTY(…), UFUNCTION(…), UCLASS(…), etc. before the appropriate declarations.
UE5 will likely be as free as UE4, as it can be quite awkward to go from one financial model to another, especially if you then ask for a higher price.
I expect the stdlib and UE4’s lib to be similar in performance, seeing as they both use move semantics, though I suppose a specific implementation may potentially not be ideal.