C++ TArray insane memory usage

I have a huge problem with memory allocation for C++ TArrays. If I use a Blueprint array, it works fast and easily: about 30 ms for an array of 4 016 016 elements and ~16 MB of memory.


But when I try to resize a C++ TArray like this, it takes minutes and more than 30 GB of memory.
in .h:

UPROPERTY(EditAnywhere, Category = DynamicTerrain, BlueprintReadWrite)
TArray<int> HeightMap;

in .cpp:

void ATerrainCreatorBase::InitMap()
{
    HeightMap.SetNum(4016016);
}

What am I doing wrong, and how do I fix it? Help, please.


This article may help:
https://www.unrealengine.com/en-US/blog/optimizing-tarray-usage-for-performance

But are you sure there’s no other way to do what you’re trying to do?
Maybe you could start by changing from int to uint8?

https://docs.unrealengine.com/5.0/en-US/epic-cplusplus-coding-standard/
https://docs.unrealengine.com/5.0/en-US/blueprint-debugging-in-unreal-engine/

I have:

TArray<int> HeightMap;        // 1 008 016 __ 4 016 016
TArray<uint8> MaterialMap;    // 1 008 016 __ 4 016 016
TArray<int> DeltaHeightMap;   // 1 008 016 __ 4 016 016
TArray<int> NewHeightMap;     // 1 008 016 __ 4 016 016
TArray<uint8> ProtectedZone;  // 1 008 016 __ 4 016 016

But it should take (4 016 016 × 4 × 3) + (4 016 016 × 2) = 56 224 224 bytes.
56 MB is really not much. But I still can't understand why UE5 takes 30+ GB for the allocation in C++, and only ~100 MB in Blueprints.

If InitMap() is the problem I’d change:

void ATerrainCreatorBase::InitMap()
{
    // SetNum resizes the array to the given number of elements.
    HeightMap.SetNum(4016016);
}

To:

void ATerrainCreatorBase::InitMap()
{
    // Reserve allocates memory so the array can contain at least Number elements.
    HeightMap.Reserve(4016016);
}

But “.Reserve” doesn’t change the size of the array, and this will crash UE:

HeightMap.Reserve(4016016);
HeightMap[5] = 1; // out of bounds: Reserve only allocates capacity, Num() is still 0
GEngine->AddOnScreenDebugMessage(-1, 15.0f, FColor::Yellow, FString::Printf(TEXT("%d"), HeightMap[5]));

And how can I change the array size so it works properly?

Reserve only reserves the initial capacity:

HeightMap.Reserve(4016016);
HeightMap.Init(0, 4016016);

If you need to resize after that use SetNum().

I tried this:

HeightMap.Reserve(234256);
HeightMap.Init(0, 234256);

and UE used 300 MB for this, but it should use less than 2 MB. For 4 016 016 elements it will use 6+ GB instead of 16 MB. I still have no idea where this crazy memory leak is happening.

If you will always have exactly 4 016 016 elements, try
TArray<uint8, TFixedAllocator<4016016>>
followed by a call to Init(0, 4016016)

… or uint32 if you need that, whichever. uint8 should be 4 016 016 bytes plus whatever overhead data the TArray adds.

Unfortunately, I can’t use a fixed size; different maps need different sizes. But the Blueprint version of the array has no memory problems. I think I have a problem with the declaration or initialization of the array. It’s still one engine, so there should be equivalent ways to declare it, but I can’t find the correct one.

OK, but if they are different maps with different sizes, and you routinely need arrays of the same few sizes, then you can declare those fixed.

typedef TArray<uint8, TFixedAllocator<4016016>> HeightArray;
typedef TArray<uint8, TFixedAllocator<138457>> SomeOtherKindOfArray;

I’m assuming you’ll never be changing their size dynamically; inserting into or removing from a 4-million-element array would be very costly.

Okay, one step closer. When I removed “UPROPERTY”, it started working much faster. One question: is “UPROPERTY” always bad for performance, or do only some specifiers slow it down?

UPROPERTY(EditAnywhere, Category = DynamicTerrain, BlueprintReadOnly)
TArray<int> HeightMap;

Thank you for your help and ideas. I checked each “UPROPERTY” specifier, and it turned out that “EditAnywhere” is fatal to performance.

in .h

UPROPERTY(Category = DynamicTerrain, BlueprintReadOnly)
TArray<int> HeightMap;

in .cpp

void ATerrainCreatorBase::InitMap()
{
    if (HeightMap.Num() == 0)
    {
        ChunksX = FMath::Max(1, ChunksX);
        ChunksY = ChunksX;
        MapSize = ChunksX * 16 + 4;
        HeightMap.SetNum(MapSize * MapSize);
    }
}

and it works like a charm.

Makes sense. I don’t think any of us were considering the editor as the culprit.

IMO, if you do have some common sizes, you would be best served by fixed allocation, though. :slight_smile:

IMO, an array of that size shouldn’t really be exposed directly to Blueprint at all. Many Blueprint functions, and in some cases the VM itself, will frequently copy arrays rather than modify references, so it’s almost always far worse for performance. Event functions, for example, seem to always invoke a copy, even when you pass elements by reference.

Hide it in C++ and provide accessor functions to modify it from BP so you retain efficiency (at the very least, make it read-only). You have no control over how arrays are allocated in BP, so if you are adding/removing/editing the array frequently, you are also reallocating it by default.
