I’ve found that when an actor contains, say, 1.5 million+ FVectors in a single TArray, calling functions on a valid pointer to that object often results in a stack overflow as soon as the function is entered. This would make sense if the vectors were allocated on the stack, but they’re stored in TArrays, which allocate dynamically (and thus live on the heap).
It also doesn’t matter if the same total number of FVectors is split across several TArrays, either as separate member variables or inside a struct that is itself a member of the class above. The stack overflow still occurs.
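(For what it’s worth, the container itself is tiny; a quick check like the one below, just a standard UE_LOG call added for illustration, shows that only the TArray bookkeeping, not the element storage, lives inside the actor.)

UE_LOG(LogTemp, Log, TEXT("sizeof(TArray<FVector>) = %d bytes"), (int32)sizeof(TArray<FVector>));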
The member in question is

TArray<FVector> TestVectors;

and it is filled, either with pre-allocation (.AddUninitialized(…)) or without, in BeginPlay or another setup-style function:
TestVectors.AddUninitialized(numVectors);
for (int i = 0; i < numVectors; ++i)
{
    // Fill with random vectors roughly centred on the origin
    TestVectors[i] = FVector(rand() - RAND_MAX * 0.5f,
                             rand() - RAND_MAX * 0.5f,
                             rand() - RAND_MAX * 0.5f);
}
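For reference, the actor boils down to something like the following stripped-down header (the UCLASS boilerplate is abbreviated, the generated-header file name is assumed, and the BeginPlay/Compute layout is only an illustration of the setup described above):

#include "GameFramework/Actor.h"
#include "ObjectPoolComputer.generated.h"

UCLASS()
class AObjectPoolComputer : public AActor
{
    GENERATED_BODY()

public:
    // Heap-backed container: only the TArray header (a pointer plus two int32
    // counts) lives inline in the actor, regardless of how many elements it holds.
    TArray<FVector> TestVectors;

    // Fills TestVectors with ~1.5 million entries using the AddUninitialized() loop above.
    virtual void BeginPlay() override;

    // Kicks off the OpenCL calculations; entering this function is where the
    // __chkstk() stack overflow fires.
    void Compute();
};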
Accessing the object through a valid pointer still results in a stack overflow, hence the question: what is the limit on the total size of an AActor or of its arguments? And how can I allocate large contiguous blocks for millions of vectors? It isn’t actually that much memory; we’re only talking tens of megabytes.
Thanks!
EDIT: At the time of the stack overflow, this is the call stack (posted on Pastebin); the topmost frame is UE4Editor-OpenCL_Plugin_Test-297-Win64-DebugGame.dll!__chkstk() Line 109.
Compute() is a function that kicks off some OpenCL calculations, and invoking it is what immediately triggers the problem: it fails before you can even step onto its first line. It is called not long after the allocation, which happens inside the AObjectPoolComputer object; that object is the one whose functions cause the issue described above.
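To make the sequence concrete, the call site amounts to roughly this (the pointer name and spawn call are illustrative; the overflow happens on the Compute() call itself):

AObjectPoolComputer* PoolComputer = GetWorld()->SpawnActor<AObjectPoolComputer>();
// BeginPlay has already run and populated TestVectors by this point
PoolComputer->Compute();   // __chkstk() stack overflow fires before the first line of Compute() executes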