Hello all, I fear I’m developing a sort of mental complex as I build up my BPs. I increasingly struggle to decide whether to make new variables local (only within a function) or “global” (class variables). When a piece of data is only needed within one function, it’s a no-brainer. But when that data needs to be passed in and out of functions, it gets complicated quickly. It often results in an excessive number of IO pins and routing, up and down nested hierarchies, across from one end of a program to the other. A big, cluttery mess.

So sometimes I give in and just make it a class variable, accessible to all functions. Organization-wise it’s not so bad, because I can just tuck it away in a category. My nagging fear, though, is that too many class variables, especially larger arrays and structs, can start to increase compile times. And indeed, compile times do increase as my BPs get bigger; I just don’t know exactly why, so I superstitiously blame too many class variables. Then again, the complications of NOT using them might actually be worse for compile times. Maybe there are memory concerns as well? I assume local var data is destroyed after the function exits, but I’ve noticed that class var data can hang around. That’s no good.

Then sometimes I suspect that it’s just lazy to use class vars, and one should strive for some sort of BP structure that doesn’t require them. But if there is a way to do that without it becoming a total rat’s nest, I haven’t found it. It’s a million times easier to pass data around within a BP via class variables. Maybe I should just use them by default unless I know for certain I only need a var within one function? But then you end up with functions that have no IO pins, because they just operate on the global (class) variables, and that seems wrong and bad for readability.

So, any insight from y’all to keep me from going insane?
Does anyone actually know the “right way” to approach this, or the true advantages/disadvantages? Thank you for any input.
I would think your compile times going up is not related to how you manage your variables, but actually, for example, to how many copies of the blueprint you have in your level. Strange, but true.
In any event, it’s generally accepted that the more variables you keep local, the better. But that means passing them as parameters, which introduces a slight inefficiency because they have to be moved via the stack. It wouldn’t really have any noticeable effect, though, unless you’re passing hundreds or thousands of parameters.
Also, have you got the YouTube ‘infection’ of using functions where an event is actually needed? Basically, if the process returns a value, then it’s really a function; otherwise it’s a procedure, i.e. an event. Most of the stuff I see on YouTube uses functions where an event is actually what’s needed.
If you are using events, you could excuse using global ( to the blueprint ) variables. Although specifying them as parameters leads to better readability.
One thing’s for sure though, if you intend to reuse a function, then it should go in a function library and have local variables.
I haven’t noticed the thing about compile times and number of BP instances. That is interesting. To me compile times seem to depend most on simply the amount of “stuff” present in the functions you’ve modified since the last compile. Just a layman’s observation.
About passing parameters, I do worry about performance sometimes, since arrays and structs are always copied when passed. It seems like making them global would result in better speed/memory use.
As for the infection… yes, I may have it… but not from YouTube. More because: 1. the function might not return anything, but it changes a class var and needs to be called multiple times; and 2. function encapsulation comes with local vars and a new dedicated graph, which you don’t get with events.
Function libraries are good, and they aren’t technically restricted to local vars: they can access class variables of the calling BP by casting the world context node. You’re saying this is a no-no? There are times when it seems inevitable.
Thanks for replying.
You could pass by ref.
You can still do this with an event
and this…
Yes, you can get around partitioning, but generally speaking more isolation is better.
Also, generally, it’s all up to you at the end of the day
I’m just thinking, if your BPs really get this complicated, it might be time to split them up, and use interfaces. Then you really do have to separate things
Obviously you can call an event multiple times, my bad there. But how do you get a dedicated graph for an event? A collapsed graph is OK, but it can’t be brought up separately from its parent, and there are still no local variables.
I didn’t know about the pass by ref option, that’s great.
Alright, alright, little game changer there. But what exactly is better about that than just making a function with no return? I read some things about replication and the ability to have multiple exec inputs, but those don’t seem to apply to my case, and I’d much rather have the local vars in a lot of cases.
I can’t really say much about all the pros and cons as far as UE is concerned. But it always just makes me wince a bit when I hear a YouTuber say “so, we’ll make a function to do that”. Notice the word ‘do’ there.
Has Return: Function
Functions: Heavy usage of Local vars
Events: Can have parameters, but I try to limit them to netcode-related uses.
Example…
90+ percent of the heavy work is done in functions. Helper function calls inside of the main “action” functions.
On the main (event) graph, the output parameters of impure function calls can store data, including references to other BPs, until the actor that executed them is destroyed or the value is set to None.
I personally have not been able to prove this using Memory Insights. I mean, I know the data persists, but I don’t know if it prevents an actor from being GCed (for all those who like GetAllActors…).
Here is the article: The easiest way to memory leak in Unreal Engine | by Igor Karatayev | Medium
Some insight or debate would be awesome.
That is very interesting. And from 2022, even. If this is true, wouldn’t it mean one should try to avoid calling any function with a return value from an event (especially if it returns an object reference, which could go sour)? They would all create implicit ubergraph variables and basically be wasted memory (unless it was needed for a callback). That would be a pretty bizarre design, IMO. But if it’s true, a way around it would be to use only non-returning functions in events, and have those functions alter class variables instead of returning values. Which I’m being told not to do, because at that point, having no returns, they should just be events, not functions. But now, learning this, it seems that if any of those events called a function with return pins, you’d be back to polluting the ubergraph. In that case the best practice would actually often be to use functions with no returns within events, and make sure to encapsulate any functions that do have returns within those non-returning functions. Gah…
This made me curious so I made some tests.
I’m no expert in super low-level memory management, but my experiments lead me to believe they are properly garbage collected, despite leaving a “GetAllActors” output pin in the wild.
I made the following C++ classes:
UCLASS()
class ATestActor : public AActor
{
    GENERATED_BODY()
public:
    // Plain struct member whose constructor/destructor log, so we can see
    // exactly when the actor's memory is created and torn down.
    struct FDummy
    {
        int32 A = 0;
        FDummy()
        {
            UE_LOG(LogTemp, Warning, TEXT("Dummy constructor"));
            A = FMath::RandRange(42, 9999);
        }
        ~FDummy()
        {
            UE_LOG(LogTemp, Warning, TEXT("Dummy destructor"));
        }
    } Dummy;
};

UCLASS()
class ATestReferences : public AActor
{
    GENERATED_BODY()
public:
    // Deliberately NOT UPROPERTY(): neither pointer keeps the target alive,
    // and neither gets nulled by the GC, so we can observe staleness.
    ATestActor* RawPtr;
    FWeakObjectPtr WeakPtr;

    UFUNCTION(BlueprintCallable)
    void SetRef(ATestActor* In)
    {
        RawPtr = In;
        WeakPtr = In;
    }

    UFUNCTION(BlueprintCallable)
    void TestRef()
    {
        UE_LOG(LogTemp, Warning, TEXT("------------------------------"));
        UE_LOG(LogTemp, Warning, TEXT("RawPtr: %p"), RawPtr);
        UE_LOG(LogTemp, Warning, TEXT("IsStale(OrPendingKill): %i"), WeakPtr.IsStale(true));
        UE_LOG(LogTemp, Warning, TEXT("IsStale(Killed): %i"), WeakPtr.IsStale(false));
        UE_LOG(LogTemp, Warning, TEXT("Dummy = %i"), RawPtr->Dummy.A);
    }
};
Then the following blueprint, child of ATestReferences.
I used functions to spawn, assign and destroy, to avoid wild pins.
I first tested without the red part, then with the red part.
spawn/assign
destroy
Got the exact same result in both cases, which is the following:
The actor is spawned. The raw pointer points to its address, the weak pointer is valid, and the FDummy constructor has been called, so it has a value. GetAllActorsOfClass has been called once (at BeginPlay) and retains its output for subsequent timer calls:
DestroyActor() has been called. The raw pointer isn’t changed (obviously). The weak pointer says the object still exists but is pending kill. The print string that uses the old output from GetAllActorsOfClass is still printing the object:
20 seconds later, GC has kicked in. The FDummy destructor is called. The weak pointer says it’s completely stale. The output pin array still has one element, but is unable to print the name of the object anymore. Only the value of Dummy remains: when memory is freed there is no reason to zero or change it, so the value is still sitting at that specific memory address even though it’s no longer considered valid memory:
After messing around doing stuff in the game, that value eventually changed, which indicates the memory block has been reused for something else:
So as far as I’m concerned, that output pin is not creating memory leaks by persisting references. It is true that it retains values from previous calls, but as far as I can tell it uses weak references that do not prevent garbage collection of the pointed-to objects.
This was done in 5.1 btw. It could be a recent development, I have no idea since I’ve never tested this before.
@Chatouille Thank you for taking the time.