How to best handle late binding issues?

Hello,

This is a problem I have encountered in several projects now, and I still don’t feel I have a good way to tackle it.

As an example:

I have an enemy which, when spawned, generates a “threat” value. This increments over time until the enemy is destroyed.

When the enemy is spawned, GameState binds to the OnThreatChanged event dispatcher for that specific enemy. Then, as the threat value is incremented, GameState receives the notifications and updates the running total of “threat” in the game (there can be multiple enemies, etc.).

The process works - BUT - the code that increments the threat and calls the event dispatcher can execute before the binding code in the GameState blueprint has completed. The result is that the first call to the event dispatcher is effectively missed.

In this very specific case, the enemies have a “threat on spawn” value; let’s say that’s 5. When they spawn, a value of 5 is set on the enemy as its current threat; after that, it increments by a threat-per-second value. In testing, the initial value of 5 never makes it to the GameState.
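Roughly speaking, the setup boils down to something like this in C++ terms (a sketch only - AEnemy, the delegate, and the members are illustrative names, not my actual classes):

```cpp
// Illustrative C++ equivalent of the Blueprint setup.
// Class and member names are made up for the sketch.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Enemy.generated.h"

class AEnemy;

// Passes the sender along so a listener can track threat per enemy.
DECLARE_DYNAMIC_MULTICAST_DELEGATE_TwoParams(FOnThreatChanged, AEnemy*, Enemy, float, NewThreat);

UCLASS()
class AEnemy : public AActor
{
    GENERATED_BODY()

public:
    UPROPERTY(BlueprintAssignable, Category = "Threat")
    FOnThreatChanged OnThreatChanged;

    UPROPERTY(EditDefaultsOnly, Category = "Threat")
    float ThreatOnSpawn = 5.f;

    UPROPERTY(BlueprintReadOnly, Category = "Threat")
    float CurrentThreat = 0.f;

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        CurrentThreat = ThreatOnSpawn;
        // If nothing has bound to the dispatcher yet, this first
        // notification is simply lost - which is the problem above.
        OnThreatChanged.Broadcast(this, CurrentThreat);
    }
};
```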

If I pop a Delay node into the enemy blueprint, with a value of 0.2, this seems to buy just enough time for the binding to complete - but I find this incredibly icky. It’s such an arbitrary guess at a number, and it could be influenced by other things that would still result in the same failure.

The obvious resolution would be to have the binding code report that it is complete before any further actions are taken on the enemy. But to do that, the enemy would then need to bind to an event on GameState, which I don’t really like, as I’m trying to reduce how many “things” know about other “things”, so that they can all just do their jobs independently.
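One pattern that would sidestep the ordering entirely (again, just a sketch - AThreatGameState, RegisterEnemy, etc. are hypothetical names): have the GameState pull the enemy’s current threat at the moment it binds, and store absolute per-enemy values, so a missed broadcast can never corrupt the total:

```cpp
// Sketch of a "bind, then pull" GameState (names are hypothetical).
// Storing absolute per-enemy values and recomputing the sum means a
// notification that fired before the binding existed is covered by
// the pull, and a duplicate notification is harmless.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/GameStateBase.h"
#include "Enemy.h"
#include "ThreatGameState.generated.h"

UCLASS()
class AThreatGameState : public AGameStateBase
{
    GENERATED_BODY()

public:
    void RegisterEnemy(AEnemy* Enemy)
    {
        Enemy->OnThreatChanged.AddDynamic(this, &AThreatGameState::HandleThreatChanged);

        // Pull the value the enemy already has. This catches the initial
        // "threat on spawn" even if its broadcast fired before we bound.
        HandleThreatChanged(Enemy, Enemy->CurrentThreat);
    }

    UFUNCTION()
    void HandleThreatChanged(AEnemy* Enemy, float NewThreat)
    {
        ThreatPerEnemy.FindOrAdd(Enemy) = NewThreat;

        TotalThreat = 0.f;
        for (const TPair<AEnemy*, float>& Pair : ThreatPerEnemy)
        {
            TotalThreat += Pair.Value;
        }
    }

private:
    // Real code would also remove entries when an enemy is destroyed.
    TMap<AEnemy*, float> ThreatPerEnemy;
    float TotalThreat = 0.f;
};
```

With this shape neither side has to wait for the other: the enemy never needs to know the GameState exists, and the order of “first broadcast” vs “binding” stops mattering.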

Has anyone else run into a similar issue, and if so, how did you go about resolving it?

I’ve had this problem.

I agree the delay node is crap - you don’t know how loaded the destination hardware will be.

Two ways I found:

  1. Deliberately code the enemy to wait until the game state is ready. You can set up a timer that watches a variable in the game state, which the game state sets once it has bound (see the sketch after this list).

  2. Put each actor in a different tick group.

So the binding BP would go in an earlier group. I haven’t pressure tested this.
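For option 1, something like this (a rough sketch - bBindingsReady, StartThreat and WaitHandle are made-up names):

```cpp
// Rough sketch of option 1: the enemy polls a flag on the game state and
// only starts generating threat once the bindings are in place.
// bBindingsReady / StartThreat / WaitHandle are made-up names.
void AEnemy::BeginPlay()
{
    Super::BeginPlay();

    // Poll every 0.1s instead of guessing one fixed delay.
    GetWorldTimerManager().SetTimer(
        WaitHandle, this, &AEnemy::TryStartThreat, 0.1f, /*bLoop=*/true);
}

void AEnemy::TryStartThreat()
{
    AThreatGameState* GS = GetWorld()->GetGameState<AThreatGameState>();
    if (GS && GS->bBindingsReady)
    {
        GetWorldTimerManager().ClearTimer(WaitHandle);
        StartThreat(); // safe now: the first broadcast will be heard
    }
}
```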


Hi @ClockworkOcean,

We meet again! :slight_smile:

Thanks for the reply and thoughts. Option 1 was something I had considered, but I did wonder if I might be over-engineering a solution for a minor issue - plus it will invariably occur somewhere else too! I seem to run into this in every project; I can’t help but wonder if perhaps it’s my approach/architecture that’s the problem. I come from a .NET background and have always tried to limit what has access to what, considered how things could/should communicate, and so on. It often feels like everything in Unreal just has access to everything, and perhaps my attempts to avoid that in my own projects lead to these issues. Dunno. Frustrating!

Tick Groups - ooh, something new for me to look at, I’ve not come across these before. Is this effectively affecting the execution order?

If you use blueprint interfaces instead of direct communication, actor A doesn’t know what B is, and has to use ‘access procedures’ to get the results it wants. Much more insular.

The only thing you can’t do with a BPI is binding (one-to-many).
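For reference, this is roughly the C++ shape of a Blueprint interface (names illustrative) - the caller only ever depends on the interface, never on the concrete class:

```cpp
// Roughly the C++ shape of a Blueprint interface (names illustrative).
// A caller depends only on IThreatSource, never on the concrete enemy.
#pragma once

#include "CoreMinimal.h"
#include "UObject/Interface.h"
#include "ThreatSource.generated.h"

UINTERFACE(Blueprintable)
class UThreatSource : public UInterface
{
    GENERATED_BODY()
};

class IThreatSource
{
    GENERATED_BODY()

public:
    // Implemented in Blueprint; called via the generated Execute_ wrapper.
    UFUNCTION(BlueprintCallable, BlueprintImplementableEvent, Category = "Threat")
    float GetCurrentThreat();
};

// Call site - works on any actor that implements the interface:
//   if (SomeActor->Implements<UThreatSource>())
//   {
//       const float Threat = IThreatSource::Execute_GetCurrentThreat(SomeActor);
//   }
```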

Tick groups: you basically get to choose which part of the frame each actor’s code is executed in. I do believe that the setup (initial code run, excluding nodes with latency) happens in one frame.
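In C++ terms it’s just a per-actor setting, something like this (the specific groups are only an example):

```cpp
// Tick groups sketch: run the game state's per-frame work earlier in the
// frame than the enemies'. The specific groups here are only an example.
AThreatGameState::AThreatGameState()
{
    PrimaryActorTick.bCanEverTick = true;
    PrimaryActorTick.TickGroup = TG_PrePhysics;   // early in the frame
}

AEnemy::AEnemy()
{
    PrimaryActorTick.bCanEverTick = true;
    PrimaryActorTick.TickGroup = TG_PostPhysics;  // later in the frame
}
```

One caveat: tick groups order each actor’s Tick within a frame, so I wouldn’t count on them to reorder BeginPlay / binding code.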

If you want total control, then it’s C++, but BP is more fun :slight_smile:

EDIT: Ah, that RobMede! :slight_smile: Sorry, took a while for the penny to drop :laughing:


Hey, glad you remember - it’s been a while :slight_smile:

I am using some interfaces too, but I often use event dispatchers as well.

When I’ve written .NET apps in the past, it’s always been a bit easier to produce diagrams of what can see what, and what’s allowed to interact with what, because at that point I’m designing it all myself. But in Unreal Engine, I have no idea what can already see/interact with other things. I find it a bit irksome that something like a UI widget can use the GetPlayerController function, for example, whereas I would typically have the UI utterly in its own little domain, safely away from everything else and just “receiving” messages to update itself.

What I find it leads to is this ever-present feeling of “doing it wrong”, which is really frustrating! And “doing it the easiest way” (with everything chatting to everything all over the place) just feels, well, dirty :smiley:

There is a logic to what is visible where, and it’s to do with the scope of objects in what Epic considers the typical use of these units (game instance, game mode, player controller, etc.).

I guess the best thing is to assume that things which can see each other are supposed to, and to keep as many of the things you design apart as you can :slight_smile:

EDIT: When I say ‘you’, I mean the programmer. I re-read that and it came across wrong, I think…


Yeah, I guess so… it would be great if there were some kind of architectural diagram for a lot of the “core” components in UE4; I’ve yet to find one…

hehe… it’s ok, I knew what you meant - no harm done :slight_smile: