Protecting "critical sections" (shared state) in *Blueprint*: most lightweight strategy?

While trying to integrate two complex marketplace assets, I ran into an issue where two events were overlapping and manipulating the same state at the same time, causing the state to get “corrupted”. I hadn’t realized until then that events in Blueprint really do run concurrently, even though everything runs on a single thread, so these “multi-threading” issues can happen with Blueprint events too.

Searching for “locking” returned 35 pages of hits on the forum; I gave up after 10 pages, as none of them seemed to have anything to do with what I was looking for. I also researched “critical section”, but there is a C++ “critical section” object used in Unreal for “real” multi-threading, so all the hits were about that.

I’ve made a basic example reproducing the issue, along with the “solution” I came up with. I’d like to know whether my solution is actually “safe”, and whether there is a better/more lightweight alternative to it, as I will have to use this everywhere that “shared state” is manipulated non-atomically, now that I’ve realized how “unsafe” events in Blueprints are. The last thing I want is to regularly waste hours or days debugging concurrency issues.

So, here we go. First, the “broken” example:

Now, the “Working Example”:


Basically, what I do here is keep an integer “lock”, and before I start to access the shared state, I increment it. I have assumed that at least the increment node is “atomic”: if two events try to increment the lock at the same time, only one will get “1” as a result, and the other will get “2”. If that is correct, then I think this solution will work. After incrementing the “lock”, the event checks the result of the increment and can tell whether it “got” the lock (the result is 1). In that case it proceeds, and decrements when done. If increments (lock) and decrements (unlock) always match, the lock will never be 0 (free) again until the event that first got “1” decrements it, so during all that time no other event will enter the “critical section”. The order of the decrements between the events should not matter, and the decrement return values are not checked. If the increment (lock) returns more than 1, then the lock is already held, and the event must wait: it still decrements anyway, waits a bit, and then retries. On the one hand, you want the waiting event to proceed as soon as the lock is free, but OTOH, you don’t want to waste masses of CPU on tight busy-wait loops.
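
For reference, here is roughly what that graph does, written as a self-contained C++ sketch. All names are mine, `sleep_for` stands in for the Blueprint Delay node, and `DoProtectedWork` stands in for the nodes inside the “critical section”:

```cpp
// C++ sketch of the counter-based lock described above (names are mine).
#include <chrono>
#include <iostream>
#include <thread>

static int Lock = 0;                    // the shared integer "lock" variable

static void DoProtectedWork()
{
    std::cout << "inside critical section\n";
}

static void TryEnterCriticalSection()
{
    while (true)
    {
        const int Result = ++Lock;      // the Increment node; its return value is checked
        if (Result == 1)                // we "got" the lock
        {
            DoProtectedWork();
            --Lock;                     // the Decrement node (unlock)
            return;
        }
        --Lock;                         // lock already held: undo our increment,
        std::this_thread::sleep_for(    // wait a bit (the Delay node)...
            std::chrono::milliseconds(50));
    }                                   // ...and retry
}

int main()
{
    TryEnterCriticalSection();
    return 0;
}
```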

So, I’m hoping there is a more elegant/performant solution (like a true atomic compare-and-set), possibly coded in C++ (I haven’t learned to use C++ in Unreal yet).

Thanks for the link. The example is “generic”, but my actual use case is “reacting to input events”, so luckily I don’t currently think I’ll have to use this in a Tick or a frequent timer.

I witnessed the first PCs as a wannabe-programmer teen, but assembler was too low-level for me, and there was no free C compiler, so I never coded anything relating to PC interrupts. Therefore, I cannot comment on that.

In my actual case, I have two independent and “incompatible” APIs that I’m trying to get to work together. Both of them present events that are meant to be triggered by the same input events. I’ve modified the code slightly so that one API gets all input events (the “master” API) and delegates to the second API (the “slave” API) as appropriate.

My problem comes from the fact that the “master” API has to invoke two events from the “slave” API in quick succession to get the desired effect. And since both events involve delays and animations, they end up overlapping somehow. The slave API was designed to deal with that by simply using a boolean flag, but on event overlap the second event was just ignored, which might make sense for “input events”, but would not do in my case. So I want the “second event” to wait for the first one to finish, instead of just failing (see the sketch below for what I mean by the flag behavior). The code above allowed me to do that, but it feels rather “clumsy” / heavyweight.
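
This is roughly the pattern I mean; all names here are placeholders of mine, not the asset’s actual code:

```cpp
// My paraphrase of how the "slave" API guards against overlapping events.
class FSlaveApi
{
public:
    void OnSlaveEvent()
    {
        if (bBusy)
        {
            return;                    // an overlapping call is silently dropped
        }
        bBusy = true;
        PlayAnimationWithDelays();     // involves delays, so another event can fire meanwhile
        bBusy = false;
    }

private:
    void PlayAnimationWithDelays() { /* delays + animations */ }
    bool bBusy = false;
};
```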

My assumption about the “Blueprint VM” is that events are executed more or less as functions would be: “child events” are executed “immediately”, but they are “queued” when an event has to “wait” for something. In that case, the VM just goes back to executing the “caller/parent event”, up the “event stack”. At least, that’s what it looks like based on the order of my print strings :D.
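
To make that mental model concrete, here is a toy analogy (my own, not actual engine code) that reproduces the print ordering I observed, with a plain queue standing in for queued latent actions:

```cpp
// Toy model of the print ordering: child runs immediately until it "waits",
// then control returns to the parent and the rest of the child runs later.
#include <functional>
#include <iostream>
#include <queue>

static std::queue<std::function<void()>> DeferredWork;  // stands in for queued latent actions

static void ChildEvent()
{
    std::cout << "child: before delay\n";
    // The Delay node: the rest of the child is queued, control returns to the caller.
    DeferredWork.push([] { std::cout << "child: after delay\n"; });
}

static void ParentEvent()
{
    std::cout << "parent: before child\n";
    ChildEvent();                             // runs immediately, like a function call
    std::cout << "parent: after child\n";     // prints before the child's delayed part
}

int main()
{
    ParentEvent();
    while (!DeferredWork.empty())             // later "ticks" run the queued continuations
    {
        DeferredWork.front()();
        DeferredWork.pop();
    }
    return 0;
}
```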

The cleanest solution I could think of is creating a C++ class that has an FCriticalSection mutex variable in it, exposing it as a Blueprint class, and giving it BP-exposed methods for lock/unlock/trylock.
Then you can just have a variable of that type in your BP class and call the BP methods on it.
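
Something along these lines should work (a minimal, untested sketch; the class and function names are my own):

```cpp
// BlueprintMutex.h -- wraps FCriticalSection for use from Blueprint.
#pragma once

#include "CoreMinimal.h"
#include "HAL/CriticalSection.h"
#include "UObject/Object.h"
#include "BlueprintMutex.generated.h"

UCLASS(BlueprintType)
class UBlueprintMutex : public UObject
{
    GENERATED_BODY()

public:
    // Blocks the calling thread until the lock is acquired.
    UFUNCTION(BlueprintCallable, Category = "Mutex")
    void Lock() { Mutex.Lock(); }

    // Releases a previously acquired lock.
    UFUNCTION(BlueprintCallable, Category = "Mutex")
    void Unlock() { Mutex.Unlock(); }

    // Returns true if the lock was acquired without blocking.
    UFUNCTION(BlueprintCallable, Category = "Mutex")
    bool TryLock() { return Mutex.TryLock(); }

private:
    FCriticalSection Mutex;
};
```

In Blueprint you would then create an instance (e.g. with a Construct Object From Class node), keep it in a variable, and call Lock/Unlock around the shared-state nodes; TryLock plus a retry is the closer match to the counter-based approach from the original post.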