What are the options that I have to implement a loop that runs faster than the game loop?
Say that I need two separate loops: the game loop, and another loop that runs considerably faster, with data shared between them. That is my question in itself, but knowing the goal behind the faster loop may be relevant to the answer: it is a loop that only needs to gather the controller device inputs at a very high rate and save them in tables, together with the time each input value was read.
I would like to know what options I have here, and which of them are the most viable.
This still doesn’t make any sense to me. It is seriously not a good idea, especially for input that is going to be saved into a table anyway. The input system in Unreal is event based, so why not record the inputs when they are triggered? The player presses the “E” key, the input binding fires, and you record it in that table. A million times better than having another loop.
Yeah, I think multithreading is the more natural approach here, and I am preparing to do that.
But since the task here is just to sample the input every 1-1.5 ms, I wonder whether it is possible to use timers within the main loop to do that inside the loop itself. Logically that seems like an alternative option too, although to be honest I have no experience with any of this.
Thanks indeed for your feedback. Let me tell you more about the reasoning behind this admittedly strange decision.
While prototyping my action-based networked game, I found that the acceptable action delay is somewhere between zero and 20-25 milliseconds, which matches common observations in other genres such as networked fighting and shooter games. By delay I mean the local, client-side action delay (nothing to do with network-related delays): the input-to-render period.
Here are a few points that explain the decision to handle input in a separate, faster loop with a time table:
a. For a game running at 120 or 60 fps, sampling input only once per frame can delay capturing an input by up to roughly 8 or 16 ms respectively.
b. I am focusing mainly on the two analogue sticks, treating them as 2D coordinates within the unit circle. The path and speed of the stick’s movement matter, and they are arithmetically misrepresented when the readings are summed only every 16 ms or so (averaging destroys everything that happened in between).
Neither of those points alone was enough for us to go for separate input handling, though;
c. Two systems depend on the shape of the analogue-stick input curve: the network buffering/prediction and local gameplay averaging mechanisms, and the animation-configuration decisions. More specifically, at the moment of their update those two systems need to know exactly what the input table contains from now until 10-15 ms into the future. They need 10-15 ms of “future” input, sampled at high frequency, in order to give reliable results. In other words, the game deliberately delays all actions (gameplay/animation) by 15 ms, while knowing exactly what happens during that period in the “future”, and uses that knowledge to configure the systems in the present.
Actually, this decision was the result of a long period of prototyping with inputs taken in the main loop’s tick at 120 fps.
Now I really think this is the right decision for this game, so much so that delivering the proper gameplay experience may depend on it being implemented well.