I'm working on an FPS game AI. I've made the whole behaviour tree and logic, but I want the AI to be realistic and not shoot immediately when it sees the player. So I made a detection meter that slowly fills up, and I need it to fill faster or slower based on how much cover the player has. For that I do many line traces from the AI in various directions, check how many traces hit the player, and fill the detection meter based on that. But it's taking a lot of resources and lagging a lot. Is there a better way to implement it? Here are some screenshots of the blueprint:
Hi, first off: if you want to do heavy looping, Blueprint is not the right choice for performance; C++ is better. But even then, looking at the number of line traces in your image, you might well run into performance issues if you do that for many AIs and many targets (about a hundred line traces per tick in C++ is fine, so that budget covers all your bots together, but much more will push it).
In C++ I would let the player pawn implement IAISightTargetInterface and override the CanBeSeenFrom function (this way you directly use the sight perception sense). Then, depending on how much cover the player has, return a different stimulus strength. In Blueprints you can then read the stimulus strength straight from the perception component and fill your detection meter based on it.
As for determining how much cover the player has, I would rather trace to each bone (or to a number of bones that you choose, maybe also depending on the distance between the player and the bot) and check whether each trace hits the player or something else (e.g. if the AI can only see the left foot of the player, the cover is quite good and I would therefore return a low stimulus strength). If you use C++, this logic goes into the CanBeSeenFrom function, and that is all you need to do: the sight perception sense will use that function instead of doing its own line trace, and you can then read the stimulus strength in C++/Blueprints directly from the perception component, so everything stays in one place.
You can also do those per-bone traces in Blueprints, but that would be slower and, more importantly, you would need custom workaround logic, since you cannot override CanBeSeenFrom there and therefore cannot just use the stimulus strength. So I would not do that.
Hey, thanks for the answer, appreciate it! But I'm not willing to do this in C++ (I'm not very comfortable with it). I'm thinking that instead of doing so many line traces, I could just do an array of line traces from the AI to the player (9 or 10 traces in a matrix shape) and fill the detection meter based on the traces that hit the player. Also, I think you misunderstood: it's not that the AI sees the player only once the meter fills up. Once the AI senses the player successfully, it does the line traces and fills the detection meter, and only when the meter is full does it trigger the behaviour tree. Is there a better way of implementing it in Blueprints?