The v2.2.1 update of Top Down Stealth Toolkit has gone live on the Marketplace.
Change Log:
Added Unreal Engine v4.19 compatibility.
Removed redundant segments of logic from the Enemy AI & Player Character blueprints.
Bug Fixes:
Fixed an issue with the Stamina Bar being displayed as empty at the start of a level.
Fixed a bug associated with surveillance camera movement in the deactivated state.
Fixed an issue with turrets failing to reacquire their active targets after reactivation from a disabled state.
A free gameplay demo for the v2.2.1 edition of Top Down Stealth Toolkit (Windows) is now available at: Dropbox - File Deleted
All changes within the blueprints are tagged with the Boolean variable ‘Version2_2_1’ to make it easy to identify the alterations made in this update. Comments have also been added to describe the major changes.
Replaced the Line of Sight Visualization system with an improved version that uses a Divide & Conquer rule to detect edges, and thus deliver vastly increased performance over the previous brute force models.
It uses the same workflow employed in my new product, Line of Sight Visualization, and hence the same integration tutorial applies here as well for adding vision cones to custom characters.
As for the performance benefits, you can check out the raycast & vertex counts for multiple configurations in this free gameplay demo: https://www.dropbox.com/s/610m79f69e…ayDemo.7z?dl=0. I’ve also conducted some tests using a normal EQS-driven brute force approach, which, like the previous visualization system, requires many times more raycasts to achieve similar levels of edge smoothness.
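As a rough illustration of the Divide & Conquer idea (a Python sketch, not the Blueprint implementation itself; the `is_blocked` probe stands in for a single raycast): instead of sweeping a dense fan of rays, you cast a sparse fan and binary-search only the angular intervals where visibility flips, so each occluder edge costs a handful of rays rather than a full-resolution sweep.

```python
def find_edge(is_blocked, lo, hi, tolerance=0.01):
    """Binary-search for the angle (in degrees) where visibility flips.

    Assumes is_blocked(lo) != is_blocked(hi), i.e. exactly one occluder
    edge lies somewhere in the interval [lo, hi]. Each call to
    is_blocked stands in for one raycast.
    """
    while hi - lo > tolerance:
        mid = (lo + hi) / 2.0
        if is_blocked(mid) == is_blocked(lo):
            lo = mid  # the flip is in the upper half
        else:
            hi = mid  # the flip is in the lower half
    return (lo + hi) / 2.0

# Hypothetical occluder: a wall blocks every angle from 30 degrees upward.
edge = find_edge(lambda angle: angle >= 30.0, 0.0, 90.0)
```

A brute-force sweep at the same 0.01-degree resolution over that 90-degree span would need roughly 9000 rays, while the binary search finds the edge in about 14.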
I’ve submitted the v4.22 compatibility update to the Marketplace. It should be available for download within the next few days.
While there are no other major changes in this update, I did come across an issue with the footstep noise pulse emitters sometimes flickering between visible and hidden states. If anyone else is facing the same issue after converting your project to 4.22, just make sure to add the following Add Vector node in your BP_PulseFXGenerator blueprint.
This ensures that the emitter always sits slightly above the ground location.
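The fix amounts to adding a small constant Z offset to the spawn location. A minimal sketch of the idea in Python terms (the actual change is the Add Vector node in the Blueprint; the 2.0-unit offset here is just an assumed value):

```python
def pulse_spawn_location(ground_location, z_offset=2.0):
    """Return the emitter spawn point nudged slightly above the ground.

    Keeping the emitter a hair above the surface avoids the z-fighting
    that makes it flicker between visible and hidden states.
    """
    x, y, z = ground_location
    return (x, y, z + z_offset)
```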
Apart from that, a few behavior tree task blueprints were flagged during packaging for having names that were too long, despite the full paths being less than 260 characters. Renaming the classes fixed that one. And that’s about it. The toolkit should work fine in 4.22 once these issues are taken care of. If anyone’s having trouble with the process, you can reach out to me through this thread or the support email id.
The v4.23 update has been submitted to the Marketplace and should be available through the launcher this coming week. I have not come across any issues upon converting the project from 4.22 to 4.23. So it’s probably safe to do it in your own projects as well. But if anyone does come across some issue, just let me know.
Edit: The 4.23 version has gone live and is now available to download from the Marketplace.
I’ve started working on the next round of updates for Top Down Stealth Toolkit, starting with a new Mouse + Keyboard driven Movement/Aiming system for the player character. It’s still a work in progress, but I’ve shared a preview video on YouTube:
Also, just wanted to point out that the new system will not be replacing the default purely keyboard-driven movement; instead it will provide an alternate form of player control.
Hi, I’ve submitted the v3.3 update to the Marketplace. As shown in the previous video (Top Down Stealth Toolkit v3.3 WIP: Cursor-Driven Player Movement/Aiming Animations - YouTube), it adds a secondary player control system where the player character always looks at the mouse cursor while walking. I’ve added some basic interpolation to make sure that the character doesn’t immediately snap to the cursor location, but instead eases into the new rotation over time. Also, sprinting cancels the aiming, and the character will always face the direction of the sprint.
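The easing behavior can be sketched like this (a Python sketch of the per-tick logic, not the actual Blueprint graph; the function and parameter names are illustrative):

```python
def update_facing(current_yaw, cursor_yaw, is_sprinting, move_yaw,
                  interp_speed, delta_time):
    """Per-tick yaw update: ease toward the cursor, unless sprinting.

    Sprinting cancels aiming entirely and faces the character along the
    movement direction instead.
    """
    if is_sprinting:
        return move_yaw
    # Shortest signed angular difference, wrapped into [-180, 180).
    delta = (cursor_yaw - current_yaw + 180.0) % 360.0 - 180.0
    # Move a speed-scaled fraction of the way to the target each tick.
    alpha = min(1.0, interp_speed * delta_time)
    return current_yaw + delta * alpha
```

Higher `interp_speed` values make the character snap to the cursor more aggressively; a value around 1/delta_time effectively disables the easing.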
You can switch between the new control system and the old one by heading over to the “BP_GameMode_CoreGame” blueprint, and then setting the “Default Pawn Class” to BP_PlayerCharacter_Mouse+KeyboardControl or BP_PlayerCharacter_KeyboardControl.
Edit: The update has gone live on the Marketplace. You can identify all the new changes by searching for the Version3_3 Boolean variable in blueprints.
Hi @Norhaji , it’s not supported by default. On paper, it’s quite easy to make the player character invisible to the AI by not registering the player as a Target Stimulus. When you do that, the AI will not search for that Stimulus. For example, in the case of your player character, it can be achieved through the following logic:
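Something along these lines (a Python sketch of the idea only; the actual toolkit logic is a Blueprint graph, and the names here are illustrative):

```python
def register_target_stimulus(sensory_manager, actor, is_cloaked):
    """Register an actor as a Target Stimulus only while it is visible.

    If the actor is never registered, the AI's perception simply has
    nothing to search for, which makes the actor effectively invisible.
    """
    if is_cloaked:
        return
    sensory_manager["target_stimuli"].append(actor)
```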
However, doing so will break some other systems when the AI accidentally walks over to where the player is standing. With the visual perception unable to see the cloaked player, the motion perception (used when you enter the small circular ring around an AI guard) can tell that something is close by, but it cannot do anything further without actually seeing the player through visual perception. I’ll look into it and let you know what can be done about it.
Alright, so I was going over the implementation and was wondering what you had in mind regarding the design behind detection of invisible targets by enemy AI. Do you want them to ‘see’ the invisible player only when in close range (within the circle)? Or is it like once the motion perception detects the player within the circle, it can temporarily keep seeing even if the player goes outside range (basically the cloak cancelling out upon detection)?
Right now I need a stealth device, so disabling perception kinda works. As for modifying the system, I see it this way:
Let’s add an additional parameter to the visibility sense generator, e.g. 0-100%, and divide the line of sight into zones. At 100%, entering the line of sight equals instant detection. The lower the value, the shorter the instant-detection range becomes, with some chance to trigger an investigation when the target is outside the instant-detection zone.
If visibility were 0%, entering the motion perception zone should trigger an investigation, with the ability to detect and break stealth for X time after looking directly at the target within motion perception range.
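A rough sketch of that zone idea in Python terms (the names and the linear scaling are just assumptions for illustration, not anything from the toolkit):

```python
def classify_sighting(distance, vision_range, visibility_pct):
    """Zone-based response when a target enters the line of sight.

    visibility_pct (0-100) scales the instant-detection radius; between
    that radius and the full vision range the AI only investigates.
    """
    if distance > vision_range:
        return "unseen"
    instant_range = vision_range * (visibility_pct / 100.0)
    if distance <= instant_range:
        return "detected"
    return "investigate"
```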
Alright, got it. Dividing Line of Sight into zones is definitely something that I’m planning to look into for future updates, kind of along the lines of Shadow Tactics. But since it will involve making changes to the core design (from instant detection to a delayed detection), it’s going to be a while before the feature makes it into the toolkit.
As for the cloaking system, breaking stealth seems like a feasible idea. For example, if the motion perception detects something that is right in front of it, we can have a different response when compared to how it behaves when the stimulus is behind it. And since the AI can already interact with stimuli (like reactivating stunned guards), the same workflow could be used here to interact with the cloaked actor, thus causing it to break stealth. Once that happens, the Visual Perception can kick in and do its job as usual. I’ll try it out and let you know how it goes.
Hey @Norhaji , I found a much easier solution than the above approach to have the enemy AI detect cloaked targets in close range. No need to get the Motion Perception involved at all, but instead use a sort of multi-stage Visual Perception model. Will have to do a few more tests, but if everything checks out, I’ll share a tutorial showing how you can implement it in your game.
Open your Player Character blueprint and add a new Boolean variable **bCloaked** to it. This variable will help us determine at any point whether the player is cloaked.
Now have your Player Character blueprint implement the **BPI_Cloaking** interface and its interface functions as shown below:
Here, you can add additional logic to handle setting the cloaking material and so on.
Now we move on to the AI side of the implementation. We’ll start with adding a couple of new parameters to the struct Struct_VisualPerceptionData:
**bCanDetectCloakedObjects?** (Boolean)
**CloakDetectionRadius** (Float)
The **bCanDetectCloakedObjects?** parameter can be used to have only certain AI agents be capable of short-range cloak detection. For example, you can enable this feature for the Patrol Guards and turn it off for Cameras.
Head over to the **BPC_AIPerception** blueprint and open the **VisualPerceptionCheck** function. We’ll add the following logic as part of the Linear Distance check to ensure that the perception evaluates normal and cloaked targets against different vision radii.
Basically, normal targets will be perceived at the default Vision Range, whereas cloaked targets can be perceived only within the Cloak Detection Range.
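In rough Python terms, the modified Linear Distance check behaves like this (an illustrative sketch of the logic, not the Blueprint nodes themselves):

```python
def passes_distance_check(distance, vision_range, target_cloaked,
                          can_detect_cloaked, cloak_detection_radius):
    """Linear distance portion of the visual perception check.

    Cloaked targets are tested against the much smaller cloak detection
    radius (and only for agents with cloak detection enabled); everyone
    else is tested against the normal vision range.
    """
    if target_cloaked:
        return can_detect_cloaked and distance <= cloak_detection_radius
    return distance <= vision_range
```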
Now select the **AIPerception** component within your **BP_PatrolGuard_Parent** blueprint, set **bCanDetectCloakedObjects?** to True, and set **CloakDetectionRadius** to some value less than the Vision Radius.
Finally, we can make sure that when a cloaked target is detected, its cloak gets disarmed. Otherwise, as soon as the player gets out of the small cloak detection range, the AI will be unable to see the player. So fire up the **BP_AISensoryManager** blueprint, open the **EvaluateTargetsForAgent** function, and call the **OnDisarmCloak** interface function (created in step 1) before setting a new Active Target.
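The order of operations matters here: disarm first, then assign. A minimal sketch in Python terms (class and field names are illustrative stand-ins, not the toolkit's):

```python
class CloakedTarget:
    """Stand-in for an actor implementing the cloaking interface."""

    def __init__(self, cloaked=False):
        self.cloaked = cloaked

    def on_disarm_cloak(self):
        # Equivalent of the OnDisarmCloak interface call from step 1.
        self.cloaked = False


def set_active_target(agent, target):
    """Disarm the target's cloak before making it the active target,
    so the regular visual perception can keep tracking it afterwards."""
    if target.cloaked:
        target.on_disarm_cloak()
    agent["active_target"] = target
```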
I was thinking of buying this toolkit, but since I’m new to Unreal, I want to see if there are tutorials on how to use it. I see some tutorials linked on the first page to your site, but it seems the site doesn’t exist…
Hi Joao, thanks for bringing that to my attention. My blog domain recently changed from .in to .com, which broke all external references. All of the links have been corrected, and you should now be able to go through all of the articles. If you have any further doubts or run into trouble understanding a concept that isn’t covered there, you can always reach me here or through the support mail id.
There is absolutely nothing preventing us from switching the camera to a first-person or third-person perspective, correct?
I see you’ve spoken about keyboard and mouse; is there anything stopping us from mapping movement to a gamepad?
I’ve been wondering about the vision cones. I’d like to turn them off, but I’m also wondering whether they account for height. If a vision cone runs into a waist-high block with the character on the other side, does that stop the AI from seeing the player clearly standing there? If the AI were on a second story, would their vision cone (if large enough) be able to spot the player on the first level?