Limit AI Pawn Sensing when Player is under Shadows?

I started a new thread since it’s a different topic from what I asked earlier :slight_smile: I implemented an AI that chases the player on sight using AI Pawn Sensing, following this tutorial: LINK. My question is: how can I limit this Pawn Sensing to a minimum when the player is hidden in the level’s shadows? Shadows cast by walls and the like, so that enemies don’t notice the player creeping past them through the dark :slight_smile: Just a tiny heads-up would do, I can do the rest!

Are the shadows dynamic, i.e. do light sources change at run time? If they’re mostly static, I would just use trigger volumes placed at appropriate places in the level.

I can’t really think of a way to detect whether the player is in shadow as such, and even if you could, I imagine it would be prone to a lot of weird bugs: situations where the player should be clearly visible as a silhouette, but they’re under the long shadow of a statue or something, so the game treats them as less visible.

In the end I feel you would spend so much work refining the shadow detection itself that it would be easier to just define zones of “low visibility” where the player should not be visible unless the pawn is close by.

That’s a great idea! I added some Trigger Volumes, but I’m facing a minor difficulty in referencing my enemy’s AI Pawn Sensing. Like you said, I have to reduce the Pawn Sensing View Radius to a minimum so that my enemy won’t see the player while he’s hidden in the shadows/shadow triggers. Here’s where I’m stuck:

I’m trying to change the AI Pawn Sensing attributes, such as the Sight Radius, when the actor begins overlapping the Trigger Volume (shadow). How do I change these values? :slight_smile:

This might sound crazy, but I would try using traces to see if the player is in a shadow:

- Trace a large distance in the opposite direction of the dominant directional light.
- Get all point lights in a sphere, then trace from each light’s world location to the player.
- Get all spotlights in a sphere and do some math to see if the player is within the cone.

If you want to be really accurate, you could trace from multiple parts of your character’s body to see if the player is only half in the light.

I heard about this from Hourences in a video about his game Solus, so you could ask him, although I’m not sure if he did it for any light other than the dominant directional. I’m also not sure how performance-heavy this would be.

Edit:

I have the basic version:
GoogleDriveLink
Download both of those, then put them in your project’s Content folder.
Open the script holder (you can’t compile this, don’t worry), then copy the collapsed nodes, put them in the specified places, and create the variable in the player character (instructions are in the script holder).
Then, in the player character, create a sphere component, add the overlap events, and replace the ones that are already there.

That should be it. Tell me if anything comes up.

Also, this isn’t the full thing: it only handles point lights, and it doesn’t filter them based on their attenuation radius (because I can’t get that value for some reason).

Here’s the blueprint

HDR lighting and eye adaptation already do some logic to adapt the player’s vision to certain light levels; maybe it’s possible to query the same data from Blueprint logic, if it isn’t done completely in post-process. In Far Cry they simply query how much of the player’s screen is covered, because then the player would assume they are in cover, but that only works in first person. There should be some generic solutions to this that aren’t UE-specific and that you should be able to adapt.

http://www.gamasutra.com/view/feature/2888/building_an_ai_sensory_system_.php?print=1

Maybe that?