Issues with AIPerception sight and hearing

Hey community!

I’ve somewhat recently started learning development on Unreal Engine and it was going swell, until I came to this point in making an AI NPC.

(A picture of the code I’m talking about is attached; I could only post one since I’m a new user.)

So, briefly put, I’m working on a sight/hearing system for the AI, and for some reason it keeps triggering the hearing output just from seeing me. This causes the AI to go to the location of the last heard noise first, then to the location of the visual detection. I’ve done some debugging and found out that it stems from the “Switch on Int” (which parses which sense was last sensed).

I would like to know if there is something I’m missing about the AIPerception, since it seems pretty fiddly to separate which sense was actually sensed. I’m also fairly certain that the issue is not caused by the behaviour tree or anything else, since the “Senses” blueprint is the first (and only) place where it handles the AIPerception input.

I have no problem giving more information as needed. This issue has really brought my development to a halt, and I would greatly appreciate any help :slight_smile:

(In the picture “Switch on int” outputs: 0 = sight, 1 = hearing, 2 = damage (unused)):


I use PawnSensing instead; it is simpler, and it is better prepared to be used with Blueprints.

Although I’m not sure if it has improved since the last time I tried it.


Hi, what do you execute in the hearing branch after the switch? Because that will be executed even if it doesn’t hear the actor. If you only want to execute the hearing branch when it actually hears the actor, then you need to check whether SuccessfullySensed is true or false.


Hey :slight_smile: ,

I had the understanding that the “Switch on Int” reads the array element, which is the index of the corresponding sense in the AIPerception config, and executes the branch which correlates to that int (sight = 0, hearing = 1, etc.). Also, wouldn’t it be impossible to separate the sight and hearing branches using SuccessfullySensed, since it is true whenever any sense is successfully sensed? :thinking:

Here is what is in the “Sight” and “Hearing” branches:

Moving this here instead of editing my previous post:

This causes the AI to go to the location of the last heard noise first

Ok, I guess I didn’t read your question well enough =)

If it never heard your actor, then this location will be garbage. If it isn’t garbage, then that means it hears your actor (and that hearing stimulus isn’t yet forgotten) and SuccessfullySensed will be true. Further, you’re executing this logic every time any sense gets triggered (so even if it sees something new, or stops seeing something, you execute the hearing branch, and if there is still a not-yet-forgotten hearing stimulus, SuccessfullySensed will be true there). So what are you trying to do?

Also, wouldn’t it be impossible to separate the sight and hearing branch using SuccessfullySensed since it is true whenever any sense is successfully sensed?

No, it won’t be true for any sense. What you’re doing there is looping through the last stimuli of the three senses you have, and SuccessfullySensed is per sense, so it can be true for one sense and false for another.
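To put the per-sense idea in code (a minimal Python mock of the Blueprint logic, not actual Unreal API; the stimulus layout and names are invented for illustration):

```python
# Mock of per-sense stimulus records; conceptually like Unreal's
# per-sense FAIStimulus array, but these names are invented here.
SIGHT, HEARING, DAMAGE = 0, 1, 2

def should_run_hearing_branch(last_stimuli):
    """Run the hearing branch only if the hearing stimulus itself
    was successfully sensed -- SuccessfullySensed is per sense."""
    return last_stimuli[HEARING]["successfully_sensed"]

stimuli = [{"successfully_sensed": False} for _ in range(3)]
stimuli[SIGHT]["successfully_sensed"] = True   # the AI only *sees* the player

print(should_run_hearing_branch(stimuli))      # False: sight alone

stimuli[HEARING]["successfully_sensed"] = True # now a noise was heard too
print(should_run_hearing_branch(stimuli))      # True
```

The point is that each sense’s stimulus carries its own SuccessfullySensed flag, so sight being true says nothing about hearing.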


I would use something like this for filtering between senses:

@chrudimer is right: for AISenseSight you should check whether “SuccessfullySensed” is true, as the PerceptionComponent triggers for both.

Also, the solution you set up would be catastrophic performance-wise: every time the bot senses something, you loop not only through all perceived actors but also through all stimuli…


No, it will only loop through the actors whose perception has changed, which are exactly the actors it calls OnTargetPerceptionUpdated on. The difference is that OnTargetPerceptionUpdated gets called once for each of those actors, whereas OnPerceptionUpdated gets called only once, with all of them.

You’re right though that it would loop through all senses for this specific actor.
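The difference in how the two delegates fire can be mocked like this (a hedged Python sketch; the dispatcher and all names are invented for illustration, not Unreal API):

```python
def dispatch_perception_update(changed_actors,
                               on_perception_updated,
                               on_target_perception_updated):
    """Mock of one perception update: the batch delegate fires once
    with the whole list of changed actors, while the per-target
    delegate fires once per changed actor."""
    on_perception_updated(changed_actors)          # one call, all actors
    for actor in changed_actors:
        on_target_perception_updated(actor)        # one call per actor

batch_calls, per_actor_calls = [], []
dispatch_perception_update(
    ["PlayerA", "PlayerB"],
    lambda actors: batch_calls.append(actors),
    lambda actor: per_actor_calls.append(actor),
)
print(len(batch_calls))   # 1
print(per_actor_calls)    # ['PlayerA', 'PlayerB']
```

Either way, the same set of changed actors is visited; the choice is mainly about whether you want one callback per actor or one callback with the whole batch.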


Hey silik1 :slight_smile: ,

Is there a way to get the information from the sense (age, stimulus location, sensed location, etc.) when doing it the way you have it? I would love to use something that simple, but I wasn’t sure if it was possible to get the additional information that way.



So what are you trying to do?

I’m trying to make the AI prioritize sight over sound. So, for example, if the NPC hears you, it will go investigate the noise, but if it sees you at any point during that, it will go investigate the sighting instead.
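That priority rule can be sketched as follows (a Python mock with invented names; a real version would read the per-sense stimuli from the perception component):

```python
def pick_investigate_location(sight, hearing):
    """Prioritize sight over hearing.

    `sight` and `hearing` are mock stimulus dicts with
    'successfully_sensed' and 'location' keys (invented names,
    not Unreal API).  Returns a location to investigate, or None."""
    if sight["successfully_sensed"]:
        return sight["location"]       # sight always wins
    if hearing["successfully_sensed"]:
        return hearing["location"]     # fall back to last heard noise
    return None

sight = {"successfully_sensed": False, "location": (10, 0)}
hearing = {"successfully_sensed": True, "location": (0, 5)}
print(pick_investigate_location(sight, hearing))  # (0, 5): only heard

sight["successfully_sensed"] = True
print(pick_investigate_location(sight, hearing))  # (10, 0): sight wins
```

In Blueprint terms this is just a branch ordering: check the sight stimulus first, and only fall through to the hearing location when sight is not currently successful.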

Hmm, alright. Thanks :slight_smile: To be honest, it seems deceptively simple haha. Is this a proven way that is robust and doesn’t have any major downsides?

:slight_smile: No major, no minor. No downsides at all :smiley:


Cool! :slight_smile: Would it work with multiple player characters as well?

Yeah, or with any other Pawns as well.


Okay, thanks guys/gals :smiley:

If you or anyone else reading this post has some other ideas, I’d still be happy to hear them tho :slight_smile: Nice to get some direction from more experienced people. I’ll do my best implementing your ideas.

Hey @silik1,

I’m pretty sure it fixed the pesky main problem, so thanks a lot for that! :smiley: Although, now there is the problem of the NPC’s sight not updating the location of the sensed player quickly enough. It only updates the sight information when you enter or exit the cone of vision. What would be the best way to update the visual location constantly when the player is inside the cone of vision? :thinking:

I tried a bunch of things, but I am unsure if you can update the sighted location quickly enough when using “On Target Perception Updated” instead of “On Perception Updated”.

So basically, NPC_Target_Location should be updated constantly when NPC_Player_Visual is TRUE… Here is the updated layout:


This triggers when something enters or exits line of sight.
It’s up to you to set up a boolean and logic accordingly.

Also, you’ve got some methods in the Perception module.

GetCurrentlyPerceived could be of use, but don’t be shy to experiment.
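Following the GetCurrentlyPerceived suggestion, one option is to poll every tick instead of waiting for the enter/exit event, so the stored location keeps following a target that moves inside the vision cone. A Python mock of that loop (all classes and names are invented for illustration; `currently_perceived` just stands in for what GetCurrentlyPerceived would return):

```python
class MockPawn:
    """Stand-in for a sensed pawn; only tracks a position."""
    def __init__(self, location):
        self.location = location

class MockPerception:
    """Mock perception component holding the currently perceived actors."""
    def __init__(self):
        self.currently_perceived = []

def tick(perception, state):
    """Each tick: if any actor is currently perceived, refresh the
    stored target location from its live position."""
    if perception.currently_perceived:
        state["target_location"] = perception.currently_perceived[0].location

player = MockPawn((0, 0))
perception = MockPerception()
perception.currently_perceived.append(player)
state = {"target_location": None}

tick(perception, state)
print(state["target_location"])  # (0, 0)

player.location = (30, 10)       # player moves inside the vision cone
tick(perception, state)
print(state["target_location"])  # (30, 10): refreshed, not stale
```

The trade-off is that polling runs every tick even when nothing changed, so gate it behind your “currently seeing the player” boolean.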


Instead of storing the vector NPC_Target_Location, you could store a reference to the sensed actor, and then in your BehaviorTree you can use this stored actor in the MoveTo task. It will evaluate its position automatically.
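The snapshot-vs-reference distinction behind this suggestion can be shown with a tiny Python mock (invented class, not Unreal API): a stored vector is a snapshot frozen at sense time, while a stored actor reference always yields the actor’s current position, which is why MoveTo on a stored actor tracks the target on its own.

```python
class MockActor:
    """Stand-in for a sensed pawn; only tracks a position."""
    def __init__(self, location):
        self.location = location

player = MockActor((0, 0))

target_location = player.location  # snapshot: a plain vector copy
target_actor = player              # live reference to the actor

player.location = (50, 20)         # the player moves

print(target_location)        # (0, 0): stale snapshot
print(target_actor.location)  # (50, 20): follows the actor
```
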