AI Perception and Sound

Trying to understand the logic behind the hearing perception system. As I understand it, the system does not trigger on "sound" but rather on a Report Noise Event. Does anybody have more details on this? Am I understanding it correctly? Are you supposed to add a Report Noise Event node next to each sound node that you want to trigger hearing? I figured sound and attenuation would trigger hearing by themselves?



Yes. But the actor needs to be registered as a stimuli source first. Look at the PawnSensing documentation; it is very similar.
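As a rough sketch of what that looks like in C++ (class and property names like `AMyCharacter` and `StimuliSource` are placeholders; the `UAIPerceptionStimuliSourceComponent` and `UAISense_Hearing::ReportNoiseEvent` calls are standard UE API, but check them against your engine version):

```cpp
// Sketch: register an actor as a hearing stimuli source,
// then report a noise event that AI hearing can perceive.
#include "Perception/AIPerceptionStimuliSourceComponent.h"
#include "Perception/AISense_Hearing.h"

AMyCharacter::AMyCharacter()
{
    // Register this actor as a source of hearing stimuli.
    StimuliSource = CreateDefaultSubobject<UAIPerceptionStimuliSourceComponent>(TEXT("StimuliSource"));
    StimuliSource->RegisterForSense(UAISense_Hearing::StaticClass());
    StimuliSource->RegisterWithPerceptionSystem();
}

void AMyCharacter::MakeFootstepNoise()
{
    // This is what the Blueprint "Report Noise Event" node calls.
    // AI hearing reacts to this event, not to the audio playback itself.
    UAISense_Hearing::ReportNoiseEvent(
        GetWorld(),
        GetActorLocation(),
        1.0f,              // Loudness
        this,              // Instigator
        0.0f,              // MaxRange: 0 = use the listener's hearing range
        TEXT("Footstep")); // Optional tag
}
```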

Thanks. So there really are two separate systems, then? Wouldn't it make more sense for sound to just be sound, rather than having an additional function that is "noise"? Or is there an option in PawnSensing that allows sounds to be registered?


There are indeed two systems, PawnSensing and AIPerception. But once AIPerception is "finished" (it already works, it is just being made more user friendly), PawnSensing will be gone.
For PawnSensing, the component that allows sound to be registered is PawnNoiseEmitter.
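For reference, a minimal sketch of the legacy PawnSensing route (placeholder class names; `MakeNoise` routes through the `UPawnNoiseEmitterComponent` on the noise maker to any `UPawnSensingComponent` on an AI that is listening):

```cpp
// Legacy route: PawnNoiseEmitter on the noise-making actor,
// PawnSensing on the AI. AMyCharacter/NoiseEmitter are illustrative names.
#include "Components/PawnNoiseEmitterComponent.h"

AMyCharacter::AMyCharacter()
{
    NoiseEmitter = CreateDefaultSubobject<UPawnNoiseEmitterComponent>(TEXT("NoiseEmitter"));
}

void AMyCharacter::MakeFootstepNoise()
{
    // AActor::MakeNoise forwards to the PawnNoiseEmitterComponent,
    // which fires PawnSensingComponent's OnHearNoise delegate on nearby AI.
    MakeNoise(1.0f, this, GetActorLocation());
}
```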

If I misunderstood and you meant two systems as in "PlaySound" and "MakeNoise": that separation does make sense, because you don't want your AI to react to every sound effect you play, so you need to tell it which sounds to react to.
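In practice that means pairing the audible sound with an explicit noise report whenever the AI should hear it. A hedged sketch (`FootstepSound` is a placeholder `USoundBase*` property; the `UGameplayStatics` and `UAISense_Hearing` calls are standard UE API):

```cpp
// Sketch: play the audible effect AND report a matching noise event,
// so both the player and the AI "hear" it. Sounds you play without a
// report are ignored by AI hearing.
#include "Kismet/GameplayStatics.h"
#include "Perception/AISense_Hearing.h"

void AMyCharacter::PlayFootstep()
{
    // What the player hears:
    UGameplayStatics::PlaySoundAtLocation(this, FootstepSound, GetActorLocation());

    // What the AI hears -- only explicitly reported noises trigger hearing:
    UAISense_Hearing::ReportNoiseEvent(GetWorld(), GetActorLocation(), 1.0f, this);
}
```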