AI Sense Hearing Question


I know that the new perception system isn’t quite ready, but I’ve been playing around with it and reading through the source code.

I’m just wondering how exactly the hearing sense is implemented. From my testing, it seems a hearing event is registered by calling PerceptionSystem::MakeNoiseImpl(), and a perception event is registered with listeners no matter how far away the source is.

How does the sensing system decide whether a noise is heard by a listener? Does it take the level geometry into account at all? What source files should I be looking at to work this kind of thing out?

What I would like to do is implement some sort of sound-sensing system that works like the navigation system: you set up volumes that affect how far sound can travel through them, then use pathfinding to decide whether a sound is heard or not. Is anything like this planned for the system, and if not, will the perception system be extensible enough that I could extend it with something like this?
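To make the idea concrete, here is a rough standalone sketch of what I have in mind (all names are mine, nothing here is engine code): treat the attenuation volumes as a graph whose edge weights are the dB lost crossing each link, run Dijkstra from the noise source, and say a listener hears the noise if the cheapest path loses no more than the noise’s loudness budget.

```cpp
#include <cassert>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Nodes are volumes/portals; an edge's weight is the dB lost crossing it.
struct Edge { int to; float lossDb; };

// Dijkstra: cheapest accumulated attenuation from the noise source to every node.
std::vector<float> CheapestLoss(const std::vector<std::vector<Edge>>& graph, int source) {
    const float INF = std::numeric_limits<float>::infinity();
    std::vector<float> loss(graph.size(), INF);
    using Item = std::pair<float, int>; // (accumulated loss, node)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> open;
    loss[source] = 0.0f;
    open.push({0.0f, source});
    while (!open.empty()) {
        auto [d, n] = open.top(); open.pop();
        if (d > loss[n]) continue; // stale queue entry
        for (const Edge& e : graph[n]) {
            const float nd = d + e.lossDb;
            if (nd < loss[e.to]) { loss[e.to] = nd; open.push({nd, e.to}); }
        }
    }
    return loss;
}

// A listener hears the noise if the cheapest path stays within the loudness budget.
bool IsHeard(const std::vector<std::vector<Edge>>& graph, int source, int listener, float loudnessDb) {
    return CheapestLoss(graph, source)[listener] <= loudnessDb;
}
```

The point of phrasing it as a shortest-path problem is that a thick wall (big edge weight) can block a nearby listener while the same sound still reaches a farther listener through an open doorway.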

The core of AI hearing-sense processing is done in UAISense_Hearing::Update; that is where the distance and other conditions are checked.
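In spirit, the per-event check amounts to something like the following standalone sketch (these are illustrative names, not the engine’s actual code; the real fields live on the sense’s digested properties and the noise event):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

inline float DistSq(const Vec3& a, const Vec3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Hypothetical helper: the stimulus is delivered only if the listener sits
// inside the noise's effective radius. Loudness scales the listener's
// hearing range, and no level-geometry query is involved.
bool ShouldHear(const Vec3& listener, const Vec3& noise, float hearingRange, float noiseLoudness) {
    const float effectiveRange = hearingRange * noiseLoudness;
    return DistSq(listener, noise) <= effectiveRange * effectiveRange;
}
```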

The default implementation of AI hearing doesn’t care about level geometry, but if you implemented hearing that did, it would be very easy to plug in, in the form of a sense class.
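The plug-in shape that suggests can be sketched like this (a minimal standalone illustration, not the engine’s actual class hierarchy): the sense class owns the “is this stimulus delivered?” decision, so a geometry-aware variant only has to override that one hook.

```cpp
#include <cassert>

struct NoiseEvent { float x, y, loudness; };
struct Listener  { float x, y, hearingRange; };

class HearingSense {
public:
    virtual ~HearingSense() = default;
    // Default behaviour: pure range test, geometry ignored.
    virtual bool IsHeard(const Listener& l, const NoiseEvent& e) const {
        const float dx = l.x - e.x, dy = l.y - e.y;
        const float range = l.hearingRange * e.loudness;
        return dx * dx + dy * dy <= range * range;
    }
};

class OccludedHearingSense : public HearingSense {
public:
    explicit OccludedHearingSense(float wallLossFactor) : WallLoss(wallLossFactor) {}
    bool IsHeard(const Listener& l, const NoiseEvent& e) const override {
        // Stand-in for a geometry query: dampen the event as if a wall were
        // in the way. A real version would trace the level or path-find
        // through attenuation volumes before falling back to the range test.
        NoiseEvent dampened = e;
        dampened.loudness *= WallLoss;
        return HearingSense::IsHeard(l, dampened);
    }
private:
    float WallLoss;
};
```

The design point is that listeners and the event pipeline never change; only the sense subclass decides what “heard” means.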