Leveraging Audio Occlusion and Attenuation in AI Hearing

As per the title: Unreal already computes audio occlusion/attenuation, and I'm wondering if anyone has ever used those results to influence AI Perception hearing.

I couldn't find any mention of this anywhere. Is there something I'm overlooking that would make this unfeasible or inefficient?

Would love to hear opinions and especially direct experiences with it, if possible!