With audio being so important to a good VR experience, I figured there should be a discussion on this very topic. I’ve done some research on the subject; however, I am by no means an audio engineer or a physicist. Here are the phenomena I’ve gathered that would need to be processed within the engine:
- Reverb/Echo: Reflections of the sound wave off surfaces
- Diffraction: How the sound bends around obstacles and spreads through openings in the geometry
- Transmission/occlusion: “Dimming” (attenuating) the sound as it passes through solid geometry
Now, these three phenomena are highly dependent on the material, so PBR materials will definitely play a part in this. What I propose is to include the necessary acoustic variables/properties on the material, so that the audio processing engine can use them to calculate the phenomena above (see the sketch below). A good website that lists a whole lot of these known properties is located here. This list was compiled by the Onda corporation, so props to them. They have listings for solids, liquids, gases, rubbers, and plastics, to name a few.
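To make that concrete, here’s a rough sketch of what those per-material acoustic properties could look like. Nothing here is an existing engine API; the field names are hypothetical and the values are ballpark placeholders, not measured data (the real numbers would come from tables like the one linked above).

```cpp
// Hypothetical per-material acoustic properties, read by the audio engine
// alongside the usual PBR channels. Names and values are placeholders.
struct AcousticMaterial {
    float absorption;      // 0..1, fraction of incident energy absorbed per reflection
    float scattering;      // 0..1, how diffusely the surface reflects the wavefront
    float transmission_db; // attenuation (dB) applied to sound passing through the solid
    float density_kg_m3;   // bulk density, useful when estimating transmission loss
    float speed_of_sound;  // m/s inside the material
};

// Ballpark placeholder values for two common surfaces (illustrative only;
// real data would come from a measured table such as the one linked above).
constexpr AcousticMaterial concrete { 0.02f, 0.6f, 45.0f, 2400.0f, 3500.0f };
constexpr AcousticMaterial glass    { 0.03f, 0.1f, 30.0f, 2500.0f, 5600.0f };
```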
This entire thing should look pretty familiar: it’s essentially what GI does. So along with baking lightmaps, we should now bake, umm, audiomaps (?).
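To illustrate the analogy, here’s a minimal sketch of baking one such audio probe: cast rays from the probe into the static geometry, average the path length and absorption, and store an estimated reverb time. The `cast_ray_stub` function and the probe fields are hypothetical stand-ins (not any engine’s actual API), and the Sabine-style estimate is just one simple way to condense the data.

```cpp
#include <algorithm>

// Result of one ray cast against the static scene; a real bake would return
// the hit distance plus the surface's acoustic absorption (see the
// AcousticMaterial sketch above).
struct Hit { bool valid; float distance; float absorption; };

// Placeholder for the engine's ray cast; returns a fixed hit for illustration.
Hit cast_ray_stub() { return { true, 8.0f, 0.15f }; }

struct AudioProbe {
    float mean_free_path = 0.0f;  // average distance between reflections (m)
    float mean_absorption = 0.0f; // average absorption per bounce (0..1)
    float rt60 = 0.0f;            // estimated reverb time (s)
};

void bake_probe(AudioProbe& probe, int ray_count) {
    float path_sum = 0.0f, absorb_sum = 0.0f;
    int hits = 0;
    for (int i = 0; i < ray_count; ++i) {
        Hit h = cast_ray_stub();          // a real bake would trace a random direction here
        if (!h.valid) continue;
        path_sum   += h.distance;
        absorb_sum += h.absorption;
        ++hits;
    }
    if (hits == 0) return;
    probe.mean_free_path  = path_sum / hits;
    probe.mean_absorption = absorb_sum / hits;
    // Sabine's formula: RT60 = 0.161 * V / (S * a); with mean free path = 4V/S
    // this becomes 0.161 * mfp / (4 * a).
    probe.rt60 = 0.161f * probe.mean_free_path /
                 (4.0f * std::max(probe.mean_absorption, 0.01f));
}
```

At runtime the mixer would then blend between the nearest probes, the same way baked light probes get interpolated.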
There will be some cases for different audio projectors as well. As with lights, we would need both an omni sound source and a directional sound source. The directional source still transmits sound in all directions; however, it would have an… FOV? setting, which by default should be ~180 degrees, with a falloff over another 180 degrees (see the sketch after this list). Use case scenarios for each type of sound transmitter:
Omni-directional sound: An explosion
Directional sound: Speech
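For the directional source, the falloff could work much like a spotlight cone: full gain inside the “FOV”, fading toward the rear over the falloff angle. Below is a small sketch of that idea using the 180/180-degree defaults mentioned above; `directional_gain` and the rear-gain floor are assumptions for illustration, not an existing API.

```cpp
// angle_deg: angle between the emitter's forward direction and the direction
// to the listener, in degrees (0 = directly in front, 180 = directly behind).
float directional_gain(float angle_deg,
                       float fov_deg     = 180.0f,  // full-gain cone, per the defaults above
                       float falloff_deg = 180.0f,  // extra angle over which the gain fades
                       float rear_gain   = 0.25f) { // arbitrary floor so speech stays audible from behind
    const float inner = fov_deg * 0.5f;             // half-angle of the full-gain cone
    const float outer = inner + falloff_deg * 0.5f; // half-angle where the fade ends
    if (angle_deg <= inner) return 1.0f;            // inside the cone: unattenuated
    if (angle_deg >= outer) return rear_gain;       // fully behind the falloff band
    const float t = (angle_deg - inner) / (outer - inner);
    return 1.0f + (rear_gain - 1.0f) * t;           // linear blend across the falloff band
}
```

The listener-side code would multiply this gain into the usual distance attenuation before mixing; an omni source would simply skip the cone test.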
So, I guess I’ll end it here and open the floor to you guys. What are your thoughts/ideas? How can we implement this? What is the overhead for something of this caliber? I know that audio is usually processed on the CPU, and from what I’ve seen in today’s games the CPU usually has headroom to spare, so we should be good on that front.