What is the easiest way to get 3d spatialization for VR?

Hi All,

I'm just starting to look at how to add sound to my VR game and was wondering what the best way to go about it is.
Are things like true 3d spatialization part of the engine by default these days?
Or would I have to use something like FMOD or Wwise? If so, which one is easier for a beginner to pick up? Or which one would you recommend?


Hi Fredrum!

Welcome to the world of Game Audio! We have definitely had spatialization in game audio for a long time! The default panner maps the 3D position of a spatialized sound source to an angle across your speaker map, then outputs that audio to the correct speaker(s). We use distance attenuation to model the loss of power as a sound dissipates into the air, which can mean a drop in volume or a change in frequency response (through a filter).
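To make the distance-attenuation idea concrete, here is a minimal standalone sketch of a linear falloff curve (an illustration of the general technique, not the engine's actual code): full volume inside an inner radius, silent beyond the falloff distance, and a linear ramp in between.

```cpp
#include <algorithm>
#include <cmath>

// Illustrative linear distance attenuation: returns a gain in [0, 1].
// InnerRadius and FalloffDistance are hypothetical parameter names,
// chosen to mirror the shape of a typical attenuation curve.
float LinearAttenuation(float Distance, float InnerRadius, float FalloffDistance)
{
    if (Distance <= InnerRadius)
        return 1.0f; // full volume inside the inner radius
    if (Distance >= InnerRadius + FalloffDistance)
        return 0.0f; // silent past the end of the falloff
    // Linear ramp from 1 down to 0 across the falloff region.
    return 1.0f - (Distance - InnerRadius) / FalloffDistance;
}
```

A source halfway through the falloff region comes out at half gain; engines typically offer other curve shapes (logarithmic, inverse, custom) on the same principle.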

On a platform where you know the user will be using headphones (like VR), you also have the option of utilizing an HRTF renderer which simulates the transit of sound across a human head (delaying sounds based on the speed of sound through air and filtering sounds based on the shadowing caused by the head itself).
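The interaural delay part of that can be sketched with Woodworth's classic spherical-head approximation of the interaural time difference (ITD), the extra time sound takes to reach the far ear. This is a textbook formula, not the implementation of any particular HRTF renderer, and the head radius and speed of sound are illustrative constants.

```cpp
#include <cmath>

// Woodworth spherical-head ITD approximation.
// ThetaRadians is the source azimuth: 0 = straight ahead,
// pi/2 = directly to one side.
double InterauralTimeDifference(double ThetaRadians)
{
    const double HeadRadiusM  = 0.0875; // approx. average adult head radius (m)
    const double SpeedOfSound = 343.0;  // m/s in air at ~20 °C
    return (HeadRadiusM / SpeedOfSound) * (ThetaRadians + std::sin(ThetaRadians));
}
```

A source directly to one side yields an ITD of roughly 0.65 ms, which is the kind of cue an HRTF renderer reproduces alongside the head-shadow filtering.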

We support third-party HRTF renderers and ship with some options already. Both the Oculus Audio and Steam Audio plugins provide HRTF renderers you can check out, and these can be enabled like any other plugin in UE4.

There are a variety of other tricks and implementation techniques one might employ to create a well-resolved 3D audio experience, so I encourage you to be creative and experiment.

Never discount the power of a well-resolved stereo asset played back non-spatialized, either; sometimes those can be just as useful in crafting your 3D audio experience as any other technique.

Hi Dan and thanks for the response!
I was on holiday last week (ahhh), so I only just saw that I got a reply.

Great, thanks for the pointers! I have a few follow-up questions for when you have time.

I did start on some first 3D sound setups and I think I have it roughly working. But one thing I can't say for sure yet is whether the current 3D positioning conveys how high or low a sound source is.
I can tell the 360° angular positioning around me works well, but I can't tell if the up/down does.

Interesting that you say I can use stereo recordings. I was under the impression that 3D sounds should be mono, with the stereo image created by the engine.
Do you have any further advice on that?

I am aiming to eventually go multi-platform (Oculus + Steam), so I'm not sure how to approach that.
Would anyone have any suggestions on that front?


The engine supports playing back stereo assets as spatialized and non-spatialized.

2D sounds are non-spatialized, but 3D sounds can be non-spatialized as well.

A stereo spatialized sound generates dual mono sources perpendicular to your Listener Orientation, spread apart from one another based on the Stereo Spread value in your Sound Attenuation settings.
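That placement can be sketched in 2D as follows. This is an assumed geometric interpretation of the behavior described above, not the engine's actual code; the names `StereoSpread` and `PlaceStereoEmitters` are illustrative.

```cpp
#include <cmath>

struct Vec2 { float X, Y; };

// Place the left/right mono emitters of a stereo spatialized sound:
// both sit on a line through the source position, perpendicular to the
// (normalized) listener-to-source direction, separated by StereoSpread.
void PlaceStereoEmitters(Vec2 Source, Vec2 ToSourceDir, float StereoSpread,
                         Vec2& OutLeft, Vec2& OutRight)
{
    // 2D perpendicular of the listener-to-source direction.
    Vec2 Perp{ -ToSourceDir.Y, ToSourceDir.X };
    const float Half = StereoSpread * 0.5f;
    OutLeft  = { Source.X - Perp.X * Half, Source.Y - Perp.Y * Half };
    OutRight = { Source.X + Perp.X * Half, Source.Y + Perp.Y * Half };
}
```

With a source straight ahead and a spread of 200 units, the two emitters land 100 units to either side of the source, which is why a larger Stereo Spread produces a wider perceived image.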

How high or low a sound is (its elevation) is difficult to evaluate in real life as well.

I recommend this video to understand the challenges involved in sound spatialization as well as the implications of utilizing HRTF processing:
And more information on HRTF:

Thanks a lot for the tips!
I’ll look into the Oculus and Steam HRTF solutions.

Cheers, Fred

Old post, but interesting.
It's really hard to generate fully spatialized sound in general: imitating sounds heard in separate rooms, on separate floors of a building, planes overhead, noises in trees, etc. Maybe with these new generations of sound cards programmers can leverage ray tracing to do it, but as for hearing direction instinctively in a headset, it's almost impossible without multiple drivers imitating the different directions sounds come from.

I've been getting some great results using the Resonance Audio plugin for what you're describing.