Render Target / Texture issue


I’m trying to implement a render target / texture in a VR project: mapping a camera pointed at another part of the level onto a texture.

The issue I have is that when running normally (i.e. no HMD) the texture looks fine on a monitor and updates great.

When you look at it in an HMD, each eye seems to see a different image, almost like it’s trying to render stereoscopically, but the offset is way off. It’s so bad you can’t look at the object/texture without it almost hurting your eyes. If you close either eye, you can clearly see a correct image on the texture; it’s just that the two images don’t align properly in stereo.

Does anybody know if there is an easy way to fix / resolve this issue?

Ideally it would be nice if it rendered correctly in stereo, so that it actually had depth, but if it had to be completely flat, that wouldn’t be the end of the world for me.

Many thanks!

Without seeing your setup I can only speculate as to what is going on, but my best guess is that the Render Target you are displaying is not being transformed into stereo correctly. You might be able to fix this by splitting the Render Target output into a left and a right eye using the screen-space Material Expression node. Check out the following How-To for more information on how to do this.

However, when you set this up, you can omit the LongLatToUV conversion part and instead just plug your Render Target into the Left and Right eye Texture Samplers. If that does not work, try using one Sampler and plugging the Render Target into both.
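In case it helps to see the idea outside the material graph, here is a rough C++ sketch of the logic the left/right split performs. This is illustration only, not Unreal API: the assumption is a side-by-side stereo framebuffer where normalized screen X in [0, 0.5) is the left eye and [0.5, 1] is the right eye, and the texture-handle struct is hypothetical.

```cpp
#include <cassert>

// 0 = left eye, 1 = right eye.
enum Eye { Left = 0, Right = 1 };

// With side-by-side (instanced) stereo, both eyes share one framebuffer,
// so a material can tell the eyes apart from the normalized screen-space
// X coordinate. (Assumption: left eye occupies [0, 0.5), right [0.5, 1].)
Eye eye_from_screen_x(float screen_x) {
    return screen_x < 0.5f ? Left : Right;
}

// Hypothetical texture handles, for illustration only.
struct StereoSamplers {
    int left_texture_id;
    int right_texture_id;
};

// The material samples whichever texture is bound to the current eye.
// If you only have a single (mono) Render Target, bind it to both
// samplers: each eye then sees the identical image, so the surface reads
// as a flat picture instead of two mismatched views.
int pick_sampler(const StereoSamplers& s, float screen_x) {
    return eye_from_screen_x(screen_x) == Left ? s.left_texture_id
                                               : s.right_texture_id;
}
```

The mono fallback at the end is the "one Sampler into both eyes" suggestion above: you lose depth on the captured scene, but the two eyes stop fighting each other.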

Thank you very much for your detailed response. This was a huge help and ultimately got me the solution!


Sorry for the bump on this, but I’m having the same issue even after following the tutorial Sam linked (I have the left and right eye Texture Samplers both using the same Scene Capture source texture, and the displayed texture won’t converge when viewed in the HMD). UKdude, can you share what your solution ultimately was?