Is GI in VR possible yet?

I’ve been fighting with Unreal for about two weeks now, trying to get some sort of real-time GI going in VR, but everything I try keeps coming up short.

My hardware setup is a desktop with a GTX 1080 Ti and an MSI laptop with an RTX 3060. For a headset, I’m using a wired Quest 2.

Someone on Discord suggested I try RTXGI in 4.27, as it “supports” VR.

Outside of VR, it got even nicer results than Lumen, with none of the night-time SSGI problems (I highly recommend trying it if you haven’t already). And it performs moderately well on my 1080 Ti and super nicely on my 3060 (although the strip of metal above the keyboard on the MSI gets hotter than the surface of the sun).

But when I try it in VR, the quality goes completely to hell. None of the nice shadows under tables and other furniture show up at all, and the head movement when you look around is… jiggly? Makes you feel like a bobble head.

I suspect that the nice imagery in the viewport is a combination of lighting from RTXGI and shadowing from path tracing (the shadows and the reflections have that telltale sparkly look), and that when I launch in VR the RTXGI is working but the path tracing is not. Is that correct?

Has anyone had any luck getting better results than this with RTXGI in VR?
Can anyone offer any tips or suggestions for getting the VR view to look more like the viewport view?


Good question. I haven’t dived into the VR world with UE, so I can’t answer it. But I do have a question for you: how do you find development on the Quest 2 using UE? Not hard?

Ok, first up, apologies for not answering, syrom; it’s a pretty broad question…

Now, re GI in VR.

Still battling away. My current thinking is that RTXGI is in fact working in VR; what’s not working is ray traced shadows.

i.e. here is what my map looks like in the editor:

and if I turn off casting static and dynamic shadows, that’s pretty much what it looks like in VR for me:
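In case it helps anyone reproduce the comparison, here’s roughly what I’m toggling. This is just a sketch: it assumes UE 4.27 with hardware ray tracing enabled in DefaultEngine.ini and uses the stock ray tracing console variables (the RTXGI plugin has its own settings on top of this, and each light’s “Cast Ray Traced Shadows” flag matters too):

```ini
; DefaultEngine.ini sketch (UE 4.27); verify against your own project setup
[/Script/Engine.RendererSettings]
r.RayTracing=True
r.SkinCache.CompileShaders=True

; Runtime console commands for A/B testing in the headset:
;   r.RayTracing.Shadows 0   ; force ray traced shadows off (roughly what my VR view looks like)
;   r.RayTracing.Shadows 1   ; force them back on
;   stat gpu                 ; check whether the RT shadow passes actually run in VR
```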

I have heard lots of people talking about how we can’t do raytracing in VR, but that’s what doesn’t make sense to me. To explain why, I made this short video:

Here you can see two cameras in slightly different positions having no trouble at all rendering with ray traced shadows. I don’t understand why we can’t just do this and send one camera to each eye’s display in the VR headset?


P.S. I upgraded to a 3070 Ti

Hmm… interesting. Notice how, without raytracing, the screenshot looks like typical VR from what I’ve seen so far. Hopefully down the road it gets implemented properly. And yeah, I see your point with the two-camera deal. And congrats on the 3070 Ti! I’m still on a 2070 and waiting for an affordable 24 GB VRAM card to be available.