VR (virtual reality) in UE5 with Nanite and Lumen?

Hi, I’m new here, so this question might be completely off.
But I was planning on buying a new computer and a VR headset, until I heard that Nanite and Lumen are not supported in UE5’s VR mode?

Why doesn’t this work? When you can have a lot of reflective mirrors in the world, each rendering the scene from another viewpoint, isn’t that basically the same thing?

Do you really have to generate 2 different “worlds” when using two displays (one per eye)?

I added a picture of how I see it.
The red dot symbolizes the rendering spot (I don’t know the correct name for this),
and the blue cameras are the captured and displayed images.

1: How I suppose it’s normally thought of.
2: How I imagine it could render/work.
3: How I see it working with eye tracking.

Is this completely wrong? Or where is the problem in making this work?

And as a side note, can’t UE make it work even if it takes a lot of computing power?
I guess it’s only the closest Nanite geometry that would have to render twice, even with method “1”.

Or, make a “fake” VR experience with the same image in both eyes, or would this look too weird? You would still get the nice experience of head movement and a wide display.
(Especially with the Pimax 8KX or, later on, the 12K, if there is support for those, now or in the future?)

That’s all 🙂
Thanks for a great engine worthy of the future!

If you give the same image to both eyes, you’d better also ship a complimentary bucket with your game.

Okay, so you have tried it?

Would it be possible to render, let’s say, everything that’s more than 30 meters away as one picture for both eyes, and everything closer as two?
That is, a combination of the two pictures.
Or would that not save any FPS?

And do you have any comments on the other options I suggested?
Like pictures 2 and 3.
That is, is it possible to let the computer calculate a picture from behind the eyes, and capture two pictures where the eyes are supposed to be?

@Iamjagar I believe the issue is how Nanite and Lumen are rendered. They are done completely differently from the current renderer. Because of this, they are releasing UE5 with Nanite and Lumen non-functional for VR; however, I believe they will be working toward that in a release after UE 5.0, though they didn’t give a timeline for it.

Yeah, I heard them talking about it, and that it will be years away. That’s why I’m trying to think outside the box; maybe it will help.

Humans are extremely sensitive to the eye position being wrong, to the point that your VR headset has sub-millimeter adjustments for eye distance. Rendering both eyes from the same position only works on things very far away. Old versions of UE4 actually had support for monoscopic far field rendering for mobile, although that was removed for some reason.
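
To put numbers on that: each eye’s camera sits half the interpupillary distance (IPD) away from the head’s center, along the head’s right axis. A minimal C++ sketch of that geometry (the types and names here are hypothetical illustrations, not UE’s actual API):

```cpp
#include <cstdio>

// Minimal 3D vector for the sketch.
struct Vec3 { float x, y, z; };

// Hypothetical head pose: a position plus a unit "right" axis.
struct HeadPose {
    Vec3 position;
    Vec3 right; // unit vector pointing from the left eye toward the right eye
};

// Each eye sits half the IPD from the head center along the right axis.
// eyeSign is -1 for the left eye, +1 for the right eye.
Vec3 EyePosition(const HeadPose& head, float ipdMeters, float eyeSign) {
    const float half = 0.5f * ipdMeters * eyeSign;
    return { head.position.x + half * head.right.x,
             head.position.y + half * head.right.y,
             head.position.z + half * head.right.z };
}

int main() {
    HeadPose head{ {0.0f, 1.7f, 0.0f}, {1.0f, 0.0f, 0.0f} };
    const float ipd = 0.064f; // ~64 mm, a typical adult IPD

    Vec3 left  = EyePosition(head, ipd, -1.0f);
    Vec3 right = EyePosition(head, ipd, +1.0f);

    // The two view origins differ by the full IPD, which is why a single
    // shared camera position only looks right for distant geometry.
    std::printf("left eye x = %.3f, right eye x = %.3f\n", left.x, right.x);
    return 0;
}
```

The sub-millimeter hardware adjustment mentioned above corresponds to tuning `ipd` here; get it wrong and the perceived scale of the whole world shifts.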

I’d imagine the main reason Nanite and Lumen won’t work for VR is that they seem to do a lot of processing outside of the render passes. The brute force approach to duplicate all the resources and work for each eye might not be too difficult to hack in, but it would drag down performance so much that it would be completely unusable. Epic would probably not be very keen on supporting that as a feature.
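
Just to make concrete what “duplicate all the resources and work” would mean, here is a deliberately naive sketch (hypothetical stand-in functions, not Unreal’s actual render loop):

```cpp
#include <initializer_list>

// Brute-force stereo: run every pass once per eye, with per-eye resources.
enum class Eye { Left, Right };

struct FrameResources {
    // Per-eye buffers, visibility lists, GI caches, etc. would live here.
};

void RenderNanitePasses(Eye /*eye*/, FrameResources& /*res*/) {
    // Stand-in for Nanite's GPU-driven geometry pipeline.
}

void RenderLumenPasses(Eye /*eye*/, FrameResources& /*res*/) {
    // Stand-in for Lumen's global-illumination passes.
}

void RenderStereoFrame() {
    // All of the work -- including what happens outside the normal render
    // passes -- runs twice, which is why this would be so slow for VR.
    for (Eye eye : { Eye::Left, Eye::Right }) {
        FrameResources res;
        RenderNanitePasses(eye, res);
        RenderLumenPasses(eye, res);
    }
}

int main() { RenderStereoFrame(); return 0; }
```

Techniques like instanced stereo exist precisely to avoid paying that full cost twice, but they assume the passes were built with stereo in mind.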

If I remember right, they are planning on implementing Nanite but not Lumen for VR.

Far sight

Is it not possible to render one image, but crop it so that the center becomes more natural for far sight? (I try to illustrate my thought in the picture, if it’s understandable.)

And that’s why I’m trying to brainstorm ideas so you don’t have to render things twice.

But I don’t understand how they can reflect so much from windows, cars, water, and everything else, yet can’t render the same things from a slightly different position.

I see that they thought about a lot of the things I’m talking about in your link; however, they didn’t talk about how to render/calculate from “the third eye” (or further back), as in examples 2/3 in the first picture.

The monoscopic far field rendering works pretty much like that already, but using only that would cause parallax errors for anything nearby.
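
The cutoff distance is really a disparity budget. Under the small-angle approximation, the angular disparity of a point at distance z is about IPD / z radians; multiply by the headset’s pixels per degree and you get the on-screen error of rendering that point monoscopically. A quick check (the 20 pixels-per-degree figure is an assumption and varies per headset):

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
    const double kPi = 3.14159265358979323846;
    const double ipd = 0.064;            // meters (~64 mm, a typical IPD)
    const double pixelsPerDegree = 20.0; // assumed; headset-dependent

    // Small-angle stereo disparity: roughly IPD / distance radians.
    for (double z : {1.0, 5.0, 30.0, 100.0}) {
        double degrees = (ipd / z) * 180.0 / kPi;
        double pixels  = degrees * pixelsPerDegree;
        std::printf("distance %6.1f m -> disparity %.3f deg (~%.1f px)\n",
                    z, degrees, pixels);
    }
    return 0;
}
```

At 30 m that still works out to a couple of pixels of disparity on such a headset, so even a “30 meters” cutoff isn’t entirely free; closer than that, the errors grow quickly.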

You could technically correct for the parallax using motion vectors, the same way spatial reprojection (motion smoothing, ASW) works. It would cause a lot of artifacting and occlusion issues, though.
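
For a sense of what that could look like, here is a toy depth-based forward warp: shift each pixel of one eye’s image horizontally by its stereo disparity to fake the other eye. All names are hypothetical, and real reprojection (ASW-style) adds motion vectors and filtering on top; the holes this leaves are exactly the occlusion issues mentioned above.

```cpp
#include <cstddef>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<unsigned> pixels; // packed RGBA
    std::vector<float> depth;     // linear depth in meters
};

// Forward-warp one eye's image toward the other eye. Pinhole-camera stereo
// disparity in pixels is focalPx * IPD / depth.
Image ReprojectToOtherEye(const Image& src, float ipd, float focalPx) {
    Image dst = src; // holes (pixels nothing maps to) keep stale colors
    for (int y = 0; y < src.height; ++y) {
        for (int x = 0; x < src.width; ++x) {
            const size_t i = static_cast<size_t>(y) * src.width + x;
            const float z = src.depth[i];
            const int shift =
                (z > 0.0f) ? static_cast<int>(focalPx * ipd / z) : 0;
            const int nx = x + shift;
            if (nx >= 0 && nx < src.width) {
                const size_t j = static_cast<size_t>(y) * src.width + nx;
                dst.pixels[j] = src.pixels[i];
            }
            // Geometry visible to one eye but hidden from the other has no
            // source pixel -- those gaps are the artifacting in question.
        }
    }
    return dst;
}

int main() {
    Image eye;
    eye.width = 4; eye.height = 1;
    eye.pixels = {0xff0000ffu, 0x00ff00ffu, 0x0000ffffu, 0xffffffffu};
    eye.depth  = {1.0f, 2.0f, 30.0f, 100.0f};
    // 600 px focal length is an arbitrary assumed value for the demo.
    Image other = ReprojectToOtherEye(eye, 0.064f, 600.0f);
    (void)other;
    return 0;
}
```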

The thing with Nanite/Lumen is likely not an issue of whether they can get them to render from slightly different places, but how much time and effort it would take to develop, and whether it can be made performant enough for VR to be worth doing.

Well, I hope they can get some help and support from the forum, because it would really be something if it worked.

It would be awesome if we could have Lumen & Nanite for VR.
I guess we all would love to have that solution.

Hmm… but for now, I’m assuming the best approach is to use Forward Rendering and GPU Lightmass for baking. Or am I seeing this wrong? What are your thoughts on this?
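
For reference, a common starting point for a forward-rendered VR project is something like this in Config/DefaultEngine.ini (these console variables do exist in UE, though the exact values are project-dependent, and GPU Lightmass itself is enabled as a plugin rather than through these settings):

```ini
[/Script/Engine.RendererSettings]
; Use the forward renderer (generally cheaper for VR, and enables MSAA).
r.ForwardShading=1
; 4x MSAA; only takes effect with forward shading.
r.MSAACount=4
; Render both eyes in a single pass where the platform supports it.
vr.InstancedStereo=1
```

Until Lumen works in VR, baked lighting via GPU Lightmass plus forward shading does seem to be the commonly suggested combination.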

Thanks for all your input, appreciate it!
