New rendering technology idea

Hi All,
I hope this gets read by a few peeps that know a lot more about rendering than I do.
Disclaimer: I am not a render-engine specialist, nor a software developer, just a professional 3D artist with 20 years of experience using different render engines and compositors (like After Effects and Nuke). So I know a little bit about how a render is made.

When I first saw Nanite and Lumen I was impressed like everyone else, and I wondered how they (Epic) did it.
A lot of videos later, I have a good idea of what they did (though some of it is still hard to grasp). It is a brilliant piece of software engineering with a lot of advanced trickery to pull it all off.

My idea goes a step further: the basic idea of Nanite (distance fields) combined with motion-vector maps. You all know distance fields, a kind of voxel-like representation of a solid object. Objects will be sharp and detailed close by and fuzzier in the distance.
And what are motion-vector maps? A motion-vector map is a visual representation of all motion in a render, produced on a frame-by-frame basis. This is something that is not standard in UE… but it can be done with some HLSL shader code or some node setup.
The trick is to combine both and get a reduction in rendering workload whenever there is motion in the scene (pretty much always). The motion-vector map works as a reducer for the distance fields: the more intense the MV map is, the more reduced the models become. Makes sense; do you really see every detail on a statue when driving by at 100 km/h? (That would be about 2483,554 opossums/h for the Americans.) The model would be a blur that could be made out of a few dozen voxels instead of many millions, but you would still see a shape that looked pretty much like the million-voxel model.
You could also apply distance to the center of the screen on top of the MV map.
Things in the center of the screen would be higher resolution than things at the sides. Makes sense, since you are concentrating on the middle of the screen and not on something in the corner.
Distance to the camera could be added too. Things that are further from the camera would be lower resolution, which reduces LOD pops (which don't really exist anymore with distance fields? But not all objects are Nanite-able…).
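To make the idea concrete, here is a minimal sketch of how the three factors above (motion intensity, distance from screen center, distance from camera) might be folded into a single per-object detail multiplier. All the names, weights, and thresholds are illustrative assumptions, not any engine's actual API:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch: combine three per-object factors into one detail
// multiplier. All weights and struct/function names are illustrative.
struct LodInputs {
    float motionPixelsPerFrame; // screen-space motion magnitude from the MV map
    float distFromScreenCenter; // 0 = center of screen, 1 = corner (normalized)
    float distFromCamera;       // world units
};

// Returns a multiplier in (0, 1]; smaller = coarser model allowed.
float DetailMultiplier(const LodInputs& in) {
    // Fast motion -> blur hides detail. Saturates around 32 px/frame,
    // never dropping below 10% of full detail.
    float motionFactor = 1.0f - std::min(in.motionPixelsPerFrame / 32.0f, 0.9f);
    // Periphery gets up to 50% less detail than the screen center.
    float centerFactor = 1.0f - 0.5f * std::clamp(in.distFromScreenCenter, 0.0f, 1.0f);
    // Distant objects cover fewer pixels; halve detail every 1000 units.
    float distanceFactor = 1.0f / (1.0f + in.distFromCamera / 1000.0f);
    return motionFactor * centerFactor * distanceFactor;
}
```

A static object at the screen center and near the camera keeps full detail (multiplier 1.0), while a fast-moving object in a corner far away could justifiably be drawn from a handful of voxels.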
I think this could be implemented (or is it already planned?) and I wanted to share my idea and hope to contribute something back to UE and Epic :slight_smile:
Kind regards,

Rob

Nanite already controls detail by distance (the goal is that you never have more than one polygon per pixel, since more than that would be unnecessary). I wouldn't reduce detail at the edges of the screen: while in a lot of situations people will be focused at the center, there would be plenty of times when you would look elsewhere, and there's nothing to stop you from doing that.
I'm not sure about the motion stuff. I would think that a vector pass would be done after Nanite gets processed, so that could be an issue.

Hi Darthviper107,

You have to see the MV stuff working together with Nanite in motion. This idea is ideal for fast-moving games, where the focus is naturally at the center of the screen, where your car, plane, boat, whatever is going.

I am not a render-engine programmer, but if they can invent things like Nanite and Lumen, I think they will find a way around it :slight_smile:

Adding vector motion to lumen wouldn’t be impossible, but likely either incredibly hard or expensive.

Edit: Lumen, not nanite

Nanite doesn’t use distance fields; Lumen does. You can learn how it works in the docs and in the livestream. Also, Unreal does have a motion vector pass; that’s how motion blur works.


Using motion to LOD something seems like a good idea, but it will not be as effective as you think, because the LODs will only lower when the screen is already blurred by large motion. In other words, you are increasing the performance of a blurry image, which is pointless because it’s a blurry image.

This would only be useful in situations of fast constant motion, like the example you gave, where the statue is going to only be seen in a blur, but never at high detail. Transient motion (like player-controlled camera movement) is short, so the LODs will only lower for a few frames max before increasing again. Slow constant motion will not benefit because the blur will be nearly non-existent and too small to hide the lower LODs.

You can’t use the motion vectors, because you need to render the object to render the motion vectors; I think this is what darthviper107 was pointing out. You would need to get the motion of the object relative to the camera before it is rendered, then scale the LOD based on that.
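That pre-render estimate could be as simple as projecting the object's position with last frame's and this frame's camera, measuring the screen-space delta, and picking a LOD from it. A minimal sketch, with illustrative names and thresholds (a real engine would use its own math types and view-projection matrices):

```cpp
#include <cmath>

// Minimal sketch of estimating an object's screen-space motion *before*
// rendering it, from its projected position in the previous and current
// frame. Struct and function names are illustrative assumptions.
struct Vec2 { float x, y; };

// Screen-space speed in pixels per frame, from two projected positions.
float ScreenMotionPixels(Vec2 prevScreenPos, Vec2 currScreenPos) {
    float dx = currScreenPos.x - prevScreenPos.x;
    float dy = currScreenPos.y - prevScreenPos.y;
    return std::sqrt(dx * dx + dy * dy);
}

// Pick a coarser LOD index as motion grows (0 = most detailed).
int LodFromMotion(float pixelsPerFrame, int maxLod) {
    int lod = static_cast<int>(pixelsPerFrame / 16.0f); // drop one LOD per 16 px
    return lod < maxLod ? lod : maxLod;
}
```

This sidesteps the chicken-and-egg problem with the rendered velocity buffer, since only the object's transform and the camera matrices from two frames are needed, both available before any geometry is submitted.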