I’m not sure how many people are aware of this, but there’s a relatively recent (October 2017) research paper on arXiv about a dynamic GI approach I hadn’t heard of before, called Deep Illumination.
Having read through the paper, it seems like a fascinating approach to both the quality and the performance problems of dynamic global illumination. In essence, it trains a generative adversarial network (GAN) to reproduce the output of an expensive GI technique from cheap image-space buffers, so that at runtime the network can generate convincing indirect lighting in real time at very low cost.
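To make the idea concrete, here’s a minimal PyTorch sketch of the kind of image-to-image generator this implies. PyTorch is my choice, not the paper’s code, and the class name, layer sizes, and the 10-channel input layout are illustrative assumptions; the paper’s actual network is a deeper encoder-decoder.

```python
import torch
import torch.nn as nn

class GIGenerator(nn.Module):
    """Illustrative generator: maps stacked image-space buffers to a GI image."""
    def __init__(self):
        super().__init__()
        # Assumed input layout: depth (1) + normals (3) + diffuse (3) + direct light (3) = 10 channels
        self.encoder = nn.Sequential(
            nn.Conv2d(10, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, buffers):
        # buffers: (N, 10, H, W) tensor of per-pixel scene information
        return self.decoder(self.encoder(buffers))
```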
The researchers state that they can get VXGI quality with only a fraction of the performance impact.
A quote from the paper giving an overview of how it works:
“There are two phases to our approach: 1) the training phase, and 2) the runtime phase. For the training phase, we extract G-buffers (depth map, normal map and diffuse map), the direct illumination buffer, and the output from any expensive GI technique, which we use as our “ground truth,” from a 3D scene for different possible light-camera-object configurations in terms of position and direction. (We have trained on the output of path tracing and Voxel-based Global Illumination, but our technique will work on any GI technique.) The GAN network is then trained in a semi-supervised fashion where the input to the network is the G-buffers and direct illumination buffer. The ground truth, that is, the output from the chosen GI technique, is used by the network to learn the distribution with the help of a loss function. Once training is complete, we can then use the trained network for the runtime phase. During the runtime phase, the network can generate indirect illuminations and soft shadows for new scene configurations at interactive rates, with a similar quality to the ground truth GI technique used for training, and using only image space buffer information.” (Page 5).
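To spell out what those two phases might look like in code, here’s a hedged PyTorch sketch reusing the GIGenerator from above. The GIDiscriminator, the L1 loss weighting, and the placeholder data are my assumptions, in the spirit of standard conditional-GAN (pix2pix-style) training, not the paper’s actual implementation.

```python
import torch
import torch.nn as nn

class GIDiscriminator(nn.Module):
    """Illustrative discriminator: scores (buffers, GI image) pairs as real or fake."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(13, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 10 buffer + 3 GI channels
            nn.Conv2d(64, 1, 4, stride=1, padding=1),                      # per-patch real/fake logits
        )

    def forward(self, buffers, gi):
        return self.net(torch.cat([buffers, gi], dim=1))

gen, disc = GIGenerator(), GIDiscriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# Placeholder training data: in practice each pair is (G-buffers + direct light,
# ground-truth GI frame) rendered offline with path tracing or VXGI.
loader = [(torch.randn(4, 10, 64, 64), torch.randn(4, 3, 64, 64))]

# --- Training phase ---
for buffers, ground_truth in loader:
    # Discriminator: push real pairs toward 1, generated pairs toward 0.
    fake = gen(buffers)
    d_real = disc(buffers, ground_truth)
    d_fake = disc(buffers, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the ground truth.
    d_fake = disc(buffers, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, ground_truth)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# --- Runtime phase ---
# Only the trained generator runs per frame; the expensive GI technique is gone.
with torch.no_grad():
    new_frame_buffers = torch.randn(1, 10, 64, 64)  # stand-in for a live frame's buffers
    gi_frame = gen(new_frame_buffers)
```

The key point, if I’m reading the paper right, is that the expensive GI technique only ever runs offline to produce training data; at runtime the per-frame cost is a single forward pass through the generator, which is where the claimed performance win comes from.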
Has anybody looked into this technique? It sounds like an amazing solution to the usual dynamic GI trade-offs: limited quality in the case of LPV, and performance-heavy calculations in the case of VXGI.
Here’s a link to the research paper:
[1710.09834] Deep Illumination: Approximating Dynamic Global Illumination with Generative Adversarial Network
https://arxiv.org/abs/1710.09834 (Abstract/summary page)
https://arxiv.org/pdf/1710.09834.pdf (Actual PDF of the paper)