I think you guys are making assumptions about things you do not understand. It’s like you heard about VPLs in a different context and now they’re automatically bad. Virtual point lights are simply freely placed point representations of bounced lighting. Freely placed bounce-light representations can be better than ones with quantized positions (voxels), because you can be smarter about where and how you place them.
VPLs with Distance Field GI are placed by tracing rays from the light through the distance field representation of the scene. It takes ~0.5ms in a full Fortnite level, and that’s without even culling the objects being ray traced against (unoptimized). Triangle count does not affect it. Contrast this with the Reflective Shadow Maps used by Light Propagation Volumes to inject VPLs, which would take about 6ms in this scene (from experience).
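To make the idea concrete, here is a minimal sketch of tracing a ray from a light through a distance field via sphere tracing, with the hit point being where a VPL would be injected. The scene is a single analytic sphere SDF standing in for real mesh distance fields; all names here are my own, not from any engine.

```python
# Minimal sketch: place a VPL by sphere-tracing a ray from the light
# through a signed distance field. The "scene" is one analytic sphere;
# a real implementation traces against per-mesh distance field volumes.
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance from point p to a sphere (negative = inside)."""
    d = [p[i] - center[i] for i in range(3)]
    return math.sqrt(sum(c * c for c in d)) - radius

def sphere_trace(origin, direction, sdf, max_dist=100.0, eps=1e-4):
    """March along the ray, stepping by the distance to the nearest surface."""
    t = 0.0
    while t < max_dist:
        p = [origin[i] + t * direction[i] for i in range(3)]
        d = sdf(p)
        if d < eps:
            return p  # hit: this is where the VPL would be injected
        t += d        # safe step: nothing is closer than d
    return None       # ray escaped the scene

# A light at the origin shooting down +z hits the front of the sphere:
hit = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf)
# hit lands near (0, 0, 4), the sphere's surface facing the light
```

Note the step size is the SDF value itself, which is why triangle count never enters into it: the cost depends only on how many distance field samples the ray takes.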
Voxel based methods are only good if you can afford enough resolution to keep the voxels small, say 10cm. That’s not possible on anything but the $600 GPUs, and even there the massive GPU time is not well spent. The reason is that voxels are a terrible representation for diagonal geometry or thin walls. The result is leaking everywhere and self-occlusion artifacts, or, if you allocate all your resolution up close, a poor view range. Voxel methods also require huge costs to handle dynamic scene updates, because revoxelizing requires rasterizing the mesh’s triangles multiple times.
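The thin-wall leaking is easy to demonstrate with numbers. A sketch, assuming 10cm voxels classified by whether their center falls inside the geometry (the placements and classification rule are hypothetical, just to show the failure mode):

```python
# Sketch of the thin-wall problem: a 5cm-thick wall sampled onto a 10cm
# voxel occupancy grid. Depending on where the wall lands relative to
# voxel centers, it can vanish from the grid entirely, so light leaks
# straight through it.
voxel_size = 0.10  # 10cm voxels

def voxel_center(i):
    return (i + 0.5) * voxel_size  # centers at 0.05, 0.15, 0.25, 0.35...

# A 5cm wall that happens to contain a voxel center: it gets captured.
wall_min, wall_max = 0.12, 0.17
occupied = [wall_min <= voxel_center(i) <= wall_max for i in range(4)]
# occupied == [False, True, False, False]

# Shift the same wall so no voxel center falls inside it: it disappears.
wall_min, wall_max = 0.26, 0.31
leaked = not any(wall_min <= voxel_center(i) <= wall_max for i in range(4))
# leaked == True: the grid thinks there is no wall at all
```

A distance field sampled at the same 10cm spacing still records how far each sample is from the wall, so interpolating those samples reconstructs the surface in between rather than dropping it.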
Distance field GI gets around these problems by 1) storing geometry in distance fields, which reconstruct diagonal or thin surfaces accurately after interpolation, and 2) not requiring any operations on the mesh’s triangles at all. A dynamic scene update is just replacing a matrix on the GPU; no revoxelization.
Anyway I probably won’t post any more early results here. Not sure what I expected.