Hey, big fan of your work, been following the thread for a while.
Tricky question. I'll talk about AHR, as I have no idea about VXGI's limits. Technically, you could have as much distance as you want; it's just a matter of making the bounds as large as the entire scene and using a high sample count. In reality there are limitations. For example, you won't get 1 km long rays at a 1 cm voxel size; it's all about finding the balance between sample count, voxel size and bounds size.
Let's do some math. Given that you mentioned large scenes, I'll say you need to use a movable bound. Let's say 100 m by 100 m by 20 m (up), which I think is about what you'll need in most scenes, as you can't really see that much detail past 100 m. I suggest adding an ambient cubemap to fill in the rest.
Also, let's say a voxel size of 0.25 m and a sample count of 32. 25 cm precision over 100 m sounds like enough, but you could go even lower, to say 20 or 15 cm; just remember that memory usage is cubic with the number of voxels. With those settings the voxel grids will take about 40 MB (there are a few of them, but at most we're talking about 50 MB; the math is simply SizeX/VoxSize * SizeY/VoxSize * SizeZ/VoxSize, then multiply by 4 for the emissive grid, or divide by 8 for the binary grid), and the 32 samples give us a range of 8 m.
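In case it's useful, here's a quick sketch of that math in code, just plugging the numbers above into the formula. Variable names are mine, and the 4 bytes/voxel (emissive) and 1 bit/voxel (binary) layout is taken from the multiply-by-4 / divide-by-8 rule above; actual engine allocations may differ a bit:

```cpp
// Rough sanity check of the numbers above.
#include <cstdio>

int main()
{
    const double BoundsX = 100.0, BoundsY = 100.0, BoundsZ = 20.0; // meters
    const double VoxSize = 0.25;                                   // meters
    const int    SampleCount = 32;

    const double VoxelCount = (BoundsX / VoxSize) *
                              (BoundsY / VoxSize) *
                              (BoundsZ / VoxSize);                 // 400*400*80 = 12.8M voxels

    const double EmissiveMB = VoxelCount * 4.0 / 1e6;   // ~51 MB (4 bytes per voxel)
    const double BinaryMB   = VoxelCount / 8.0 / 1e6;   // ~1.6 MB (1 bit per voxel)
    const double RayRangeM  = SampleCount * VoxSize;    // 32 * 0.25 = 8 m

    std::printf("Voxels: %.0f\n", VoxelCount);
    std::printf("Emissive grid: ~%.1f MB, binary grid: ~%.1f MB\n", EmissiveMB, BinaryMB);
    std::printf("Ray range: %.1f m\n", RayRangeM);
}
```

The same range formula also shows why the 1 km / 1 cm case above doesn't work: you'd need on the order of 100,000 samples per ray.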
The idea behind all this is that for most uses the range should be enough, but it will require some tweaking of the settings.
I'm a little curious, and probably missed the explanation earlier, but are you voxelizing everything in the bounds at run-time? Or are the bounds fetching pre-voxelized static geometry and running a union with anything dynamic?