Oops, sorry for that post, I just read page 3 where you mentioned you tried voxel cone tracing and it was slower. That’s a surprising result to me. Are you still using fixed step sizes?
Hi again,
Sorry to bother you with this. I cloned 4.8 and it built fine, and I have 4.7 building fine also. I found errors in the logs that make me think some of the files have errors. It might be a dependency issue, but right now I am quite confused.
Here is a log file of the build. First error starts at line 555.
http://www.f00n.com/random/AHRbuildlog.txt
Any further help is greatly appreciated, as I can’t wait to get something made with it. If you don’t have time I understand, and I will be posting on the AnswerHub to see if anyone else can help.
Thanks.
Yes, and in practice it is faster, both on tracing (not really sure why, probably the GPU likes that access pattern better) and on voxelization, as you don’t need to pre-filter the voxels.
I’m also just tracing 5 rays plus one for reflections, so that may be another difference. One thing I really want to add is reusing the previous frame, or a few more, if the pixel hasn’t moved a lot, and keeping that GI. That way you increase the ray count without a performance hit (at the expense of some ghosting).
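Roughly, that temporal reuse could look like the sketch below (a toy scalar version I made up to illustrate the idea; `accumulate_gi` and the blend factor are not from AHR):

```python
def accumulate_gi(prev_gi, current_gi, pixel_moved, alpha=0.2):
    # Disocclusion / large motion: history is invalid, so restart from
    # the current frame only.
    if pixel_moved:
        return current_gi
    # Exponential blend: the history acts like roughly 1/alpha frames of
    # extra rays, at the cost of ghosting under motion.
    return alpha * current_gi + (1.0 - alpha) * prev_gi
```

With alpha = 0.2 you effectively get about 5 frames’ worth of rays for free on static pixels, which is exactly where the ghosting trade-off comes from.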
EDIT: A few pages back you can see a comparison between AHR and VXGI. While VXGI looks better, AHR is about 3 times faster.
I think the problem is you downloaded the broken 4.8 version. I have deleted that, so try downloading the zip / cloning again; it should work.
ahh thank you
Post here when you try it, want to see how it goes.
I must have gone wrong somewhere. I deleted the repo and re-cloned the “release” version, but it’s the same. I don’t see a specific 4.7 version, and of the choices that are offered, it seems “ahr_new_release” should be the right one?
Hmm, I think it was my bad. You should clone release, but I see that the release branch is old. Will check it out when I get home.
Thank you.
Wow, I never would’ve thought you could get such good results with just 5 rays. The noise is barely noticeable. Are you relying heavily on the bilateral filter to smooth out the noise? Because, if so, you’d also lose lighting detail for surfaces with high-frequency normal map details. But everything you’ve shown so far looks fantastic, really. Funny how voxel cone tracing does all that expensive filtering, yet it barely makes a difference in image quality.
VXGI does look better in that comparison, but only because of the color bleeding. If you can fix the bilateral filter, it should look just as good as VXGI. Maybe VXGI is using the “joint bilateral upsample” technique with the depth buffer, mentioned in this article: http://web.stanford.edu/class/cs448f/lectures/3.1/Fast%20Filtering%20Continued.pdf.
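For reference, here is a toy 1D version of that joint bilateral upsample (all names and parameters are illustrative, not VXGI’s actual filter): the spatial weight and a depth-similarity weight from the full-res depth buffer are multiplied, so low-res color is never averaged across a depth edge.

```python
import math

def joint_bilateral_upsample_1d(low_color, low_depth, high_depth, scale,
                                sigma_s=1.0, sigma_d=0.1):
    # Upsample low_color (with matching low_depth) to len(high_depth)
    # samples, guided by the full-resolution depth buffer.
    out = []
    for i, d_hi in enumerate(high_depth):
        x_low = i / scale  # this pixel's position in low-res coordinates
        w_sum = c_sum = 0.0
        for j, (c, d_lo) in enumerate(zip(low_color, low_depth)):
            # Spatial Gaussian times depth-similarity Gaussian.
            w = (math.exp(-((j - x_low) ** 2) / (2 * sigma_s ** 2)) *
                 math.exp(-((d_lo - d_hi) ** 2) / (2 * sigma_d ** 2)))
            w_sum += w
            c_sum += w * c
        if w_sum > 0.0:
            out.append(c_sum / w_sum)
        else:
            # No depth-compatible sample nearby: fall back to nearest.
            out.append(low_color[min(int(round(x_low)), len(low_color) - 1)])
    return out
```

Across a depth discontinuity the depth weight kills the samples from the other surface, which is exactly what a plain bilinear upsample gets wrong.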
Clever trick here (for GI, reflections are just traced normally).
I trace with the object normal, not the normal map, then apply a heavy bilateral blur (that I need to improve). The trick is storing the 5 rays separately, blurring them separately, and then on the final step, at full res, sampling the 5 rays and interpolating the result from the normal-mapped normal by doing a weighted average of the five rays, where each weight is the dot product of the normal and the ray direction. That way you get an approximation of the light that would arrive at a given normal.
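The weighting step described above can be sketched like this (a scalar toy version, not the actual AHR shader; the function name and the clamping of negative weights are my assumptions):

```python
def shade_from_rays(n_mapped, ray_dirs, ray_radiance):
    # Weight each ray by how well it aligns with the normal-mapped normal.
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # Clamp: rays below the shading hemisphere contribute nothing.
    weights = [max(dot(n_mapped, d), 0.0) for d in ray_dirs]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    # Normalized weighted average of the per-ray radiance.
    return sum(w * r for w, r in zip(weights, ray_radiance)) / total
```

Since the rays themselves were traced with the geometric normal, the expensive blur runs once per ray, and only this cheap weighted average depends on the high-frequency normal map.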
Pppppfffffttt! Mind blown!
Well, I had messed up my repo a bit; think I’ll have it fixed in a few minutes. The branch is “release”, ignore all the others. In particular, ahr_new_release will have the day-to-day changes and might not even compile. Actually, I think I’ll change the name to something clearer, like ahr_live_branch or similar.
EDIT: It’s now fixed. The release branch now contains all the correct commits, and should be working. Just ignore all the other branches.
It is very likely that I am missing the point, but I am going to ask anyway: what stops devs from doing something similar for direct shading, but replacing the rays with the lights, so it is possible to only compute lighting over n pixels while still getting very good quality?
Maybe for shadows you could do something like that, where you compute the shadow term for a fixed set of directions and then interpolate, but probably not for directional lighting. In any case, you could think of it as a sibling of precomputed radiance transfer, and there’s some good research in that area.
Still, good you asked! Always ask, the worst that can happen is that someone says you’re wrong
Thanks, now I know what PRT really is. Thankfully I haven’t talked about that yet in other forums, so now there is more research for me to do. Also, I think I have heard of a similar technique in one of the many Square Enix papers about path tracing using rasterization.
Let’s hope that in the end using that “interpolation” is a viable option for other things such as direct lighting.
EDIT: I had an idea that is probably going to work, but could slow down the rendering instead of improving it. Maybe the “interpolation” can be emulated by applying an inverted Fresnel for each light (or for lights that come from the same direction, if that’s possible), using the light vector as the camera vector input and the low-resolution direct shading of each light as their respective power inputs, but only taking into account the normal map and not the object normal.
It can all be done using Fresnel vector ops for prototyping, and if it really turns out to be worth the effort I’m going to TRY to implement it directly as a shader to speed things up (I am by no means a seasoned general programmer, and I know far less about programming in HLSL). But because this is not a general discussion forum, I am going to start my own thread when I have actually done it.
A few weeks ago I promised that I’d make a scene, but I can’t work on it right now because I have some problems with school. I’m so sorry.
Right now you’re just raymarching till hit, though 4.8 should fully support the signed distance field volumetric textures, which should dramatically improve tracing time as tracing distance increases. But then, as distance increases, you’ll get either dramatically different results from temporal instability or dramatic undersampling… still, I’d suggest it’s worth a try.
Using a distance field to accelerate the raymarching is a very good idea! I think the method should hold up fairly well for long distances, but if it gets too noisy due to undersampling then you could always use a cascaded voxel representation and read from a lower-resolution cascade as the rays get further apart.
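For reference, the distance-field acceleration being discussed is essentially sphere tracing: step by whatever distance the field reports, since no surface can be closer than that. A minimal toy sketch (names and constants are illustrative, not engine code):

```python
def sphere_trace(sdf, origin, direction, max_dist=100.0, eps=1e-3, max_steps=128):
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t   # close enough to the surface: hit
        t += d         # safe step: no surface is closer than d
        if t > max_dist:
            break
    return None        # miss

# Example scene: a unit sphere at the origin.
sphere = lambda p: sum(c * c for c in p) ** 0.5 - 1.0
t_hit = sphere_trace(sphere, (0.0, 0.0, -5.0), (0.0, 0.0, 1.0))
```

Empty space gets crossed in a handful of large steps instead of hundreds of fixed ones, which is why the step count stays low even as trace distance grows.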
Cascaded voxel textures could also be used to accelerate the raymarching in much the same way. First you read from the lowest-resolution cascade, and if that voxel is empty, you can skip the ray all the way to the edge of that voxel. Then you move on to the next cascade. That way you can skip through large empty areas quickly. It’s basically sparse voxel octree raytracing, but without the sparse part. It should avoid the performance pitfalls of sparse octrees, too.
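That empty-space skipping, reduced to a toy 1D example (hypothetical code, not from any engine): a coarse grid marks which blocks of the fine grid contain any geometry, and an empty block lets the ray jump a whole block in one step.

```python
def march(fine, block, start):
    """Return the index of the first occupied fine cell at or after `start`,
    plus the number of cell visits performed (a rough cost measure)."""
    # Coarse level: one bool per block of `block` fine cells.
    coarse = [any(fine[b:b + block]) for b in range(0, len(fine), block)]
    i, visits = start, 0
    while i < len(fine):
        visits += 1
        if not coarse[i // block]:
            # Whole block is empty: skip to the start of the next block.
            i = (i // block + 1) * block
            continue
        if fine[i]:
            return i, visits
        i += 1
    return None, visits

fine = [False] * 64
fine[50] = True
hit, cost_skip = march(fine, 8, 0)  # far fewer visits than the 51 a
                                    # cell-by-cell march would need
```

A real 3D version needs DDA stepping per cascade and the stack bookkeeping mentioned below, which is where the extra complexity (and divergence) comes from.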
But using distance fields is probably the better method overall. It might also help performance to collapse the instanced distance fields into one global distance field. I think the guy who’s working on DFGI is already doing that.
No problem! I’m stuck with studies myself.
I really don’t see why it should be so much faster. Sure, you take fewer samples, but the access pattern is awful for GPUs. Take for example what I talked about earlier, when I tested trilinear interpolation: doing “naive” raymarching, even while taking more samples, ended up equal to or faster than the more complex version with a buffered binary grid.
While it looks good on paper, doing a sparse trace is not that good in reality. For example, consider when you hit an occupied lower-res cascade, but the ray doesn’t really hit the higher-res voxels inside. You’ll have to keep a stack so you can go back up when you don’t get a hit. It is more complex.
As usual, the only way to know for sure is to implement it and see, and luckily Epic is already working on DFGI. Also, distance fields have some limitations on the geometry and use quite a lot of memory, so it’s nice to have different techniques for different projects.