Hello,
Looks like great work and I'm looking forward to testing it, but it won't compile for me due to a bug in version 4.12. It's fixed in 4.12.5; would you be able to update?
BUG
Already did
Will double-check everything today and push it to git
Perfect. I did try to do it on my version, but git's being stupid for me as usual.
Just pushed 4.12.5 to git.
Also did some work on the OpenGL side. I need to modify a few shaders for it to start, and then I can test it. Still, I've never used OpenGL, so it probably won't work for a while D:
Thanks!
I have no knowledge of either API, but I’ll try to fix any issues with clang (on GNU/Linux and maybe OS X) once it’s working on Windows
Didn't have much time to work today, and probably won't have any this week, but I did catch a small break. As it was too short to do anything productive, I just played around and ended up with a base for SSS.
Can’t wait to use it on OpenGL
Maybe one day Epic will add your work to the official engine, once it’s complete
Well, after some really busy weeks with university projects (I had to pull 14-16 hour shifts to get the code finished the last few days), I now have more free time.
I did screw up my repo just now, so I had to delete everything and clone again. Luckily I just changed my computer, so it's a good opportunity to fully test my new beast! Xeon E5 2683 V3, 14 cores (plus HT) at 2.6 GHz (with some BCLK OC), along with 32 GB of DDR4 2133 CL16 RAM (running at 2217 CL12). I'm still hyped over it.
I screwed up my repo trying to update to 4.13, so I think I'm going to hold off on that update for a bit and see about working on features and improvements instead.
Congrats on the new PC, buddy!
Well, I had an idea the other day to improve a reconstruction technique I worked on a while back, for reconstructing an image from a sparse data set. Now I've managed to make the selection of the points to keep smarter, and well, I guess the results speak for themselves.
Top left is the original image, and on the right is the reconstruction. Bottom left is the image with the culled pixels in bright red, so everything that’s red is not on the input image for the reconstruction. Bottom right is the relative error % (times 100 so it’s visible).
I actually wanted to show it working on AHR, but I haven't really had much time to work on that, so I'm just showing this for now. The idea is to use it before tracing, so I only trace a few pixels. The reconstruction code is fairly light, so it should add little overhead. Most of the overhead will probably come from the lowered coherence, since the trace will be sparse, but considering that image only has about 20% of the original's pixels, and with GI I can probably push it to 10% or less, it should allow a really big performance boost. That will also let me trace more rays and improve quality a lot. Plus, the way it works makes it trivial to apply temporal filtering, essentially doubling the number of rays for free.
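To give an idea of the kind of selection/reconstruction I mean, here's a rough sketch (not the actual code from the demo; the gradient test, the uniform "safety net" and the fill weights are just illustrative placeholders): keep pixels where the image changes sharply, plus a few random ones, and fill everything else from the nearby kept pixels.

```cpp
// Illustrative sketch only: edge-based pixel selection plus a crude
// distance-weighted fill. Thresholds and weights are made up.
#include <cmath>
#include <cstdlib>
#include <vector>

struct Image
{
    int w, h;
    std::vector<float> px; // grayscale, row-major, for simplicity
    float at(int x, int y) const { return px[y * w + x]; }
};

// Mark pixels to keep: strong gradients plus a sparse uniform "safety net".
std::vector<bool> SelectPixels(const Image& img, float edgeThresh, float noiseProb)
{
    std::vector<bool> keep(img.w * img.h, false);
    for (int y = 1; y < img.h - 1; ++y)
        for (int x = 1; x < img.w - 1; ++x)
        {
            float gx = img.at(x + 1, y) - img.at(x - 1, y);
            float gy = img.at(x, y + 1) - img.at(x, y - 1);
            bool edge  = std::sqrt(gx * gx + gy * gy) > edgeThresh;
            bool lucky = (std::rand() / float(RAND_MAX)) < noiseProb;
            keep[y * img.w + x] = edge || lucky;
        }
    return keep;
}

// Reconstruct culled pixels as a distance-weighted average of kept pixels nearby.
Image Reconstruct(const Image& sparse, const std::vector<bool>& keep, int radius = 4)
{
    Image out = sparse;
    for (int y = 0; y < sparse.h; ++y)
        for (int x = 0; x < sparse.w; ++x)
        {
            if (keep[y * sparse.w + x]) continue;
            float sum = 0.f, wsum = 0.f;
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx)
                {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sy < 0 || sx >= sparse.w || sy >= sparse.h) continue;
                    if (!keep[sy * sparse.w + sx]) continue;
                    float wgt = 1.f / (1.f + float(dx * dx + dy * dy));
                    sum  += sparse.at(sx, sy) * wgt;
                    wsum += wgt;
                }
            if (wsum > 0.f) out.px[y * out.w + x] = sum / wsum;
        }
    return out;
}
```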
Hope to have something more to show soon.
Thanks!
Image reconstruction for sparse tracing? It's a novel idea; I wonder what the delta over motion is. Another thing to note, though, is that the test image is fairly low frequency in terms of spatial delta: lots of smooth lines and gradient-like changes. The input also appears heavily biased towards edges, which will make accurate reconstruction easier. It'd be a better test if the input were actually randomly sampled from a Halton sequence or whatever. Maybe it won't matter that much for diffuse traces, but it might get bad for specular. Still, excited to see what the results are!
Here’s with a more complex scene.
Not sure what you mean by “It’d be a better test if the input was actually randomly sampled from halton sequence/whatever.”. Part of the reason it works is because I select what pixels to keep. For AHR I’ll select based on the GBuffer.
Also, here's the exe I used to make those images. It expects a file called "tex.bmp" to be placed in the same folder as the executable. Then you need to input two sparsity parameters, one for the edges and the other for uniform noise; after that it shows the reconstructed image and the number of culled pixels.
By pressing “I” and “O” you can see either the culled image, or the error.
It's single-threaded, CPU-based and not optimized at all, so ignore performance (it takes a second at most, though).
If you could try it I would love to see what results you get!
About the edges, I'm 97.758% sure it's an implementation error, and not an error in the algorithm itself.
PS: This was meant mostly for the diffuse part, as I can get away with some approximation error there. As you said, it might be more problematic for specular, but we'll see how it works.
PPS: Forgot the link to the exe https://drive.google.com/file/d/0B6A51p8LzEWYUTlLdDhWUU1zdk0/view?usp=sharing
PPPS: And the SDL DLL: https://drive.google.com/file/d/0B6A51p8LzEWYR0Z3OEhKVlNPU3M/view?usp=sharing
Ok, I suppose I'm lost on what you're trying to reconstruct. Why essentially compress and then reconstruct the G-buffer itself?
No no, the idea is to use the GBuffer to select the pixels to trace, trace GI for only those pixels, and then reconstruct the sparse GI image.
I mentioned the GBuffer because I need some info on the scene to select which pixels to kill, based on normals, albedo, stuff like that. The nice thing is that I have more info on the scene by using the GBuffer compared to the images I showed earlier, so I can make better guesses about what to keep and what to kill.
The core idea is to trace rays for fewer pixels.
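To make that a bit more concrete, here's a hedged sketch of the per-frame flow (this isn't AHR's actual code; the selection heuristic, the "trace" stand-in and the row fill are placeholders):

```cpp
// Illustrative sketch of the flow: GBuffer-driven selection -> sparse trace ->
// reconstruction. The heuristic, the "trace" stand-in and the fill are placeholders.
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

struct GBufferTexel { float depth; float nx, ny, nz; };

// Keep a pixel when its normal or depth differs noticeably from the pixel to its left,
// plus a sparse uniform fallback so no region is left completely untraced.
std::vector<uint32_t> BuildTraceList(const std::vector<GBufferTexel>& g, int w, int h)
{
    std::vector<uint32_t> list;
    for (int y = 0; y < h; ++y)
        for (int x = 1; x < w; ++x)
        {
            const GBufferTexel& a = g[y * w + x];
            const GBufferTexel& b = g[y * w + x - 1];
            float nDot = a.nx * b.nx + a.ny * b.ny + a.nz * b.nz;
            if (nDot < 0.95f || std::fabs(a.depth - b.depth) > 0.05f || (x % 8 == 0))
                list.push_back(uint32_t(y * w + x));
        }
    return list;
}

void RenderSparseGI(const std::vector<GBufferTexel>& g, int w, int h,
                    std::vector<float>& giOut /* one value per pixel */)
{
    std::vector<uint32_t> traceList = BuildTraceList(g, w, h);

    // Trace only the selected pixels. In the real renderer this would be the
    // (ideally indirect) compute dispatch; a constant stands in for a ray result here.
    giOut.assign(std::size_t(w) * h, -1.f); // -1 marks "not traced"
    for (uint32_t idx : traceList)
        giOut[idx] = 1.0f;                  // placeholder radiance

    // Reconstruct the culled pixels; here just copy the nearest traced pixel on the row.
    for (int y = 0; y < h; ++y)
    {
        float last = 0.f;
        for (int x = 0; x < w; ++x)
        {
            float& v = giOut[y * w + x];
            if (v < 0.f) v = last; else last = v;
        }
    }
    // The full-res result would then be blurred / temporally filtered as usual.
}
```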
In that case, the reconstruction is based on the most important, high-frequency changes in the G-buffer/viewspace. That's not the same thing as the frequency changes in lightspace, which is something that isn't known until it's traced. The viewspace varies at the edges you detected, true, but that's no indication that lightspace will (or rather, it's a poor indicator; back to that in a second). But assumptions can be made, specifically that lighting samples applied to neighboring worldspace pixels will be similar. That's why it's not too much of a stretch to downsample and then screenspace raytrace.
The same should apply to any raytracing. Specifically, one could take a voxel-like structure of the screen, cascaded so the voxel size goes up as depth increases (thus reducing the sampling rate as scene complexity increases with depth). The voxels would otherwise be of equal size in all dimensions, depth as well as x and y, essentially downsampling the screen in three dimensions instead of two. Then, from the center of each voxel, trace and apply the results to the entire G-buffer portion that the voxel contains. Blurring or otherwise combining contributions from neighboring voxels would be needed to ensure a smooth change in lighting. Another interesting result would be valid temporal gathering over time: since you're essentially gathering light in worldspace and you know where each worldspace voxel is, you can just keep results from previous frames and keep re-using/adding to them.
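A quick sketch of what I mean by the binning (illustrative only; the cascade formula, voxel sizes and hash constants here are arbitrary choices):

```cpp
// Illustrative sketch of binning pixels into depth-cascaded worldspace voxels.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <vector>

struct VoxelKey
{
    int cascade, x, y, z;
    bool operator==(const VoxelKey& o) const
    { return cascade == o.cascade && x == o.x && y == o.y && z == o.z; }
};

struct VoxelKeyHash
{
    size_t operator()(const VoxelKey& k) const
    {
        return std::hash<int>()(k.cascade)
             ^ (std::hash<int>()(k.x) * 73856093u)
             ^ (std::hash<int>()(k.y) * 19349663u)
             ^ (std::hash<int>()(k.z) * 83492791u);
    }
};

// Bin a view-space position into a cascade whose voxel size grows with depth.
VoxelKey BinPosition(float vx, float vy, float vz,
                     float baseVoxel = 0.25f, float cascadeDepth = 10.f)
{
    float depth   = std::max(vz, 0.f);
    int   cascade = int(std::floor(std::log2(1.f + depth / cascadeDepth))); // coarser with distance
    float size    = baseVoxel * float(1 << cascade);
    return { cascade, int(std::floor(vx / size)), int(std::floor(vy / size)),
             int(std::floor(depth / size)) };
}

// Group pixel indices by voxel. Each group would get one trace from the voxel
// centre, and the result would be written back to every pixel in the group.
std::unordered_map<VoxelKey, std::vector<uint32_t>, VoxelKeyHash>
BinPixels(const std::vector<float>& viewPosXYZ /* 3 floats per pixel */)
{
    std::unordered_map<VoxelKey, std::vector<uint32_t>, VoxelKeyHash> bins;
    for (uint32_t i = 0; i * 3 + 2 < viewPosXYZ.size(); ++i)
        bins[BinPosition(viewPosXYZ[i * 3 + 0],
                         viewPosXYZ[i * 3 + 1],
                         viewPosXYZ[i * 3 + 2])].push_back(i);
    return bins;
}
```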
The downside to this is possibly dramatic lighting changes for geometry that comes in without samples from previous frames. The stuff temporal upsampling is usually applied to is relatively minor, such as shadow filtering and screenspace raytracing, so it can be hidden by the fact that the new geometry would be a bit motion blurred for a frame anyway. But large-contribution diffuse lighting might make it unusable, or at least unusable for progressive sampling over a lot of frames. Still, the idea of worldspace downsampling seems valid.
Regardless, what's presented is essentially a 2D image compression algorithm. And possibly an impressive one; what's the compression ratio of your input compared to the original? Anyway, in trying to only sample the pixels as you've done, you'd get dramatic and temporally incoherent lighting pops as light sampling skips large sections of the screen. E.g. a large, high-frequency luminance change might be totally valid in the middle portions of her dress, but since you're not sampling from those positions you'd not see it at all until it hits a valid pixel, and then POP! a dramatic lighting change happens. Or rather, indirect lighting can be as high frequency as direct lighting (just not as often); it wouldn't be valid to sample shadow maps only from your "important" pixels, as large sections would pop in and out of shadow coherently.
JPEG can do 10x-15x compression without noticeable changes, and this is at most 5x-8x for the same quality, so it's not that impressive as image compression.
You raise a good point about the validity of the Gbuffer as an estimator of illumination variance, and that’s something I want to test. Will see if I manage to get a test app working tomorrow.
I do plan to add the pixels from the previous frame in the reconstruction stage, so you reconstruct with the ones traced this frame plus the ones from the previous frame, to increase the sample count and improve temporal coherence.
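Something like this for the merge, assuming per-sample motion offsets and a simple depth test (all names and tolerances here are just assumptions, not the actual implementation):

```cpp
// Illustrative sketch: carry traced samples from the previous frame into the
// current frame's reconstruction, rejecting ones that got disoccluded.
#include <cmath>
#include <cstddef>
#include <vector>

struct GISample { int x, y; float value; float depth; };

void AddReprojectedSamples(const std::vector<GISample>& prevSamples,
                           const std::vector<float>& motionX, // per-sample offset to current frame
                           const std::vector<float>& motionY,
                           const std::vector<float>& currDepth, int w, int h,
                           std::vector<GISample>& currSamples)
{
    for (std::size_t i = 0; i < prevSamples.size(); ++i)
    {
        int nx = int(prevSamples[i].x + motionX[i] + 0.5f);
        int ny = int(prevSamples[i].y + motionY[i] + 0.5f);
        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;

        // Reject disoccluded samples: depth no longer matches where they land.
        float d = currDepth[ny * w + nx];
        if (std::fabs(d - prevSamples[i].depth) > 0.01f) continue;

        currSamples.push_back({ nx, ny, prevSamples[i].value, d });
    }
}
```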
Also, one could do a quick SSGI (maybe even just SSAO) pass at, say, 1/4 or even 1/8 resolution; that should provide a much more accurate estimator for selecting the pixels.
Another option is going multi-pass: generate a relatively small number of trace samples (based on the GBuffer), trace, analyze the traced image, and generate new sampling points for the areas that show the highest variation. That may be the best option actually; I just thought of it.
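Roughly like this (just a sketch; the 3x3 variation window and the budget handling are arbitrary):

```cpp
// Illustrative sketch: after a coarse first trace pass, pick extra pixels where
// neighbouring traced values disagree the most.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

std::vector<uint32_t> SelectSecondPassPixels(const std::vector<float>& firstPassGI,
                                             const std::vector<bool>& traced,
                                             int w, int h, std::size_t budget)
{
    std::vector<std::pair<float, uint32_t>> scored; // (local variation, pixel index)
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x)
        {
            if (traced[y * w + x]) continue; // already traced in the first pass
            float mn = 1e9f, mx = -1e9f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                {
                    int idx = (y + dy) * w + (x + dx);
                    if (!traced[idx]) continue;
                    mn = std::min(mn, firstPassGI[idx]);
                    mx = std::max(mx, firstPassGI[idx]);
                }
            if (mx >= mn) scored.push_back({ mx - mn, uint32_t(y * w + x) });
        }

    // Spend the remaining trace budget on the highest-variation spots.
    std::sort(scored.begin(), scored.end(),
              [](const std::pair<float, uint32_t>& a, const std::pair<float, uint32_t>& b)
              { return a.first > b.first; });
    std::vector<uint32_t> extra;
    for (std::size_t i = 0; i < scored.size() && i < budget; ++i)
        extra.push_back(scored[i].second);
    return extra;
}
```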
Of course, all of this assumes low-frequency GI. That's a common assumption to make, and the one that allows using some heavy blur (for example, I use two passes of a 13x13 depth-aware blur for AHR), but sure, as you say, it may break.
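For reference, here's the depth-aware blur idea sketched as a single horizontal pass (the kernel radius, falloff and depth tolerance are illustrative, not AHR's exact filter):

```cpp
// Illustrative sketch of a depth-aware blur (single horizontal pass). Samples
// across a depth discontinuity are skipped so lighting doesn't bleed over edges.
#include <cmath>
#include <vector>

void DepthAwareBlurHorizontal(const std::vector<float>& gi, const std::vector<float>& depth,
                              int w, int h, int radius, float depthTolerance,
                              std::vector<float>& out)
{
    out.assign(gi.size(), 0.f);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            float centreDepth = depth[y * w + x];
            float sum = 0.f, wsum = 0.f;
            for (int dx = -radius; dx <= radius; ++dx)
            {
                int sx = x + dx;
                if (sx < 0 || sx >= w) continue;
                if (std::fabs(depth[y * w + sx] - centreDepth) > depthTolerance)
                    continue; // don't blur across geometry edges
                float wgt = 1.f / (1.f + float(dx * dx)); // cheap falloff stand-in
                sum  += gi[y * w + sx] * wgt;
                wsum += wgt;
            }
            out[y * w + x] = (wsum > 0.f) ? sum / wsum : gi[y * w + x];
        }
}
```

A matching vertical pass (or simply running the filter again) would give the full 2D blur.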
So, I managed to do some early tests.
There are things to fix, and it'll probably take a while until it's in UE4 (mainly because I use indirect dispatch and UE4 has no support for it), but I think it works really, really well! At least a 4x increase in performance, even if it's at the cost of some image quality (or I can turn up the quality and still gain performance).
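For anyone wondering what indirect dispatch is: it just means the compute thread-group count is read from a GPU buffer instead of being set from the CPU, so an earlier pass can decide how much work to launch. A minimal D3D11 sketch (not UE4 code, and the buffer contents here are assumed):

```cpp
// Minimal D3D11 sketch of indirect dispatch. The args buffer is assumed to have
// been filled on the GPU with {groupsX, groupsY, groupsZ}.
#include <d3d11.h>

void DispatchSparseTrace(ID3D11DeviceContext* ctx,
                         ID3D11ComputeShader* traceCS,
                         ID3D11Buffer* argsBuffer)
{
    ctx->CSSetShader(traceCS, nullptr, 0);

    // Regular dispatch: the CPU fixes the thread-group count up front.
    // ctx->Dispatch(fixedGroupsX, 1, 1);

    // Indirect dispatch: the group count is read from argsBuffer, which an earlier
    // compute pass (e.g. the pixel-selection pass) wrote based on how many pixels
    // actually need tracing this frame.
    ctx->DispatchIndirect(argsBuffer, 0);
}
```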
I need to improve temporal filtering, but this is just some really early testing. Just to build some hype.
Cya all later
If you can fix that jittering when moving the camera and also get it into UE4 (what is indirect dispatch?) then that looks really promising!
Nice to see you still keep pushing it, keep up the good work!
Great work.
I tested out the GI system in "Realistic Rendering" and "Infiltrator" and it looked fantastic in each, as well as MUCH more optimized than VXGI.