Oh, it surely is. I just commented on it because it was also something that took time to consider. Now it will probably need a rework, if not be dropped entirely, since upsampling would produce a result similar, if not nearly identical, to the sharpening.
Which light angle are you using in this scene? I'm sure you posted it somewhere, but I was unable to find it.
Also, would very small angles, which make the shadows even more elongated, show the same performance? So both methods are equivalent in performance regardless of scene differences, right?
I use the default angle of 1 degree. When the angle goes up, both methods start to hurt from cache misses about equally. But I barely see any difference between 0 and 1 degrees.
I am not really sure whether I should bundle all my edits into a single PR or try to split them into small incremental improvements.
I already bought into the idea; it seems great. A single PR with statistics about the gains and pictures for the whole set of optimizations is more likely to get them to act quickly than separate ones. It also reduces the amount of testing they will have to do.
I tried your code and I see zero difference
I modified the shader file and pressed Ctrl+Shift+., and the log reports that some shaders changed and were recompiled
yet the result is exactly the same
this is how it looks on a scene with a sphere (default scale) and a DirectionalLight (default settings, just set to movable)
setting Shadow = 0; in line 191 makes the entire world shadowed, so the shader edits are really taking effect (and I’m really editing inside the correct shader define for PCSS)
but your changes still make no difference at all
This is the default Unreal sphere scaled to 50, and a directional light with all properties at their defaults but set to movable
@Kalle-H
I’m not sure whether you should merge everything into one large pull request or into separate minor changes. Epic had plans for further PCSS improvements, so they might or might not have something planned/drafted already. At least judging from . I’d probably separate performance-only changes from large alterations, like ditching Sobol.
@
PR 4503 by Kalle-H reduces the soft transition threshold depending on the angle between the surface and the light, and pushes acne a bit further into the shadow, but it will not cure it completely for large objects. We had a bit of a discussion about this in the GitHub comments.
[edit]
nevermind, I had failed to integrate the PR properly
I’ll report later with results
[edit]
ok, with a proper implementation I see the result. But yeah @ it’s only really relevant for small objects.
sadly it doesn’t help with what I’m working on (large-scale city builder)
The results are not 100% the same, but it’s hard to see the difference with that image. My screenshot was using the default sphere at a scale of 1. The only other difference is that my test screenshot was not from PCSS but from the default path.
Deathray also tested the PR and noticed that it surely helps but does not fix the whole problem.
I am currently mainly testing with PCSS shadows, and I just ditched DepthBiasPlaneNormal slope biasing because it caused some very nasty visible triangulation and severe shadow bleeding. It’s also not free: with my sample counts (40 blocker search and 32 PCF) it adds 182 assembly instructions. Removing it took ShadowProjection from 0.63ms to 0.58ms, almost 10% of the ShadowProjection time.
Random rotation is now using 4x4 ordered dithering.
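For context, here is a minimal sketch (not the actual PR code; the function and parameter names are illustrative) of how a 4x4 Bayer ordered-dither matrix can drive a per-pixel rotation of the filter kernel:

```hlsl
// Illustrative sketch only: derive a per-pixel kernel rotation angle from a
// 4x4 Bayer (ordered dither) matrix instead of a random/Sobol sequence.
float OrderedDitherRotation(uint2 PixelPos)
{
	// Standard 4x4 Bayer matrix, values 0..15.
	static const float BayerMatrix[16] =
	{
		 0.0,  8.0,  2.0, 10.0,
		12.0,  4.0, 14.0,  6.0,
		 3.0, 11.0,  1.0,  9.0,
		15.0,  7.0, 13.0,  5.0
	};
	uint Index = (PixelPos.y & 3u) * 4u + (PixelPos.x & 3u);
	float Dither = BayerMatrix[Index] / 16.0;  // normalize to [0, 1)
	return Dither * 6.2831853;                 // rotation angle in [0, 2*pi)
}

// Usage: rotate the blocker-search / PCF sample offsets per pixel.
// float SinA, CosA;
// sincos(OrderedDitherRotation(uint2(SVPos.xy)), SinA, CosA);
// float2x2 Rotation = float2x2(CosA, -SinA, SinA, CosA);
```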
Here is a screenshot of self-shadowing on a real game object. The object is 2.6 x 2.4 x 5.6 m and about 7k triangles.
I am totally with you on this. Despite being mentioned in almost every paper out there as a magical adaptive bias, it breaks on every edge and near the terminator line. For large kernels, assuming the whole surface is planar is still bad, and perhaps, as a piece of research, it might be feasible to precalculate the bias for several (4?) control points and interpolate between them when comparing samples. But at that point it becomes so bulky that nobody would ever dare to even think of using it in production. So adaptive RPDB can be used only in conjunction with other biasing techniques, not as a substitute.
The issue still stands firm though, and for PCSS with large penumbra sizes, biasing is even more important than for conventional PCF.
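For anyone following along, this is the textbook receiver-plane depth bias being discussed (a generic sketch of the Isidoro-style formulation, not the engine's DepthBiasPlaneNormal code; ShadowPosition, shadow-map UV plus depth, is an assumed input). The screen-space derivatives describe a single receiver plane, which is exactly why it falls apart at edges and depth discontinuities:

```hlsl
// Generic receiver-plane depth bias sketch (Isidoro-style), illustrative only.
// ShadowPosition = float3(shadow-map UV, shadow-space depth) for this pixel.
float2 ComputeReceiverPlaneDepthBias(float3 ShadowPosition)
{
	// Screen-space derivatives of the shadow-space position define the
	// receiver plane. This assumes one planar surface under the whole kernel,
	// which is what breaks at edges and depth discontinuities.
	float3 DuvDepthDx = ddx(ShadowPosition);
	float3 DuvDepthDy = ddy(ShadowPosition);

	float Det = DuvDepthDx.x * DuvDepthDy.y - DuvDepthDx.y * DuvDepthDy.x;

	// dZ/du and dZ/dv of the receiver plane in shadow-map UV space.
	float2 Bias;
	Bias.x = (DuvDepthDy.y * DuvDepthDx.z - DuvDepthDx.y * DuvDepthDy.z) / Det;
	Bias.y = (DuvDepthDx.x * DuvDepthDy.z - DuvDepthDy.x * DuvDepthDx.z) / Det;

	// Per filter tap: CompareDepth = ShadowPosition.z + dot(SampleOffsetUV, Bias);
	// Real implementations also clamp the bias, since Det can approach zero.
	return Bias;
}
```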
Ideally, the blocker search and PCF filter would need to use cone tracing. I just can’t get it to work. The idea is to only account for blockers that are actually blocking the current pixel and then calculate what percentage of the light circle’s area those blockers block. Currently all blocking samples are weighted equally in the visibility calculation, but samples that are near in depth cover more area, so they should have a bigger weight. I think if I can implement this, then biasing shouldn’t be that big of an issue.
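To make the weighting idea concrete, here is a rough, untested sketch (names are illustrative, not working code from this thread): each blocking sample gets a weight proportional to how much of the light disk it covers as seen from the receiver, rather than every blocker counting as 1. A blocker at depth Zb, viewed from a receiver at depth Zr, is magnified onto the light plane by roughly Zr / (Zr - Zb), so blockers close in depth to the receiver dominate.

```hlsl
#define NUM_BLOCKER_SAMPLES 40

// Rough sketch of depth-weighted blocker accumulation (illustrative only).
void WeightedBlockerSearch(float ReceiverDepth,
                           float BlockerDepths[NUM_BLOCKER_SAMPLES],
                           out float AvgBlockerDepth,
                           out float BlockerWeight)
{
	float WeightSum = 0.0;
	float DepthSum = 0.0;
	for (int i = 0; i < NUM_BLOCKER_SAMPLES; ++i)
	{
		float BlockerDepth = BlockerDepths[i];
		if (BlockerDepth < ReceiverDepth)
		{
			// Projection of the blocker sample onto the light plane, as seen
			// from the receiver; the occluded area scales with its square.
			float Magnification = ReceiverDepth / max(ReceiverDepth - BlockerDepth, 1e-4);
			float Weight = Magnification * Magnification;
			WeightSum += Weight;
			DepthSum += Weight * BlockerDepth;
		}
	}
	BlockerWeight = WeightSum;
	AvgBlockerDepth = (WeightSum > 0.0) ? DepthSum / WeightSum : ReceiverDepth;
}
```

The hard, unsolved part is the one described above: normalizing that weighted occlusion into a correct fraction of the light circle's area.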
Can someone confirm that the extra scaling by 0.5 is wrong? It seems that the code is just confusing diameter and radius. Also, CotanOuterCone could be precalculated.
I think the intention is to scale CotanOuterCone down to a radius form instead of a diameter (CotanOuterCone == diameter, right?); the multiplication takes precedence over the division, and the result must be a radius.
PS: that’s a hell of a place to put some comments in the code >.<
DeferredLightUniforms.SpotAngles.x is cos(OuterCone).
So the equation is:
cos(OuterCone) / sqrt(1.0 - cos(OuterCone)^2), which simplifies to cot(x) when x is positive. cot(x) is 1 / tan(x), which is the standard perspective-correction scale term. I don’t see any reason why the 0.5 is there.
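Spelled out in shader terms (only DeferredLightUniforms.SpotAngles.x is a real engine value here, per the post above; everything else is an illustrative local):

```hlsl
// DeferredLightUniforms.SpotAngles.x == cos(OuterCone), per the discussion above.
float GetCotanOuterCone()
{
	float CosOuterCone = DeferredLightUniforms.SpotAngles.x;
	float SinOuterCone = sqrt(1.0 - CosOuterCone * CosOuterCone);

	// cos(x) / sin(x) == cot(x) == 1 / tan(x): the standard perspective
	// correction scale term, which has no 0.5 in it.
	return CosOuterCone / SinOuterCone;
}
```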
Hmm… I see it now… Which issue would that fix? I mean, there is a reason you ended up in that part of the code, so the correct answer may be hiding behind the question: why are you there?
Also, as a side comment, sometimes people just multiply by 0.5 instead of dividing by 2. Modern compilers will usually make the best choice regardless of the intent; I’m not sure how the shader compiler handles that, though. Anyway, this might be copy-pasted code with an inherited error, or there may be a reason behind it that only a comment in the code could clear up (not the case here, unfortunately).
Your finding is solid; as for the repercussions of changing it, I don’t know. The usage further along would tell, I guess.
Considering that SpotAngles.x stores the cosine of half the outer spotlight angle, the 0.5 indeed seems incorrect here. Stuff can (and, as best practice, should) be precalculated, but it should get folded into a uniform by the compiler anyway.
Good to get a second opinion. Trigonometry isn’t my strong suit. I will send a PR to correct this.
I just noticed that stationary spotlight soft shadows are not as soft for dynamic objects as they are for static ones. Then I noticed that 0.5 multiplier, which seemed off. Then I did the math, and my assumption seemed to be correct. I used Lightmass as a reference so I can compare against ground truth.
Could the 0.5 be a multiplier because after the perspective projection the coordinates are in clip space (-1 to 1), but we need the radius in texture space (0 to 1)? I am not sure about this at all.
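If that is the intent, it would be the usual clip-space-to-texture-space remap, sketched below (function names are mine, not from the engine): positions get the 0.5 scale plus a 0.5 offset, while a radius, being a size rather than a position, only gets the 0.5 scale.

```hlsl
// NDC xy in [-1, 1] mapped to shadow-map UV in [0, 1]; Y flipped for
// D3D-style texture coordinates.
float2 NdcToShadowUV(float2 NdcPos)
{
	return NdcPos * float2(0.5, -0.5) + 0.5;
}

// A radius measured in NDC units is simply halved when expressed in UV units.
float NdcRadiusToUV(float NdcRadius)
{
	return NdcRadius * 0.5;
}
```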
PS: the Lightmass reference shows that even after removing the 0.5 scale, dynamic shadows were a bit too sharp.