It happens on two different PCs with different GPUs and the latest drivers. We only use the vanilla POM from 4.10, with no pixel depth offset or manual texture size. I'll try to build a minimal test case tomorrow at work.
I’m having a problem with Dynamic shadow cascades set to 4:
Does it relate to this?:
Description="Pixel Depth Offset is used to give accurate intersection with other meshes as well as shadows cast by other meshes. Will cause a problem with Dynamic Shadows since the original depths will shadow the new offset depths."
Is there a fix?
@ > Any update on those examples you were prepping a while back?
Yes, it was a volume decal; totally my bad if this is the wrong thread. The volume was just the noise node with some value scaling and such.
Re: marcomaryred
The PDO does not affect shadow casting yet, only shadow receiving. That means for now it is best not to use it with dynamic shadows unless you can disable shadow casting on the PDO meshes. As for why 3 cascades looked better than 4: probably with 4 the resolution got worse and therefore the self-shadowing changed in appearance. It does seem odd that the difference is so striking, but I don't think there is any bug there beyond the lack of PDO affecting shadow casting. Most likely one of them was just out of the ideal range for shadow density. It can happen with a variety of settings combos. Still, I will play around to see if I can notice anything.
Re: Examples
I have started on a map and should have it done this week. Thursday I am going to go on the livestream to talk about some Paragon tech art stuff. It looks like that will mostly be about POM and some procedural foliage stuff. I hope to use a version of the upcoming content examples level as part of the demonstration, so here's hoping I can get it done by then.
Awesome, looking forward to it
Wow! Can’t wait!
I just checked the starter content; is there no sample material for POM?
I believe it was mentioned that it would be in the 4.12 release!
Yes. The 4.11 preview was already released, and that means new content submissions were already locked down. It will be in 4.12. And the YouTube copy of the stream should be up in a few days as well, so maybe people can just refer to that until then.
Watched the YouTube version of the stream and it looks amazing! Thanks so much for the work put into it! Can you elaborate a bit on the status of your curvature experiments? It's kind of difficult to see the results from the video right now. For instance, if we have a corner of a wall, will we be able to invoke some kind of silhouette clipping on bricks at that corner without going through elaborate tweaks? I reckon if you manage to nail such a 'basic' but very important improvement in visual realism (comparable to displaced geometry, making these corners look so much better), we've already gained a huge amount!
Silhouette clipping like that is possible, but to do it without the curvature information in some way requires basically hard-coding where the UV boundaries are. It is very easy to say "did the ray go beyond X?". The trouble comes when you want X to be a different position for every mesh. And getting the corner to be seamless viewed edge-on will be tricky; I'm not sure about that part.
Doing it with the curvature in some 'light' fashion may be possible. I.e. it might require a very specific chamfer size with soft normals, and some vertex colors painted a specific amount only on those corner verts. Then you do a slight parabola on the ray and check if it ever goes back up above 0. If so, mask the result.
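Roughly something like this inside the ray march (untested, just to show the shape of the idea; CornerAmount, raydepth, stepsize and silhouettemask are placeholder names, with CornerAmount fed from the painted vertex color):

// Untested sketch only. raydepth starts at 0 at the surface and goes negative
// as the ray marches into the heightfield; silhouettemask starts at 1 outside the loop.
float dist = i * stepsize;                                // distance marched so far
float bentdepth = raydepth + CornerAmount * dist * dist;  // slight parabola bending the ray back up
if (bentdepth > 0)                                        // the bent ray went back above 0
{
    silhouettemask = 0;                                   // mask the result (multiply into opacity mask)
    break;
}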
I will dig out some examples later.
The ‘correct’ curvature I was messing with is a method that would work very well if the rate of curvature stays fairly constant as the ray goes through the volume. That is true for many curves like cylinders and spheres and thus would work for most rounded corners. It might run into trouble where the mesh curves one way then suddenly reverses direction. I think for those kinds of meshes, the only solution is to provide a vertex normal as well as tangent vector texture and raytrace them in lockstep so the ray can be re-transformed at each step.
That said, I have seen some people claiming to have solved it without requiring extra textures. The only other thing I can think of is to use ddx/ddy. I have tried for a while to solve it that way, but the main issue is facets appearing.
The basic gist of the math is that you have two principal curvatures. In our case we pretty much assume those occur along tangent X and tangent Y. If we know the radius of curvature for both axes, we can decompose the view vector (or just extract the components from the already-performed dots), which tells us how far the ray travels along each axis, then multiply each distance by that axis's curvature and sum the results before performing a vector rotation using the sum.
A cylinder has all curvature along 1 axis and none along the other. A sphere has equal curvature along both axes. A saddle has curvature flipped between X and Y. Those are the basic cases to solve.
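In rough HLSL terms it would look something like this per step (just a sketch of the idea, not the actual experiment; ViewTS, Kx, Ky, RayDir and stepdist are placeholder names, with Kx/Ky being 1/radius of curvature along tangent X and Y):

// Sketch only: rotate the marching direction each step by the summed curvature.
float2 axisdist = abs(ViewTS.xy) * stepdist;      // distance travelled along tangent X and Y this step
float turn = axisdist.x * Kx + axisdist.y * Ky;   // each axis distance times its curvature, summed
float s, c;
sincos(turn, s, c);
// Rotate the ray in the plane spanned by its horizontal travel direction and the height axis.
float2 slice = float2(length(RayDir.xy), RayDir.z);
slice = float2(slice.x * c - slice.y * s,
               slice.x * s + slice.y * c);
RayDir = float3(normalize(RayDir.xy) * slice.x, slice.y);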
Did you have any luck digging, Brian, or shall we have a small fundraiser to get you a bigger shovel?
Even if there are certain limitations or some extra steps involved, it would still make for a very desirable addition to POM.
Can’t wait to see what you come up with!
I spent some time with the internals of the POM node. I noticed that there is some low-hanging fruit for optimization. With small modifications I managed to get a couple of temporary variables out of the loop and simplified a lot of math, with identical results. This should help with GPR pressure. The biggest change is that the custom node now returns the offsetted UV instead of the offset. This causes some node changes, but these do not change the overall cost. It also fixes the first-iteration intersect with a white pixel, and the uninitialized variable (yintersect) bug when the ray didn't intersect. I don't understand the reasoning behind the +2 part, so I didn't change that.
Here is the modified custom code for non-shadowed POM:
float rayheight = 1;
float oldtex = 1;
int i = 0;
while (i < MaxSteps + 2)
{
    // Sample the heightmap channel at the current ray position.
    float texatray = dot(HeightMapChannel, Tex.SampleGrad(TexSampler, UV, InDDX, InDDY));
    if (rayheight <= texatray)
    {
        // Interpolate between the previous step and this one to find the exact hit.
        float xintersect = (texatray - rayheight) / (stepsize - oldtex + texatray);
        rayheight += stepsize * xintersect;
        UV -= xintersect * UVDist;
        break;
    }
    oldtex = texatray;
    rayheight -= stepsize;
    UV += UVDist;
    i++;
}
return float3(UV, rayheight);
Edit: Also, the last step of "UV to World Ratio" uses a division, but the output is then only used for division. Swapping the dividend and the divisor gets rid of those redundant divisions; they can be replaced with multiplications.
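I.e. something like this (placeholder names and values only, not the actual pins of the function):

float worldsize = 100;              // e.g. world units covered by the texture
float uvsize = 1;                   // UV span over the same distance
float2 offsetworld = float2(5, 5);  // some offset expressed in world units
// Before: the ratio is a divide and every consumer divides again.
float ratio = worldsize / uvsize;
float2 parallaxuv = offsetworld / ratio;
// After: swap dividend and divisor once, then every use becomes a multiply.
float invratio = uvsize / worldsize;
float2 parallaxuv2 = offsetworld * invratio;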
Hey,
Looks good. I started optimizing/rewriting a bit as well to take advantage of some tracing optimizations that Brian Karis found while writing screen space reflections. Basically the gist of his optimization is to perform the raytraces in groups of 4, which apparently really speeds up how the GPU handles these kinds of lookups.
The first step in that was rewriting it using more of a ray-length approach, and the result of that is pretty similar to what you have above. In my testing it was only a very minor perf difference from the current version, but it should be even faster once the 'vectorization' part is done. Here it is for anybody curious.
Rewritten just using Ray UVz (to make it easier to integrate the ssr method):
float SampleDepth, DepthDiff, LastDiff = 0;
float3 RayUVz = float3(UV, 1);
float3 RayStepUVz = float3(UVDist, -stepsize);
int i = 0;
while (i < MaxSteps + 1)
{
    SampleDepth = dot(HeightMapChannel, Tex.SampleLevel(TexSampler, RayUVz.xy, 0));
    DepthDiff = RayUVz.z - SampleDepth;
    if (DepthDiff < 0)
    {
        // Step back along the ray by the interpolated fraction of the last step.
        RayUVz -= RayStepUVz * (LastDiff / (LastDiff - DepthDiff));
        break;
    }
    LastDiff = DepthDiff;
    RayUVz += RayStepUVz;
    i++;
}
return float3(RayUVz.xy, RayUVz.z); // intersection height, same convention as the version above
Now for the very WIP 'vectorized' version… it has not been tested yet and there are still a bunch of extra temporaries I haven't removed (it uses the old ray method), but it should show where it's going:
float rayheight = 1;
float oldray = 1;
float2 curoffset = 0;
float oldtex = 1;
float texatray = 1;   // initialized so a hit on the very first sample has valid history
float yintersect = 1; // initialized so a ray that never intersects returns a valid height
int i = 0;
float4 offsets1, offsets2 = 0;
float4 raycheckheights = 0;
float4 texheights = 0;
while (i < MaxSteps + 2)
{
    // Set up 4 ray positions and 4 heightmap lookups per loop iteration.
    offsets1 = curoffset.xyxy + float4(1, 1, 2, 2) * UVDist.xyxy;
    offsets2 = curoffset.xyxy + float4(3, 3, 4, 4) * UVDist.xyxy;
    raycheckheights = rayheight - (float4(1, 2, 3, 4) * stepsize);
    texheights.x = Tex.SampleLevel(TexSampler, UV + offsets1.xy, 0).r;
    texheights.y = Tex.SampleLevel(TexSampler, UV + offsets1.zw, 0).r;
    texheights.z = Tex.SampleLevel(TexSampler, UV + offsets2.xy, 0).r;
    texheights.w = Tex.SampleLevel(TexSampler, UV + offsets2.zw, 0).r;
    bool4 hitmask = raycheckheights < texheights;
    [branch]
    if (any(hitmask))
    {
        // Select the earliest of the 4 samples that hit; the checks run from .w
        // down to .x so the earliest hit along the ray overwrites the others.
        float2 outoffset = 0;
        [flatten]
        if (hitmask.w)
        {
            outoffset = offsets2.zw;
            rayheight = raycheckheights.w;
            oldray = raycheckheights.z;
            texatray = texheights.w;
            oldtex = texheights.z;
        }
        [flatten]
        if (hitmask.z)
        {
            outoffset = offsets2.xy;
            rayheight = raycheckheights.z;
            oldray = raycheckheights.y;
            texatray = texheights.z;
            oldtex = texheights.y;
        }
        [flatten]
        if (hitmask.y)
        {
            outoffset = offsets1.zw;
            rayheight = raycheckheights.y;
            oldray = raycheckheights.x;
            texatray = texheights.y;
            oldtex = texheights.x;
        }
        [flatten]
        if (hitmask.x)
        {
            outoffset = offsets1.xy;
            // Need to use the previous set if the first value hits.
            oldray = rayheight;
            rayheight = raycheckheights.x;
            oldtex = texatray;
            texatray = texheights.x;
        }
        curoffset = outoffset;
        // Interpolate between the last miss and the hit for the exact intersection.
        float xintersect = (oldray - oldtex) + (texatray - rayheight);
        xintersect = (texatray - rayheight) / xintersect;
        yintersect = (oldray * xintersect) + (rayheight * (1 - xintersect));
        curoffset -= (xintersect * UVDist);
        break;
    }
    // No hit in this group of 4: advance to the last sample and continue.
    curoffset = offsets2.zw;
    rayheight = raycheckheights.w;
    texatray = texheights.w;
    i++;
}
float3 output;
output.xy = curoffset;  // accumulated UV offset
output.z = yintersect;
return output;
Oh, and the steps+2 was a special case to remove artifacts from textures that had either complete black or complete white in the heightmap. If I didn't let it run an extra step it would try to divide by 0 in the first case, and then another extra step was needed to keep it from ending exactly at 0… I tried some other methods but that was easier. I'm sure a more graceful solution exists.
Nice. A small additional optimization: int i can be a float, which also means you can remove the floor node outside.
Your new method still can't handle the first white pixel. That's not just an optimization but a matter of correct visuals: POM should never offset at all when the pixel isn't dented. Maybe add a small epsilon or use saturate.
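For example, roughly something like this at the top of the custom node, using the inputs from the first snippet (untested, epsilon value picked arbitrarily):

// Sample the start height first and skip the march entirely when the surface
// isn't dented, so a pure white heightmap pixel produces no offset at all.
float starttex = dot(HeightMapChannel, Tex.SampleGrad(TexSampler, UV, InDDX, InDDY));
if (starttex >= 1.0 - 0.0001)   // small epsilon against precision noise
{
    return float3(UV, 1);       // unchanged UV, full height
}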
Good day! Any update on a material to test POM? I tried the code from the other page, yet in front of the camera it looks weird… Maybe a ready-made material to learn by tweaking it? Thanks!
Hi,
The example content is done and will be available with 4.12 whenever that is.
Will the update to 4.12 contain the most recent optimizations from yourself and jenny? (Some super combo of the two snippets to get the vectorised + white pixel fix)
I’m still trying to figure out the math for the texture bombing + POM I tried here…
Rotating the camera vector around 0,0,0 (using the pseudo-random texture) didn't seem to do the job. I need a few hours to sit down and think about it…
I don’t think 4.12 will have all the optimizations but there is still time so I will try to get to it.
Don't worry, these optimizations are really quite tiny compared to the MAJOR material perf optimizations in 4.11. In prior versions the engine was actually computing POM from scratch for each material output, meaning if you used POM and then used the result as UVs for base color, spec, rough, and normal, it was actually doing POM from scratch 4x. Now the compiler has been fixed and it will only perform it once and share the UVs across the various inputs for the base pass. That optimization took a complex POM material from ~350-400 down to ~150 instructions and made a huge difference in how it runs on midrange video cards. Some scenes were barely running on my 680 at home (~15-20fps, complex POM materials using many texture blends) and after that change they run smoothly again. These other optimizations are small incremental ones compared to that.
You will actually need to rotate the camera vector after it has been transformed into tangent space, and you need to rotate it around 0,0,1, which is Z after being transformed. Additionally, you need to rotate the UVs by the same amount as the camera vector, and they need to match up for each 'splat' or 'bomb' or whatever you call them.
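Roughly like this (placeholder names only; RandAngle is whatever per-splat angle you pull from the pseudo-random texture, SplatCenterUV the centre of that splat, and the rotation signs may need flipping to match each other):

float s, c;
sincos(RandAngle, s, c);
// Rotate the tangent-space camera vector around Z (0,0,1) by the splat's angle.
float3 camTS = CameraVectorTS;
camTS.xy = float2(camTS.x * c - camTS.y * s,
                  camTS.x * s + camTS.y * c);
// Rotate the UVs by the same amount around the splat's centre so the parallax
// direction matches the rotated texture.
float2 uv = UV - SplatCenterUV;
uv = float2(uv.x * c - uv.y * s,
            uv.x * s + uv.y * c);
uv += SplatCenterUV;
// camTS.xy and uv then feed the POM trace for that splat.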
fwiw the version of POM checked in has no white pixel bug. That was just in the temp alternate version posted.