POM material

Ah works perfectly. Thank you very much!

I’ve tried it and it looks great, shadowing also looks good! Connecting the Pixel Depth Offset causes the entire material to go black but I’m sure it’s something I’ve missed.

Thanks again!

Try turning off cast shadows on the mesh. If you have dynamic shadows, pixel depth offset basically causes the mesh to self-shadow itself anywhere it's offset below the surface (which for most heightmaps is everywhere).

Is there a way to set it up so it can use float2/3/4 instead of a texture so I could use procedurally generated height maps?

Sure, that would be fairly easy actually. You would simply need to replace the texture lookup with the result of the math function evaluated at the position that gets shifted around by the loop. It could potentially be much faster.

But once you start getting into tracing math you may as well go fully raytraced distance fields, which aren't much different but can get better results with fewer steps. I actually made another node that can generate shapes from distance fields, but the code has to be entered into the custom node specifying the distance field. So it was a neat experiment that was not scalable. Then one of our rendering programmers made an "epic friday" project that does the same thing but allows you to input varying DF functions. No idea if/when that stuff will be made available, but I will try to post some of what I have at some point.

hi great stuff going on here.

I am combining this with a texture bombing material that I converted from here: http://http.developer.nvidia.com/GPUGems/gpugems_ch20.html. I have made it produce an infinite, non-repeating tiled texture.

The issue is that when I rotate the UVs, the parallax UV offset goes in the wrong direction. Just wondering which vector in the custom code I might have to transform.

I realise that it is an expensive operation because I have to do it 4 times for the texture bombing. But I wanted to try :)

no rotation

with rotation…

Here is an example of that: the mesh is a regular flat static mesh, and the box with the subtracted cylinder is entirely generated by the distance field code. Normals are also generated (and packed into the alpha as a single float, which is absurd, but the custom node only outputs a single float4 for now).

The grid texture is applied using the standard world-aligned texture material function, using the normal and world position returned by the raytracer.


All the h and b variables define the cylinder and box. "quickend" is just a hacky way of tracing into difficult corners with fewer steps, but it causes artifacts. The TempAA thing is purely experimental, a way to "fuzz" some of the step artifacts since I wasn't doing things correctly.

The really painful thing about this is that to generate the normal, the code has to be repeated 3 times, so the function is nasty: you have to update it in all three places to change anything.

Super experimental distance field raytracer function:


float3 p=View.ViewOrigin.xyz;
float3 CamVec = normalize(Parameters.WorldPosition-View.ViewOrigin.xyz);
float4 HitSpot=0;
float3 normal=0;
int HitMask = 0;
float3 offsets[3];
offsets[0]=float3(1,0,0)*noffset;
offsets[1]=float3(0,1,0)*noffset;
offsets[2]=float3(0,0,1)*noffset;

int i=0;
while(i<Steps)
{

//box and cylinder distance fields (combined as box minus cylinder)


float3 di = abs(p) - b;
float shape1= min(max(di.x,max(di.y,di.z)),0.0) + length(max(di,0.0));
//float shape1=length(max(abs(p)-b,0));
float2 d = abs(float2(length(p.xz-ShapeOffset.xz),p.y-ShapeOffset.y))-h;
float shape2= min(max(d.x,d.y),0)+length(max(d,0));
float CurDist= max(shape1,-shape2);
//float CurDist = shape1;

//HitMask=(CurDist <= TempAA ? 0:1);

if(CurDist <= TempAA)
{
float hitmiss = bias*(CurDist);
float3 o = p+(CamVec*hitmiss);
HitSpot.xyz=o;

//x

di = abs(o+offsets[0]) - b;
shape1= min(max(di.x,max(di.y,di.z)),0.0) + length(max(di,0.0));
//shape1=length(max(abs(o+offsets[0])-b,0));
d = abs(float2(length(o.xz-ShapeOffset.xz+offsets[0].xz),o.y-ShapeOffset.y+offsets[0].y))-h;
shape2= min(max(d.x,d.y),0)+length(max(d,0));
normal.x = CurDist*.1-max(shape1,-shape2);

//y
di = abs(o+offsets[1]) - b;
shape1= min(max(di.x,max(di.y,di.z)),0.0) + length(max(di,0.0));
//shape1=length(max(abs(o+offsets[1])-b,0));
d = abs(float2(length(o.xz-ShapeOffset.xz+offsets[1].xz),o.y-ShapeOffset.y+offsets[1].y))-h;
shape2= min(max(d.x,d.y),0)+length(max(d,0));
normal.y = CurDist*0.1-max(shape1,-shape2);

//z
di = abs(o+offsets[2]) - b;
shape1= min(max(di.x,max(di.y,di.z)),0.0) + length(max(di,0.0));
//shape1=length(max(abs(o+offsets[2])-b,0));
d = abs(float2(length(o.xz-ShapeOffset.xz+offsets[2].xz),o.y-ShapeOffset.y+offsets[2].y))-h;
shape2= min(max(d.x,d.y),0)+length(max(d,0));
normal.z = CurDist*0.1-max(shape1,-shape2);
break;
}

p+=CamVec*(max(MinStepSize,CurDist));
MinStepSize+=saturate(i-8)*QuickEnd;
i++;
}

normal=-normalize(normal);
//normal=clamp(normal, -0.999,0.999);//multiply is cheaper

float normalpack=sign(normal.z+0.000068);
normalpack*=floor((1+normal.x)*0.5*1000)+(0.99+normal.y)*0.499;
HitSpot.w = normalpack;
//HitSpot.xyz=normal;

return HitSpot;

It is probably better to wait for the other solution that lets you input the distance field function externally, but this is something to mess with for those curious.

I also made another one that raytraces the global distance fields of the world (thanks to Daniel's awesome nodes for returning the global value).


float3 p=View.ViewOrigin.xyz;
float3 CamVec = normalize(Parameters.WorldPosition-View.ViewOrigin.xyz);
float4 HitSpot=0;
float3 normal=0;

int i=0;
while(i<Steps)
{
//global distance field lookup
float CurDist= GetDistanceToNearestSurfaceGlobal(p);

if(CurDist <= TempAA)
{
HitSpot.xyz=p;
normal= GetDistanceFieldGradientGlobal(p);
break;
}

p+=CamVec*(max(MinStepSize,CurDist));
MinStepSize+=saturate(i-8)*QuickEnd;
i++;
}

normal=normalize(normal);
float normalpack=sign(normal.z+0.000068);
normalpack*=floor((1+normal.x)*0.5*1000)+(0.99+normal.y)*0.499;
HitSpot.w = normalpack;

return HitSpot;

This one is much easier to implement, but of course the result is very blobby because it uses the low-res global distance fields.

Of course, if I had bothered to take an image of a dynamic actor it would appear high quality through the "DF x-ray" material, since dynamic actors use their own full local distance field and not the global one.

You would need to replace the “world to tangent” transform with an “InverseTransformMatrix” node and supply the updated axes.

Instead of just using the tangent vectors, you would need to transform the tangent vectors into the rotated space. Or you could also try "rotating" the transformed camera vector around 0,0,0 (and remember to add it back to the input "position" if not using it in the world position offset input, which does not need that). Which is easier may depend on how you generate the rotated vectors in the first place. Is it some kind of pseudo-random rotation vector thing?

Yeah, it's looking up a noise texture and giving each 'stamp' a random rotation, translation and scale. Thanks for the tip, I'll give it a go.

The main drawback I am having now is that I incorrectly thought I would be able to make a stamped height map and use that to drive the stamped texture. But of course it's all UV manipulation, so I'm doing a parallax material function for each layer of bombing.

Yeah, doing it multiple times will probably be very expensive. I am curious to hear your results.

If you are using a noise texture, that will further compound the expense, since you add yet another level of dependent texture reads to the parallax loop. You could avoid that by using some pseudo-random numbers (i.e., take large prime numbers and multiply/divide/add things with fracs here and there to get pseudo-random numbers; lots of neat ideas to google).

Yeah I’ve done some similar simple hashing bits of code before so I might look at that.

The great thing about this is that on the landscape it uses the cascading shadow maps, and with the pixel depth offset the character shadows onto the POM correctly.

The actual bombing material itself is working nicely. If I overlay one layer of bombing (I actually prefer the term splat) over the original tiled texture, it hides the original seams nicely. These images are without POM, but you get the idea…

Hi,

Would that be the volumetric decals by ? I did notice a commit from about 6 or so days ago that looked interesting. It sounds like the same thing, as you can specify a distance field function to generate shapes and such:


If so, then it's available in my 4.8 branch, as I merged it a couple of days ago; so if anyone is keen to have a play around, you can get it here: https://.com//UnrealEngine/tree/4.8_NVIDIA_Techs

I am also planning on uploading my test level, which has a bunch of distance field material functions created for generating simple shapes and doing unions, subtractions and such.

Looks really cool. Is there anywhere that explains how to generate something similar?

I will include it with my demo; basically it was a cube subtracted from a torus, with noise animated over the top, all in the new volumetric decal shader.

Oh, by the way, I rotated the vectors with the built-in CustomRotator material function.

So I'm a bit unsure how to transform the camera vector so that it 'undoes' what CustomRotator does to the UVs. The UV rotation was in 2D UV space. Any tips would be much appreciated!

You say it’s simple but I barely even know what you’re on about T.T
And I don't think I'll be messing with raytraced distance fields any time soon lol

Could someone give me a more specific pointer to help me out?

Another idea im having…

If I apply a grass displacement map to the parallax shader, we get these spikes that 'kind of' look like blades of grass. Could it be possible to add an extra offset in each loop using a noise function to 'bend' the grass as it reaches the tip? (Actually you would do the inverse: start bent at the surface, work your way down to the ground, and release the strength of the noise to 0.) You could then have a world-space noise that moves through space and time to create beautiful, lush moving grass.
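That per-step bend could be sketched roughly like this (C translation of the idea; `windx`/`windy`/`noise` are placeholders for a world-space animated noise in the real material): on top of the normal UV step, each iteration adds a wind offset scaled by the current ray height, so the bend is strongest at the blade tip and fades to zero at the ground:

```c
/* One "grass bend" contribution inside a parallax loop. rayheight
   runs from 1 at the surface (blade tip) down to 0 at the ground,
   so the extra UV offset shrinks as the ray descends. */
static void bend_step(float rayheight, float windx, float windy,
                      float noise, float *u, float *v)
{
    *u += windx * noise * rayheight;
    *v += windy * noise * rayheight;
}
```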

Oooh very excited. Just got some nice swaying grass. Need to play with the noise but in theory its pretty cool!

I want to introduce a bias to the strength of the noise so that it creates a 'bend' as opposed to a linear directional change. It would be nice to have lookup curves in the material editor.

Just realised that I had the height map set to sRGB. doh.

Here's an update with it fixed, and some DOF 'cause all the cool kids are doing that these days.

All that is left is to do the work, but I don’t mind breaking it down step by step.

First, you need to create the math function that creates your gradient to test with. I am testing with a simple ‘spheremask’ type gradient which actually makes a cone shape as a heightmap:


float x = distance(frac(UV), 0.5);
x*=2;
x=1-x;
return x;

You can use any math to generate the gradient (meaning it could be float1, float2 for 2d, float3 for 3d noise etc) as long as the return value is a float at the sampled position.

Here’s the entire test material so far showing the cone is working:

Now all we need to do is take the parallax function and replace the texture lookup with the function, using the offset UV from the loop. To keep it simple we will deal with the Parallax-Only version of the code.

Here is the entire Parallax only code:


float rayheight=1;
float oldray=1;
float2 offset=0;
float oldtex=1;
float texatray=0;
float yintersect=0;
int i=0;

while (i<MaxSteps+1)
{
texatray=dot(HeightMapChannel, Tex.SampleGrad(TexSampler,UV+offset,InDDX,InDDY));

if (rayheight < texatray)
{
float xintersect = (oldray-oldtex)+(texatray-rayheight);
xintersect=(texatray-rayheight)/xintersect;
yintersect=(oldray*(xintersect))+(rayheight*(1-xintersect));
offset-=(xintersect*UVDist);
break;
}

oldray=rayheight;
rayheight-=stepsize;
offset+=UVDist;
oldtex=texatray;


i++;
}

float3 output;
output.xy=offset;
output.z=yintersect;
return output;

We just need to replace line 11, which is the texture lookup, with the math function. That line currently is:


texatray=dot(HeightMapChannel, Tex.SampleGrad(TexSampler,UV+offset,InDDX,InDDY));

If we instead replace it with our math function from before, but with the "+offset" applied to the UVs, it will just work.

That translates into these 3 lines:


texatray = distance(frac(UV+offset), 0.5);
texatray*=2;
texatray=1-texatray;

Example (I also created a normal map from the gradient to make it more obvious):

Notice the slight problem at the base. It's disconnected because I forgot to clamp the cone function.

Fixed:
Function_parallax_bugfix.JPG


texatray = distance(frac(UV+offset), 0.5);
texatray*=2;
texatray=saturate(1-texatray)+0.0001;

I also just realized that any texture with absolute black (0) in the heightmap can cause that artifact too (hence the +0.0001). I will need to find a more elegant fix for that, probably by adjusting the if statement somehow. Or I will just add the 0.0001 to the regular heightmap version as well. It doesn't seem to cause any other issues so far.

I will reply about the other CustomRotator stuff later. But if you are only rotating X and Y, that is essentially the same as rotating around the Z axis (0,0,1), so you should be able to use the RotateAboutAxis node. I am not exactly sure how CustomRotator works, so I will check it out later.

I'm probably not understanding what you mean. Would screen-aligned billboards give me a good grass effect? The way I'm doing it currently uses the artifacts from a high-frequency heightmap to my advantage. How were you thinking it would work? I'm doing it this way to avoid the need for textured cards!