Try turning off cast shadows on the mesh. If you have dynamic shadows, pixel depth offset basically causes the mesh to self-shadow itself anywhere it's offset below the surface (which for most heightmaps is everywhere).
Is there a way to set it up so it can use float2/3/4 instead of a texture so I could use procedurally generated height maps?
Sure, that would be fairly easy actually. You would simply need to replace the texture lookup with the result of the math function at the given position that gets shifted around by the loop. It could potentially be much faster.
But once you start getting into tracing math you may as well go all raytraced distance fields, which aren't much different but can get better results with fewer steps. I actually made another node that can generate shapes from distance fields, but the code has to be entered into the custom node specifying the distance field, so it was a neat experiment that was not scalable. Then one of our rendering programmers made an "epic friday" project that does the same thing but allows you to input varying DF functions. No idea if/when that stuff will be made available, but I will try to post some of what I have at some point.
Hi, great stuff going on here.
I am combining it with a texture bombing material that I converted from here: http://http.developer.nvidia.com/GPUGems/gpugems_ch20.html. I have modified it to produce an infinite non-repeating tiled texture.
The issue is that when I rotate the UVs, the parallax UV offset goes in the wrong direction. Just wondering which vector in the custom code I might have to transform.
I realise it is an expensive operation because I have to do it 4 times for the texture bombing, but I wanted to try.
no rotation
with rotation…
Here is an example of that. It is a regular flat static mesh, and the box with the subtracted cylinder is entirely generated by the distance field code. Normals are also generated (and packed into alpha as a float, which is absurd, but the custom node only outputs a single float4 for now).
The grid texture comes from the standard world aligned texture material function, using the normal and world position results from the raytracer.
All the h,b vars define the cylinder and box. "QuickEnd" is just a hacky way of tracing into difficult corners with fewer steps, but it causes artifacts. The TempAA thing is purely experimental, as a way to "fuzz" some of the step artifacts since I wasn't doing things correctly.
The really painful thing about it is that to generate the normal, the code has to be repeated 3 times, so the function is nasty since you have to update it in all places to make a change.
Super experimental distance field raytracer function:
float3 p=View.ViewOrigin.xyz;
float3 CamVec = normalize(Parameters.WorldPosition-View.ViewOrigin.xyz);
float4 HitSpot=0;
float3 normal=0;
int HitMask = 0;
float3 offsets[3];
offsets[0]=float3(1,0,0)*noffset;
offsets[1]=float3(0,1,0)*noffset;
offsets[2]=float3(0,0,1)*noffset;
int i=0;
while(i<Steps)
{
	//box distance
	float3 di = abs(p) - b;
	float shape1= min(max(di.x,max(di.y,di.z)),0.0) + length(max(di,0.0));
	//float shape1=length(max(abs(p)-b,0));
	//cylinder distance
	float2 d = abs(float2(length(p.xz-ShapeOffset.xz),p.y-ShapeOffset.y))-h;
	float shape2= min(max(d.x,d.y),0)+length(max(d,0));
	//subtract cylinder from box
	float CurDist= max(shape1,-shape2);
	//float CurDist = shape1;
	//HitMask=(CurDist <= TempAA ? 0:1);
	if(CurDist <= TempAA)
	{
		float hitmiss = bias*(CurDist);
		float3 o = p+(CamVec*hitmiss);
		HitSpot.xyz=o;
		//x
		di = abs(o+offsets[0]) - b;
		shape1= min(max(di.x,max(di.y,di.z)),0.0) + length(max(di,0.0));
		//shape1=length(max(abs(o+offsets[0])-b,0));
		d = abs(float2(length(o.xz-ShapeOffset.xz+offsets[0].xz),o.y-ShapeOffset.y+offsets[0].y))-h;
		shape2= min(max(d.x,d.y),0)+length(max(d,0));
		normal.x = CurDist*0.1-max(shape1,-shape2);
		//y
		di = abs(o+offsets[1]) - b;
		shape1= min(max(di.x,max(di.y,di.z)),0.0) + length(max(di,0.0));
		//shape1=length(max(abs(o+offsets[1])-b,0));
		d = abs(float2(length(o.xz-ShapeOffset.xz+offsets[1].xz),o.y-ShapeOffset.y+offsets[1].y))-h;
		shape2= min(max(d.x,d.y),0)+length(max(d,0));
		normal.y = CurDist*0.1-max(shape1,-shape2);
		//z
		di = abs(o+offsets[2]) - b;
		shape1= min(max(di.x,max(di.y,di.z)),0.0) + length(max(di,0.0));
		//shape1=length(max(abs(o+offsets[2])-b,0));
		d = abs(float2(length(o.xz-ShapeOffset.xz+offsets[2].xz),o.y-ShapeOffset.y+offsets[2].y))-h;
		shape2= min(max(d.x,d.y),0)+length(max(d,0));
		normal.z = CurDist*0.1-max(shape1,-shape2);
		break;
	}
	p+=CamVec*(max(MinStepSize,CurDist));
	MinStepSize+=saturate(i-8)*QuickEnd;
	i++;
}
normal=-normalize(normal);
//normal=clamp(normal, -0.999,0.999);//multiply is cheaper
float normalpack=sign(normal.z+0.000068);
normalpack*=floor((1+normal.x)*0.5*1000)+(0.99+normal.y)*0.499;
HitSpot.w = normalpack;
//HitSpot.xyz=normal;
return HitSpot;
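The w channel packs the normal: the sign carries the Z hemisphere, the integer part carries X quantized to 1000 steps, and the fractional part carries Y. For reference, a hedged sketch of the matching unpack (my own reconstruction, not code from the thread) could look like:

```hlsl
// Reconstruct the normal packed into HitSpot.w by the code above.
// Edge cases (y near -1, x at exactly 1) will break; it matches the
// admittedly hacky packing, which only exists because the custom node
// outputs a single float4.
float packed = HitSpot.w;
float zsign = sign(packed);
packed = abs(packed);
float nx = floor(packed) * 0.002 - 1.0;   // undo floor((1+x)*0.5*1000)
float ny = frac(packed) / 0.499 - 0.99;   // undo (0.99+y)*0.499
float nz = zsign * sqrt(saturate(1.0 - nx*nx - ny*ny));
float3 unpacked = normalize(float3(nx, ny, nz));
```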
It is probably better to wait for the other solution that lets you input the distance field function externally, but this is something to mess with for those curious.
I also made another one that raytraces the global distance fields of the world (thanks to Daniel's awesome nodes for returning the global value).
float3 p=View.ViewOrigin.xyz;
float3 CamVec = normalize(Parameters.WorldPosition-View.ViewOrigin.xyz);
float4 HitSpot=0;
float3 normal=0;
int i=0;
while(i<Steps)
{
	//global distance field lookup
	float CurDist= GetDistanceToNearestSurfaceGlobal(p);
	if(CurDist <= TempAA)
	{
		HitSpot.xyz=p;
		normal= GetDistanceFieldGradientGlobal(p);
		break;
	}
	p+=CamVec*(max(MinStepSize,CurDist));
	MinStepSize+=saturate(i-8)*QuickEnd;
	i++;
}
normal=normalize(normal);
float normalpack=sign(normal.z+0.000068);
normalpack*=floor((1+normal.x)*0.5*1000)+(0.99+normal.y)*0.499;
HitSpot.w = normalpack;
return HitSpot;
This one is much easier to implement, but of course the result is very blobby because it uses the low res global distance fields.
Of course, if I had bothered to take an image of a dynamic actor it would appear high quality through the "DF x-ray" material, since dynamic actors use their own full local distance field and not the global one.
You would need to replace the "world to tangent" transform with an "InverseTransformMatrix" node and supply the updated axes.
Instead of just using the tangent vectors, you would need to transform the tangent vectors into the rotated space. Or you could also try "rotating" the transformed camera vector around 0,0,0 (and remember to add back the input "position" if you are not using it in the world position offset input, which does not need it). Which is easier may depend on how you generate the rotated vectors in the first place. Is it some kind of pseudo random rotation vector thing?
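To illustrate the second option, here is a hedged sketch of spinning the tangent-space camera vector around Z by the negative of the UV rotation angle, so the parallax offsets land back in the rotated UV frame (RotationAngle, assumed in radians, and CameraVector are my own placeholder inputs, not nodes from the function above):

```hlsl
// Counter-rotate the tangent-space camera vector about the Z axis by the
// same angle used to rotate the UVs, so the POM loop marches in the
// rotated frame. A rotation about Z leaves the z component untouched.
float s = sin(-RotationAngle);
float c = cos(-RotationAngle);
float3 RotatedCamVec = float3(
	CameraVector.x * c - CameraVector.y * s,
	CameraVector.x * s + CameraVector.y * c,
	CameraVector.z);
```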
Yeah, it's looking up a noise texture and giving each "stamp" a random rotation, translation and scale. Thanks for the tip, I'll give it a go.
The main drawback I am having now is that I incorrectly thought I would be able to make a stamped height map and use that to drive the stamped texture. But of course it's all UV manipulation, so I'm doing a parallax material function for each layer of bombing.
Yeah, doing it multiple times will probably be very expensive. I am curious to hear your results.
If you are using a noise texture, that will further compound the expense, since you add yet another level of dependent texture reads to the parallax loop. You could avoid that by using some pseudo random numbers instead (i.e. take large prime numbers and multiply/divide/add stuff with fracs here and there to get pseudo random numbers; there are lots of neat ideas to google).
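As a sketch of that idea (my own example, not code from the thread), a classic frac/sin hash can stand in for the noise texture lookup, assuming CellId is the integer stamp coordinate (e.g. the floor of the bombing UV):

```hlsl
// Cheap per-cell pseudo-random value in [0,1). Large constants plus frac
// replace a noise texture read, avoiding a dependent texture fetch
// inside the parallax loop.
float Hash(float2 CellId)
{
	return frac(sin(dot(CellId, float2(12.9898, 78.233))) * 43758.5453);
}
```

Calling it with offset cell ids (e.g. CellId + float2(1, 7)) gives independent values for rotation, translation and scale.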
Yeah, I've done some similar simple hashing bits of code before, so I might look at that.
The great thing about it is that on the landscape it uses the cascading shadow map, and with the pixel depth offset the character shadows into the POM correctly.
The actual bombing material itself is working nicely. If I overlay one layer of bombing (I actually prefer the term splat) over the original tiled texture, it hides the original seams nicely. These images are without POM but you get the idea…
Hi ,
Would that be the volumetric decals by ? I did notice a commit from about 6 or so days ago that looked interesting. It sounds like the same thing, as you can specify a distance field function to generate shapes and such:
If so, then it's available in my 4.8 branch, as I merged it a couple of days ago. So if anyone is keen to have a play around, you can get it here: https://.com//UnrealEngine/tree/4.8_NVIDIA_Techs
I am also planning on uploading my test level, which has a bunch of distance field material functions created for generating simple shapes and doing unions, subtractions and such.
Looks really cool. Is there anywhere that explains how to generate something similar?
I will include it with my demo. Basically it was a cube subtracted from a torus, with noise animated over the top, all in the new volumetric decal shader.
Oh, by the way, I rotated the vectors with the inbuilt CustomRotator material function.
So I'm a bit unsure how to transform the camera vector so that it "undoes" what the CustomRotator does to the UVs. The UV rotation was in 2D UV space. Any tips would be much appreciated!
You say it's simple, but I barely even know what you're on about T.T
And I don't think I'll be messing with raytraced distance fields any time soon lol.
Could someone give me a more specific pointer to help me out?
Another idea I'm having…
If I apply a grass displacement map to the parallax shader, then we get these spikes that "kind of" look like blades of grass. Could it be possible to add an extra offset in each loop, using a noise function, to "bend" the grass as it reaches the tip? (Actually you are doing the inverse: start bent at the surface, work your way down to the ground, and release the strength of the noise to 0.) You could then have a world space noise that moves through space and time to create beautiful moving lush grass.
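A hedged sketch of what that extra offset could look like inside the parallax while-loop (WindDir, WindSpeed and BendStrength are made-up parameters, and the sine is standing in for a real noise function):

```hlsl
// Added inside the parallax loop, after rayheight is decremented.
// rayheight runs 1 -> 0 from the surface down to the ground, so scaling
// by it bends the tips the most and fades the bend to zero at the roots,
// as described above.
float sway = sin(dot(Parameters.WorldPosition.xy, float2(0.02, 0.02))
	+ View.RealTime * WindSpeed);
offset += WindDir * sway * BendStrength * rayheight;
```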
Oooh, very excited. Just got some nice swaying grass. I need to play with the noise, but in theory it's pretty cool!
I want to introduce a bias to the strength of the noise so that it creates a "bend" as opposed to a linear directional change. It would be nice to have lookup curves in the material editor.
Just realised that I had the height map set to sRGB. Doh.
Here's an update with it fixed, and some DOF 'cause all the cool kids are doing that these days.
All that is left is to do the work, but I donât mind breaking it down step by step.
First, you need to create the math function that creates your gradient to test with. I am testing with a simple "spheremask" type gradient, which actually makes a cone shape as a heightmap:
float x = distance(frac(UV), 0.5);
x*=2;
x=1-x;
return x;
You can use any math to generate the gradient (meaning the input could be float1, float2 for 2D, float3 for 3D noise, etc.) as long as the return value is a float at the sampled position.
Here's the entire test material so far, showing the cone is working:
Now all we need to do is take the parallax function and replace the texture lookup with the function, using the offset UV from the loop. To make it simpler, we will deal with the Parallax-Only version of the code.
Here is the entire Parallax-only code:
float rayheight=1;
float oldray=1;
float2 offset=0;
float oldtex=1;
float texatray;
float yintersect=0;
int i=0;
while (i<MaxSteps+1)
{
	texatray=dot(HeightMapChannel, Tex.SampleGrad(TexSampler,UV+offset,InDDX,InDDY));
	if (rayheight < texatray)
	{
		float xintersect = (oldray-oldtex)+(texatray-rayheight);
		xintersect=(texatray-rayheight)/xintersect;
		yintersect=(oldray*(xintersect))+(rayheight*(1-xintersect));
		offset-=(xintersect*UVDist);
		break;
	}
	oldray=rayheight;
	rayheight-=stepsize;
	offset+=UVDist;
	oldtex=texatray;
	i++;
}
float3 output;
output.xy=offset;
output.z=yintersect;
return output;
We just need to replace the texture lookup inside the loop with the math function. That line currently is:
texatray=dot(HeightMapChannel, Tex.SampleGrad(TexSampler,UV+offset,InDDX,InDDY));
If we instead replace it with our math function from before, but with the "+offset" added to the UVs, it will just work.
That translates into these 3 lines:
texatray = distance(frac(UV+offset), 0.5);
texatray*=2;
texatray=1-texatray;
Example (I also created a normal map from the gradient to make it more obvious):
Notice the slight problem at the base. It's disconnected because I forgot to clamp the cone function.
Fixed:
texatray = distance(frac(UV+offset), 0.5);
texatray*=2;
texatray=saturate(1-texatray)+0.0001;
I also just realized that any texture that has absolute black (0) in the heightmap can cause that artifact too (hence the +0.0001). I will need to find a more elegant fix for that, probably by adjusting the if statement somehow. Or I will just add the 0.0001 to the regular heightmap version as well. It doesn't seem to cause any other issues so far.
I will reply about the other CustomRotator stuff later. But if you are only rotating the X and Y, that is essentially the same as rotating around the Z axis (0,0,1), so you should be able to use the RotateAboutAxis node. I am not exactly sure how CustomRotator works, so I will check it out later.
I'm probably not understanding what you mean. Would screen aligned billboards give me a good grass effect? The way I'm doing it currently is using the artifacts from a high frequency heightmap to my advantage. How were you thinking it would work? I'm doing it this way to avoid the need for textured cards!
Thanks , I had a hunch that it might have had something to do with that part, but overall I still had no idea.
I'm plugging the color in from elsewhere, so would it be as simple as adding a new input, say Color (couldn't think of a more appropriate name), and then in the code just have it like so:
texatray = Color+offset;
(I'm most likely missing the importance of
I very much appreciate you taking the time to explain in detail.
Overall, I can't expect you to tell me how to do everything, so are there any recommended resources in regards to creating shaders and the math behind them?
, that is indeed very cool. I could swear I've seen a similar use of POM with grass in GTA V.
Also, so you don't have a floating character, would you have to move the plane up a touch and use the Pixel Depth Offset connection?