POM material

Thanks, I had a hunch it might have had something to do with that part, but overall I still had no idea.
I’m plugging in the color from elsewhere, so would it be as simple as adding a new input, say Color (couldn’t think of a more appropriate name), and then in the code just have it like so:


texatray = Color + offset;

(I’m most likely missing the importance of …)

I appreciate you taking the time to explain in detail.
Overall, I can’t expect you to tell me how to do everything, so are there any recommended resources on creating shaders and the math behind them?

That is indeed very cool; I could swear I’ve seen a similar use of POM with grass in GTA V.
Also, so you don’t have a floating character, would you have to move the plane up a touch and use the Pixel Depth Offset connection?

thanks :slight_smile:

I would move the character’s skeletal mesh down or the collision capsule up (one or the other). I am using pixel depth offset here; the great thing about it is that it casts proper dynamic shadows into the POM.

Actually, just the skeletal mesh down. The collision capsule is the root component, so moving it up wouldn’t work.

A bit off topic: is there a way to have a material node for back faces? I want to implement some volume rendering code (ray marching), and the technique needs the bounding box rendered as RGB in two passes, like …

Is that possible with the deferred rendering system? I’ve tried TwoSidedSign with a two-sided translucent material, but I don’t think it works that way.
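For what it’s worth, the front-face and back-face passes are just a way to record the ray’s entry and exit points on the bounding box; the same interval can be computed analytically with the standard slab test. Here is a CPU-side Python sketch of that idea (function name and signature are mine, not engine API):

```python
def ray_box_interval(origin, direction, box_min, box_max):
    """Slab test: return (t_enter, t_exit) for a ray against an axis-aligned box,
    or None if the ray misses. This is the information the two
    front-face / back-face passes encode as RGB positions."""
    t_enter, t_exit = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if not (lo <= o <= hi):
                return None  # parallel to this slab and outside it
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        t_enter = max(t_enter, min(t0, t1))
        t_exit = min(t_exit, max(t0, t1))
    if t_enter > t_exit or t_exit < 0.0:
        return None  # the slabs' intervals don't overlap, or the box is behind us
    return (max(t_enter, 0.0), t_exit)

# Ray along +X entering a unit box: march between t = 2 and t = 3.
print(ray_box_interval((-2.0, 0.5, 0.5), (1.0, 0.0, 0.0),
                       (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))  # → (2.0, 3.0)
```

Once you have the entry and exit points, you only step the ray inside that interval, which is exactly what the two rendered box passes give you per pixel.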

That value is purely for the height of the surface where the ray is being sampled.

The function returns the necessary coordinate offsets as the “Parallax UVs” or “offset only” value. So you just need to use those offsets to look up your color function. You can easily replace the concept of “UVs” with world position if it makes it easier to understand. Basically, that can exist external to the function.
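In other words, the “offset only” output just gets added to whatever coordinates drive your color. A CPU-side Python sketch of that idea (names are illustrative, not actual node names):

```python
def sample_color(color_func, base_uv, parallax_offset):
    """Apply the parallax offset to the base UVs, then evaluate any
    coordinate-driven color function at the shifted location."""
    u = base_uv[0] + parallax_offset[0]
    v = base_uv[1] + parallax_offset[1]
    return color_func(u, v)

# Example: a simple horizontal gradient standing in for the color function.
gradient = lambda u, v: (u, u, u)
print(sample_color(gradient, (0.5, 0.5), (0.1, 0.0)))  # sampled at u = 0.6
```

The point is that the color lookup is entirely external to the POM function; it only needs the shifted coordinates.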

Here is an example where I used the “material complexity” gradient and sampled it by the other cone gradient. You can use any kind of coordinate-defined color map this way.

Here it is hooked up to the simple non-parallax material, where I first create the height gradient using SphereMask and then use the resulting ‘1D coordinates’ as a lookup for a ‘material complexity’ type color blending function. It could be anything; I just used the cone since it would make a nice height-layered effect on the final result.

Using it with parallax simply requires feeding the function the same height gradient and then sampling with its “Parallax UVs”:

Notice that we use the same original math to create the cone gradient, sampled using the Parallax UVs. Then we sample the color function with that updated gradient, and it looks like this:

The parallax function actually does output the final height of the ray intersection, but currently the material function converts that Z value into world space for pixel depth offset, which is why in the above image I had to re-sample the height gradient using the final parallax UVs. Probably not a big deal.

Regarding PixelDepthOffset and floating characters, you will always have some error; how you handle it depends on the type of texture. I think the most common use case would be stone floors and brick walls. If the gaps between the stones are small enough that you wouldn’t question walking on them in real life without discomfort, there should be no problem. If you are talking about pixel depth offset for some huge gnarly terrain effect, then yes, maybe you can offset the plane, but then you may have the opposite problem where the character gets clipped by parts of it. If your heightmap has lots of low-frequency negative space, it may be better to model out the low frequencies using very low resolution geometry and then use POM to make the finer details pop.

That POM grass looks like it’d work great for far-away views in 3rd-person games or RTS-style games. I might try mixing it with grass planes for lawn grass and see if I can get a cheaper and/or better-looking result than either one on its own.

Yeah, I’d thought the same thing. Have the POM as the underlying short layer and then use the landscape grass type to spawn some instances sparsely to blend in with it. These things are always greater than the sum of their parts. I had a look at the Kite demo project and it’s a great example of how to use assets together, technically and creatively, to make a beautiful image.

Hey @, is there any way to set the heightmap to Shared: Wrap so it doesn’t cause my landscape components to gray out when they reach the texture limit?

Ah ok, I get it (I think), thanks.

Does new POM now also support ‘silhouette clipping’ so we can have nice pom’d edges? :rolleyes:

No. I don’t think that it’s very easily supportable in a robust fashion. The standard implementation only works for simple square UVs, and then you need to define the UV borders for every single instance in the world (using a material instance or some such). In addition, you would need to define the coordinates or slope of any angled edges. It’s more a neat thing to turn on for screenshots than something practical in real use cases. You can try it out by simply checking whether the parallax UVs are outside of a desired range.
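The “try it out” version amounts to this check, sketched here in Python (in the material you’d feed the result into the opacity mask; the border values are per-instance assumptions):

```python
def outside_uv_border(parallax_uv, uv_min=(0.0, 0.0), uv_max=(1.0, 1.0)):
    """True if the parallax ray walked past the defined UV border,
    i.e. the pixel should be masked out to fake a silhouette."""
    u, v = parallax_uv
    return not (uv_min[0] <= u <= uv_max[0] and uv_min[1] <= v <= uv_max[1])

print(outside_uv_border((1.2, 0.5)))  # True: the ray left the tile, clip it
print(outside_uv_border((0.5, 0.5)))  # False: keep the pixel
```

This is why it only works for simple square UV layouts: the border test assumes an axis-aligned rectangle in UV space, so angled edges need their own per-instance definition.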

I think in general pixel depth offset provides a more robust way to achieve the same thing. Anywhere the floor meets another edge you will see the intersection. Instead of leaving brick unbounded, just use trim.

I’m going to investigate and see if we can adapt that technique for tree silhouettes…

https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch04.html

I seriously have no idea how you are able to convert that into something in UE4.
People are amazing…

That will be interesting.

The challenge as I see it is that they were creating fin geometry dynamically based on the geometry at the silhouette. Without C++ code, you may have to be creative and use the built-in mesh splines somehow. Or use some kind of Pivot Painter-like painted logic with UVs or vertex colors so the built-in strips know the width/position of the tree at all points. Or somehow virtually shrink the geometry using virtual coordinates for the main trunk so you can use the edge as a virtual plane for the fins. Lots of things come to mind to try, but they all seem to require a lot of work.

Crysis 3 has silhouette rendering like that.
I believe it’s based on : http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.89.5044&rep=rep1&type=pdf

Ray Marching for Volume Rendering was mentioned above, and I figured I’d save him some time, since I did this a little while ago just after we got texture sampling in custom nodes.


Here are the assets:


And the custom node (which is kind of messy, but it was my first custom node):


float totalValue = 0.0;
int steps;
float3 p;          // current sample position relative to the volume center
int slice;         // which horizontal slice of the atlas this sample falls in
float2 cornerpos;  // UV of that slice's top-left corner in the atlas
float2 coord;      // final atlas UV for this sample
for (steps = 0; steps < MaximumRaySteps; steps++)
{
    // March along the ray in fixed-size steps, in the volume's local space.
    p = (from + (direction * StepLength * steps)) - position;
    // Pick the slice from z; slices are stored in a rows x columns grid,
    // with z = 0 recentered onto the middle slice.
    slice = ((p.z / height) * (rows * columns)) + (rows * columns / 2);
    cornerpos = float2((slice % columns) / columns, floor(slice / columns) / rows);
    coord = cornerpos + float2(((p.x / width) + 0.5) / columns, ((p.y / length) + 0.5) / rows);
    // Only accumulate when the sample is inside the box and in front of the
    // scene depth (summing the comparisons counts how many tests failed).
    if ((abs(p.x) > width / 2) + (abs(p.y) > length / 2) + (abs(p.z) > height / 2) + (distance(p + position, CameraOrigin) > Depth) < 1)
        totalValue += Texture2DSample(Tex, TexSampler, coord).r * Density;
}
return totalValue;
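The fiddliest part of that node is the pseudo-volume lookup: mapping a 3D point to UVs in a 2D atlas of slices. Here is a CPU-side Python copy of that coordinate math (same variable meanings as the node; the function name is mine):

```python
def atlas_uv(p, width, length, height, rows, columns):
    """Map a point p (relative to the volume center) to UVs in a 2D atlas
    whose rows*columns tiles store horizontal slices of the volume,
    mirroring the slice/cornerpos/coord math in the custom node."""
    x, y, z = p
    # Pick the slice index; the + rows*columns/2 recenters z = 0
    # onto the middle slice (int() truncates, matching the HLSL int cast).
    slice_idx = int((z / height) * (rows * columns) + rows * columns / 2)
    # Top-left corner of that slice's tile in the atlas.
    corner_u = (slice_idx % columns) / columns
    corner_v = (slice_idx // columns) / rows
    # Offset within the tile from x and y, remapped from [-0.5, 0.5] to [0, 1].
    return (corner_u + ((x / width) + 0.5) / columns,
            corner_v + ((y / length) + 0.5) / rows)

# Center of a unit volume in a 4x4 atlas lands in slice 8, tile (0, 2).
print(atlas_uv((0.0, 0.0, 0.0), 1, 1, 1, 4, 4))  # → (0.125, 0.625)
```

Each ray step does exactly this conversion before the 2D texture fetch, which is how a plain Texture2DSample ends up acting like a 3D volume lookup.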

It’s pretty basic; I look forward to seeing how volume rendering evolves in UE4 (the distance field ray marching materials look sweet)!

Could anyone please upload .uasset files? :slight_smile: I tried to copy the material from the post some pages back, but I couldn’t paste it in the material editor.

Nice. Although I managed to get my own working the other day using probably the same technique…

The main POM stuff is a material function and not an actual material.
The code in post 203 works fine for me as well.

I’ve actually got a crazy idea for doing procedural trees using distance fields from curves. During my day job I’m often doing crazy volume compositing using distance fields and noise, mostly for clouds, but it would be fun to have a curve that generated a distance field that could then have a raytraced surface shader. I’ve got a procedural geometry setup that generates slices in back-to-front order so that translucent compositing works from any angle. It’s slightly faster than raytracing. It was made to render volumes, but I think I could use it to render isosurfaces as well.

Probably won’t work, but we’ve got to at least come up with some crazy ideas.