POM material

I wasn’t testing in the main build; the version I used has a bunch of other changes, so it’s possible something was just broken with decals when I tried. I will test in main. It should be possible to get the local transform of the decal at least.

Let’s hope for the best. Sadly I can’t verify whether my memory serves me right, since I started the project from scratch and went straight to your POM; there might have been changes made to the code since then that make it work properly with decals.

It looks like all of the usual transform nodes are failing to return anything valid for decals. I cannot imagine how an out of the box solution could have worked with decals, since the camera vector needs to be transformed into tangent space.

I was able to get part of the way there by deriving a tangent space, but unfortunately it is only possible to derive a tangent basis for the decal’s own plane, because an engine bug is preventing translucent decals from reading the scene texture (I entered a jira for that). That means it’s possible to have correct POM on a decal that is mostly flat, but if the decal goes around a corner or something, the parallax angle will be off by the angle between the decal actor and the receiving surface.

I will post back the math on deriving the second tangent basis soon (building right now).

Then most likely it’s me remembering it wrong. The only thing I could find was this one:
http://i.imgur.com/x9TeTswl.png
I’m sure that rotating it worked (for example, to match a wall), but I don’t think I tried to apply the decal to a corner or any other non-planar surface.

Hi, I am having problems with POM because the silhouette keeps shifting depending on the camera angle. It is particularly noticeable where the silhouette intersects the ground.
For example, a cobblestone POM material is applied to a plane, and there is another flat plane above the cobbles so the POM cobbles half poke through it; when the camera moves, the cobbles shift about like they are moving. It looks silly.

What could I do? Is there a way to help the silhouettes stay stationary like real geometry would?
Thanks

Can you post an example of the type of artifact you are seeing? Are you talking about how it takes too many steps to look good up close, and you can see ‘ripples’ in the parallax?

If so, I have been thinking about how to improve that. My initial idea of going back to the sphere ray tracer did not work so well. I then tried another attempt using Temporal AA to offset the number of steps a bit. It shows very promising results for not much cost.

Before (showing the bad ripples, with 48 max steps and 8 min steps):

Using Temporal AA to randomly increase the number of steps by up to 1.2x:

You can still see a bit of noise where the artifacts were the strongest, but that is only after letting the camera sit still. In motion it looks pretty good.

Material nodes:

The results are actually about the same whether using 0.8 or 1.2 as the Temporal AA multiplier, so it’s not all about the step increase (although 1.2 does look slightly sharper). Getting the steps to jump around causes the ripples to be out of phase with each other. Increasing the number of steps by 1.2x does not reduce the ripple artifacts by much; they just get 20% smaller.
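A minimal sketch of the step-jitter idea described above (all names are my own, and the per-pixel jitter value is a stand-in for the Temporal AA random output): the step count is scaled by a per-frame random factor up to the 1.2x multiplier, so the ripple artifacts land at a different phase each frame and get averaged away by temporal AA.

```python
# Hedged sketch: 'jitter' stands in for the per-pixel Temporal AA random
# value in [0, 1); 'view_lerp' is the usual view-angle-based lerp between
# the min and max step counts. Names are illustrative, not the real nodes.
def jittered_steps(min_steps, max_steps, view_lerp, jitter, mult=1.2):
    base = min_steps + (max_steps - min_steps) * view_lerp
    # scale the step count by up to 'mult' depending on this frame's jitter
    return int(base * (1.0 + (mult - 1.0) * jitter))

print(jittered_steps(8, 48, 1.0, 0.0))  # 48: no jitter this frame
print(jittered_steps(8, 48, 1.0, 1.0))  # 57: full 1.2x this frame
```

The point is not the extra steps themselves (0.8 works about as well as 1.2); it is that the step count, and therefore the ripple phase, changes every frame.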

You are truly amazing. Thanks for the information, it’s more than just “quite useful”.

Amazing job!
Can we get some examples (best practices for various situations/materials)?
That way we’d have a guide or template to ease and simplify things.

Again, thanks for your work!

Yes, I have an example project in the works. It is most likely going to make it for 4.10, but there is a slight chance it will have to wait until 4.11 due to the way 4.10 is branched from 4.9 and not main.

It will look something like this, although probably converted into a ContentExamples-type hallway: a few materials with varying levels of options enabled for each, such as Pixel Depth Offset and Shadows:

Also there has been some progress on POM for curved surfaces. I made a debug version of POM that renders a 2d cross section with dots for each step location. Then by creating a curved ring mesh I can debug exactly what it looks like when a ray traces through a curved surface and make sure the math corrections result in a straight line.

To show why curved surfaces do not work with the current simple POM, look at this image:

Here we have a POM material applied to the checkerbox cylinder, and a cross section of a single ray through that cylinder is displayed using a translucent mesh. Using an editable widget I can drag the cross section around in realtime and see where the ray hits for any part of the heightmap, as well as rotate the tracing angle. The green line is the initial starting ray, which can be rotated via a BP vector widget.

The thing to take note of here is that the line that gets traced in the POM material is curved, as shown by the red dots (and the yellow is where the POM material found an intersection). But if you look at the top (which shows a non-curved version of the exact same material), you can see that the ray was actually straight in UV texture space.

If you line up the camera with the debug line, you can see how the bending ray causes the POM material to intersect where the dots indicate, near the bottom corner:

The thing to solve is how much to bend the ray to counteract the curving that the heightmap is doing. For that I am using a radius of curvature defined in UV space. My test cylinder does exactly 1 UV wrap around the cylinder to keep things simple. That means the circumference of the circle is 1 in UV terms, and since circumference = 2pi*r, a circumference of 1 gives a radius of curvature of r = 1/(2pi). For simplicity I actually use the inverse of the radius, 2pi, which is the length of a full circle in radians; that is convenient since a full circle is simply 2pi*r of curved length. You also have to account for the scaling of the heightmap, since the ratio of the outer to inner radii determines how close the ray is to the center of the cylinder at any point.
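The arithmetic above can be worked through in a few lines (function name is my own, assuming exactly one UV wrap around the cylinder):

```python
import math

# One full UV wrap means the circumference is 1.0 in UV units.
# Since circumference = 2*pi*r, the radius of curvature is r = 1/(2*pi).
def uv_radius_of_curvature(uv_wraps=1.0):
    circumference = 1.0 / uv_wraps   # UV length of one full turn
    return circumference / (2.0 * math.pi)

r = uv_radius_of_curvature()
print(round(r, 3))   # 0.159: also the tallest usable heightmap height,
                     # since anything taller would reach past the center
```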

To make the math a bit cheaper, instead of using a full rotate-about-axis matrix for the vector rotation, I convert the initial ray into an angle and then perform the rotation as a simple 2d rotation, which is only a sine and cosine in the loop.
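A minimal sketch of that idea (names and stepping scheme are my own, not the actual material code): the ray direction lives in the 2d cross-section plane and is bent by a constant small angle each step, so the rotation is just a sine/cosine pair rather than a full rotation matrix.

```python
import math

# 'curvature' is radians of bend per unit of travelled distance (1/radius).
# Because the bend per step is constant here, its sine/cosine can even be
# hoisted out of the loop.
def march_with_bend(ray_dir, step_len, curvature, num_steps):
    dx, dy = ray_dir
    x, y = 0.0, 0.0
    a = curvature * step_len
    ca, sa = math.cos(a), math.sin(a)
    points = []
    for _ in range(num_steps):
        x, y = x + dx * step_len, y + dy * step_len
        points.append((x, y))
        dx, dy = dx * ca - dy * sa, dx * sa + dy * ca  # 2d rotation
    return points

# With zero curvature the path stays straight:
pts = march_with_bend((1.0, 0.0), 0.1, 0.0, 5)
```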

Ray with curved correction (heightmap height is 0.1 here):

Moving the cross section around, you can see it hit the various parts of the heightmap. Note that the curvature correction is only being applied to the debug line here, not the actual POM material, which is why the POM material is not lining up near the edge of the cylinder (it lines up OK when looking mostly down).

There are also limits to how much height your heightmap can use. In our example case, the maximum height value is exactly 1/(2pi), which is ~0.159. Any higher and the heightmap would be taller than the radius, so the bottom of the heightmap would never be hit. If I set the height to 0.159, you can see how it fits perfectly:

Also note that since the inner radius is essentially 0 here, the ray completely veers away from the bottom as it approaches it. To actually hit the bottom requires a perfectly downward ray; even a tiny sideways direction causes it to bend away from the bottom (here it’s at around 1 degree).

This is still a ways off from being user friendly or fully implemented and tested on a wide range of content. I am still working on the part where the 3d vector is rotated as a 2d vector, but it’s looking good so far. Right now I am relying on a user-specified curvature axis, but the radius is being solved using ddx/ddy. So it works for general-case cylinders, walls that bend only around X, tree roots, etc.

It’s nice to see that you are still working on this; I appreciate it and I’m sure I’m not the only one.

To be honest, I expected POM to be part of UE4 from the start and was disappointed when I discovered that it wasn’t.
Most of the assets I had were made for CE3 (the engine I came from) and used POM quite extensively, so within UE4 they were lacking the depth POM gave them inside CE3.

But having POM now in UE4 is great, and I want to thank you for putting your time and effort into this. :slight_smile:

I integrated B’s Parallax Occlusion in my Perfect Tile System! It looks lovely! I tested it, but I’m still wondering what the difference is between multiplying the offset output and adding that to new UVs, versus just multiplying the result from Parallax UVs, to get different tiling factors. In my test, there was absolutely no difference between the two methods. It would be nice to have POM working on any curvature of the surface.

After testing it some more, here are some of my suggestions for the settings (for users who are completely lost with this):

  • Max Steps: 8-16 is good for most cases. 8 is good for thinner bricks where the effect is minimal; 16 is good for thicker cobblestones. More samples are more accurate, but more expensive.
  • Min Steps: 4 seems to work well. There are some cases where it causes too much distortion, but for most purposes I can’t imagine needing more than 4. You definitely need at least 3 to get a smooth interpolation.
  • Light Vector: 0,0,-1 is actually up. In a blueprint you can get the rotation of the directional light, pass it through GetRotationXVector, and plug that vector directly into the POM (through a Material Vector Parameter). It works so beautifully, it makes me want to cry!
  • Shadow Steps: Around 4 is good enough for most cases. I don’t see a need to increase shadow steps beyond that, even on extreme POM.
  • Shadow Penumbra: 1 is good for most cases. Going lower will start to darken everything, so I don’t suggest doing that at all. Going higher will lower the fidelity of the shadow and remove it from the places where it ought to be. The 1-1.5 range is “safe.”

You were right about tiling by just multiplying the Parallax UVs. I guess I was overthinking it. At one point I had a problem with it, but it was probably caused by something else, since the rest of the function wasn’t done when I first tried that. I will adjust the comments on the nodes accordingly. Exposing the length of the offset is still useful in some cases, if you want to compare the lengths of different POMs for instance.

I agree supporting general-case curvature would be nice. As of now I am not sure what the best way to get that data is. The most sensible option for now is probably to require a baked vertex-normal texture and tangent-basis texture for each mesh. The textures could be generated inside the editor with a simple emissive material using the vertex normal as color. I would like there to be some way for it to be automatic, but so far a method like that is a bit outside my reach. The ddx/ddy method has facets that cause too much fracturing.

Supplying the normal/tangent textures would actually allow much more accurate tracing through arbitrary curves, but it is probably fairly expensive, since it would require a triple texture lookup at each iteration (plus a cross product to derive the binormal), as well as transforming the camera vector from world to tangent space at each iteration. The approach I am working on for now tries to find a way around requiring that tangent texture and transform, but it also assumes that the curvature stays constant once the ray enters the surface. I need to get some test content to see if that causes issues with rounded corners etc. before knowing for sure which way to go. I think it should be OK for corners, though maybe not corners that suddenly curve back the other direction; not sure.
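The per-iteration cost being described can be sketched like so (pure-Python stand-in for the shader math; the normal and tangent are assumed already sampled from the baked textures, and all names are mine):

```python
# Derive the binormal with a cross product so it need not be stored,
# then transform the world-space camera vector into tangent space by
# dotting it against the three basis vectors (rows of the TBN matrix).
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def world_to_tangent(cam_world, normal, tangent):
    binormal = cross(normal, tangent)
    dot = lambda u, v: u[0]*v[0] + u[1]*v[1] + u[2]*v[2]
    return (dot(tangent, cam_world),
            dot(binormal, cam_world),
            dot(normal, cam_world))

# With the identity basis the vector is unchanged:
v = world_to_tangent((1.0, 2.0, 3.0), (0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
```

Doing this once per pixel is normal; doing it at every ray step, on top of the extra texture fetches, is what makes the accurate version expensive.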

I still think it’s absolutely stunning that I’m looking at a flat quad right now! The shadows really do make it pop. If for nothing else, thank you so much for that!

Jumping back to the issue of POM requiring many steps for certain noisy surfaces and becoming too expensive.

I have been messing with a hybrid raytracing approach that uses a 2d distance field combined with the standard heightmap. By using a channel-packed texture to store both the distance field and the heightmap, the additional math is very cheap, since there is still only a single texture lookup for each iteration. For materials requiring a high number of steps, this offers a large performance advantage. For materials with 8 or fewer steps it is a bit slower. Note that the distance field here is only sampled from the top of the surface; it is not using full 3d distance fields like some implementations.
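A hedged sketch of the hybrid march (all names are mine; `height` and `df` are callables standing in for the two channels of the packed texture): where a plain POM loop advances by a fixed step, the ray here advances by the distance-field value when that is larger, skipping empty areas quickly.

```python
# The ray starts at the top of the height volume (h = 1) and heads down;
# ray_dir[2] is negative. 'df' returns the 2d distance to the nearest
# occupied texel as seen from the top, in the same units as the UVs.
def hybrid_trace(uv, ray_dir, height, df, min_step, max_iter=128):
    t = 0.0
    for _ in range(max_iter):
        x = uv[0] + ray_dir[0] * t
        y = uv[1] + ray_dir[1] * t
        h = 1.0 + ray_dir[2] * t
        if h <= height((x, y)):           # dropped below the surface: hit
            return (x, y)
        t += max(df((x, y)), min_step)    # DF lets us take a safe big step
    return None                           # missed within the iteration cap

# Flat floor at height 0, ray at 45 degrees: hits after travelling 1 unit.
hit = hybrid_trace((0.0, 0.0), (1.0, 0.0, -1.0),
                   height=lambda p: 0.0, df=lambda p: 0.0, min_step=0.25)
```

With an all-zero distance field this degenerates to the ordinary fixed-step march, which is why simple materials with few steps see no gain.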

This requires you to pre-bake the distance field map for a specific Height Ratio. I created a material that can do this in-engine. It is a slow brute-force shader and is meant to be used on fairly low resolution textures, since the algorithm is O(n^2) where n is texturesize*height. It exposes a scalar to sample the texture at lower resolutions. One day I need to learn how to perform offline rendering using GPUs to avoid this kind of hackery.
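A brute-force sketch of the bake (my own simplification: a single height threshold stands in for baking against a specific Height Ratio): for each texel, find the 2d distance to the nearest texel whose height reaches the threshold. It is O(n^2) over texel pairs, so only suitable for small maps, much like the slow in-engine shader described above.

```python
# 'heightmap' is a callable (x, y) -> [0, 1] standing in for the texture.
def bake_df(heightmap, size, threshold):
    # texels that are "solid" as seen from the top at this height
    solid = [(tx, ty) for ty in range(size) for tx in range(size)
             if heightmap(tx, ty) >= threshold]
    df = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            best = min(((tx - x) ** 2 + (ty - y) ** 2) ** 0.5
                       for tx, ty in solid)
            df[y][x] = best / size        # normalize to UV units
    return df

# 4x4 map with one tall texel at (0, 0):
field = bake_df(lambda x, y: 1.0 if (x, y) == (0, 0) else 0.0, 4, 0.5)
```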

Here is what a DF map looks like for our standard debug heightmap at 0.1 and 0.2 height ratios:

Here is a performance and quality comparison of regular heightfield tracing and the hybrid tracing. For a heightmap like this one, where there is lots of negative space, using the distance field is a big win since it helps the tracer skip over the large empty areas more quickly. DF time is the delta time. Red means it cost more, green means savings.

It also has a bit fewer edge artifacts, which is a plus. For very simple POMs that do not require more than 8 steps, it will actually hurt performance though.

This only requires a few lines of code to be changed from the existing POM function.

Here is how it compares in steps debug view (white=128, min steps=4, max steps=512):

It is interesting that it kind of looks like volumetric fog surrounding the surfaces when you use the number of steps as a debug output: the tracer spends more time when it is near a surface and moves more quickly through empty areas.

Have you considered this? http://drobot.org/pub/M_Drobot_Programming_Quadtree%20Displacement%20Mapping.pdf
Quadtree displacement mapping could give similar performance gains to the 2d distance field, but without the additional memory usage/bandwidth and precomputation. It just needs min-filtered mipmap generation, so it should integrate pretty well with the rest of the tooling.

Nope, I had not heard of that method. I was aware of some mip-map-related approximations being used for ambient occlusion calculations, but this looks interesting. Will have to give it a good read later.

edit: my first reaction is that it seems like the gains only really showed up with really tall height ratio scales. But they were comparing against relaxed cone step mapping, which is itself another method that is probably a bit faster. Actually, I am not 100% sure which method of POM they were comparing against in the performance chart. I will probably try it since it doesn’t seem that hard to do.

For me the implementation looked quite simple. Keep in mind that the presentation is super old (from 2009), so the performance numbers are skewed.

For a more in-depth explanation of the technique, you should read GPU Pro.

The only part that would take a little bit of time is making the special “min” version of the mip maps. I wouldn’t want to try to generate those on the fly, but it shouldn’t be too hard to add a “Min” option to the mip-gen settings in the editor.
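A sketch of what that “min” mip chain would compute (plain lists stand in for the texture, side length assumed a power of two): each mip texel stores the minimum of its four children instead of the average, so one coarse lookup bounds the heights of its whole footprint, which is what the quadtree traversal relies on.

```python
# Build successively smaller levels until a single texel remains.
def build_min_mips(base):
    mips = [base]
    while len(mips[-1]) > 1:
        prev, n = mips[-1], len(mips[-1]) // 2
        mips.append([[min(prev[2*y][2*x],     prev[2*y][2*x + 1],
                          prev[2*y + 1][2*x], prev[2*y + 1][2*x + 1])
                      for x in range(n)] for y in range(n)])
    return mips

mips = build_min_mips([[1, 2], [3, 0]])
print(mips[1])  # [[0]]
```

Whether the convention should be min or max depends on whether the map stores height or depth; the idea is the same either way.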

Hmm, I actually wanted to suggest a “Normal to Heightmap” node to complement POM, but I almost forgot that the custom HLSL node can only sample texture objects.
Using anything but a texture object as the heightmap input would only be possible after modifying the way the custom HLSL node works, right?