POM material

Here is the solution to non-uniform scaled POM:

[Image: node graph for the non-uniform scale fix; the added Divide node is highlighted in blue on the right]

The blue box on the right is the only addition. Simply place the divide inline.

The non-uniform scale parameter should match the scaling of the mesh.
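
For anyone doing this in a Custom node rather than the graph, here is a minimal sketch of the idea only. The parameter name NonUniformScale is hypothetical; the actual fix is just the equivalent Divide node placed inline as shown in the image.

```hlsl
// Hedged sketch: compensating a parallax UV offset for a non-uniform mesh scale.
// NonUniformScale is a hypothetical vector parameter set to the mesh's scale.
float2 CompensateParallaxOffset(float2 ParallaxOffset, float2 NonUniformScale)
{
    // A stretched axis needs a proportionally smaller UV offset,
    // so divide the offset by the per-axis scale.
    return ParallaxOffset / max(NonUniformScale, 0.0001);
}
```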

Thanks for this :slight_smile: It works great!

Awesome work you are doing. I'm currently experimenting with POM; is it possible to change it to accept vectors for the texture maps, so you can combine some textures beforehand and use their multiplied result as the input for POM?

Unfortunately, that is not possible. POM works by ray tracing through the texture. That means that, from the starting point, the shader needs to be able to march along the camera vector and read new neighboring pixels at each step. When you pass a vector you don't have that ability: all reference to the texture is gone and you simply have the data at that one point, which does not help for POM.
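
To illustrate why, here is a bare-bones, hedged sketch of a POM-style march (not the actual material function's code; names and the sign convention for the tangent-space view vector are assumptions). The key point is the re-sample inside the loop, which is impossible if all you passed in was a single pre-blended value.

```hlsl
// Illustrative only: a minimal height-field ray march in the spirit of POM.
// HeightTex / HeightSampler: the height map - the loop must be able to re-sample it.
float2 SimplePOM(Texture2D HeightTex, SamplerState HeightSampler,
                 float2 UV, float3 ViewTS, int NumSteps, float HeightRatio)
{
    float  StepSize = 1.0 / NumSteps;
    // UV shift per step, derived from the tangent-space view direction.
    // The exact sign depends on how ViewTS is defined in your setup.
    float2 UVStep = (ViewTS.xy / max(ViewTS.z, 0.001)) * HeightRatio * StepSize;

    float  RayHeight = 1.0;   // start the ray at the top of the height volume
    float2 CurUV     = UV;

    for (int i = 0; i < NumSteps; i++)
    {
        // Re-sampling the height map at a *new* UV every iteration is the part
        // that requires a texture object rather than a pre-sampled vector value.
        float SurfaceHeight = HeightTex.SampleLevel(HeightSampler, CurUV, 0).r;
        if (SurfaceHeight > RayHeight)
            break;            // the ray has dipped below the surface

        RayHeight -= StepSize;
        CurUV     -= UVStep;
    }
    return CurUV;             // the "Parallax UVs"
}
```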

The only way to do what you are asking correctly is to actually pass each individual texture object to the shader and have it ray trace all of them together, and also pass in the height parameters for each. That means you would even need to pass in a texture for your blend mask and ray trace that too. There is no way to perform any kind of parallax when blending using vertex colors, but that is very common and usually not that noticeable as long as you don't use really deep POM heights.

One way to get a cheap version of this:

Make two POM setups with separate height maps and everything, but use a mask to crossfade their height values so that as one reaches 0, the other reaches its max.

Then supply the parallax UVs to the same height maps so you can sample the parallaxed height. Then simply check which surface is higher and use that one; you can use the If node. If you go back a few pages, I showed a method almost exactly like that here already.
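
A rough sketch of that final comparison step, assuming the heights have already been sampled with each surface's own parallax UVs and crossfaded by the mask (names are placeholders; in the graph this is just an If node):

```hlsl
// Hedged sketch: pick whichever of two parallaxed surfaces is "higher" at this pixel.
float3 BlendTwoPOMLayers(float HeightA, float HeightB, float3 ColorA, float3 ColorB)
{
    // Equivalent to the If node: whichever surface sticks up further wins.
    return (HeightA >= HeightB) ? ColorA : ColorB;
}
```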

Believe me, I wish it was possible too. One workaround is to simply pre-combine your blended heights in Photoshop or something. I know you might lose resolution, but it's an option.

Is POM usable on Gear VR? I see some folks who use Unity using POM extensively for Gear VR games. I am not sure how that's possible without bringing performance down by a lot :confused:

Depends on your scene complexity. My best guess is that a ‘cheap’ POM with ~16 steps would cost at least 1-2ms to render if covering the whole screen. So if your scenes are simple and you can afford that hit, then sure. All of the cost would be reflected in the “Base Pass”, which you can glance at by typing “profilegpu” into the console in the editor. Try to take 3-4 captures in a row and look at the frame time, since often the first capture you do will show a huge spike.

Of course that is only a guess. What I mean to say is it will cost you some real percentage of your overall budget.

Is there any progress with pixel depth offset and dynamic shadows? I just noticed that there is a new CameraSpace option on the Transform node, but documentation is very limited. It differs from ViewSpace in the shadow passes. Is that what we need to nudge the depth offset in the right direction during the shadow pass to avoid self-shadowing?

No progress yet, but actually you are indeed on the right path to hacking around it. I just did something similar to solve an unrelated tree billboard issue.

In code, during the shadow pass, the View Transform gets overridden, replacing the CameraVector with the Light Vector. That means you have access to the light direction by reading the view transform.

You can use that information in the WorldPositionOffset input to pull vertices towards the light. Note that it will only be per vertex, not per pixel.

The way you have to do that currently is to define a VectorParameter for your light vector using a Blueprint. Then, in the material, you can take the dot product of (0,0,1) transformed from View->World space with your light vector. Then you can check if that dot product is greater than a certain threshold value (i.e. 0.97 or something, I think that's what I used). If that returns true, you know you are in the shadow pass; if not, you are in the camera pass.
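
As a rough Custom-node sketch of that check (the LightDirection parameter stands in for the Blueprint-driven VectorParameter described above, ViewForwardWS is (0,0,1) transformed from View to World space in the graph, and 0.97 is just the example threshold):

```hlsl
// Hedged sketch: detect the shadow pass by comparing the view's forward axis
// with the light direction supplied by a Blueprint-set VectorParameter.
float IsShadowPass(float3 ViewForwardWS, float3 LightDirection, float Threshold /* e.g. 0.97 */)
{
    // During the shadow depth pass the view transform is replaced by the light's,
    // so the "camera" forward direction lines up with the light vector.
    float Alignment = dot(normalize(ViewForwardWS), normalize(LightDirection));
    return (Alignment > Threshold) ? 1.0 : 0.0;
}
```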

Now, of course, a slight limitation to this method is that it will ALSO kick in when your camera is perfectly (or nearly) aligned with your light vector. In most cases that is very rare, and even if it does happen it's not a big deal, since the views are very close and you won't be changing the result very much.

Hope that makes sense.

It's on the list to one day override CameraPosition to be LightPosition in the shadow pass. It's harder than changing the view transform because, at the point where the shadows are rendered, the engine doesn't have direct access to the uniform shader parameters, just a limited set of them.

If Camera Space holds the shadow pass transformation, could that be used to get the LightPosition? Just transform the origin from World Space to Camera Space and done.
Pixel depth offset would be so important for SSAO too, but I just can't use it because of the shadow bugs. Maybe I'll spend tomorrow hacking on it.

For non-directional stationary lights I assume that pixel depth offset does nothing for shadows, because those are static for non-movable objects.

You would think, but it gets camera position from somewhere else in the buffers.

CameraPos is WorldViewOrigin and is part of the view uniform shader params. The transform that is used for the shadow pass is actually in a more limited set of params passed through the RHI. I am actually not 100% up to speed on the differences and limitations; I just know that I tried it and it didn't work.

Edit: OK, apparently it can be done, but I just wasn't doing it right. And changing the ViewWorldOrigin in the shadow pass could have other disastrous, unknown side effects. My content hack above is probably safest.

Check in ShadowRendering.cpp for a function called “ModifyViewForShadow”.

You would need a way to modify the FViewUniformShaderParameters ViewUniformShaderParameters;

Currently that function only deals with the smaller subset that is modified on the GPU, as I mentioned above. I'm not sure exactly the best way to modify the ViewUniformShaderParameters from that function, and that's all I know on the subject.

Sadly, the shadow passes are a lot harder to debug than the main pass. I can't just output debug data as a color and visualize the problem at hand. I'll probably need to spend all of tomorrow on this.

How is the update to the rendering code to support shadows using the pixel depth offset going?

Could curvature be calculated for any arbitrary mesh by using the divergence of the normal (technically it's called the Laplacian)?

If you could calculate it, and assuming the results were clean, it would work, but it would assume constant curvature. The only way to know about changing curvature is to sample the curvature again. So it may work for cylinders or corners, but not for saddle shapes.
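
For reference (sign conventions vary), the relation being asked about is that the divergence of the unit surface normal gives the sum of the principal curvatures, i.e. twice the mean curvature:

```latex
\kappa_1 + \kappa_2 = \nabla \cdot \hat{n}, \qquad H = \tfrac{1}{2}\left(\kappa_1 + \kappa_2\right)
```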

Sorry for bringing up an old, unrelated post, but I got back to thinking about finding the trace vector through a bounding box. I had originally found another way to get entry and exit points for a cube with some collision-detection math, but now I believe I've figured out the math that can be used with the UVW bounds node.

Here's the example in 2D, which should work in 3D too. Since we know the eye vector (E) and the normal (N), we can take the dot product of the normalized E and N to get the cosine of the angle (which happens to be the opposite angle on the other side). We also know the adjacent side, since it is how far we are from the edge using our UVW components. We want to find our trace vector, which is the hypotenuse, so we just do some simple trig to work out that h = a / cos(theta), which is really h = a / (E · N). Do that for all three components and we should have our vector.
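
In HLSL, that per-component trig might look something like this. This is only a sketch under the assumption that UVW runs 0-1 inside the box and E is the normalized trace direction expressed in the same box-aligned space; taking the minimum of the three results picks whichever face the ray actually exits through.

```hlsl
// Hedged sketch: distance (in UVW units) until the trace leaves the unit box.
float DistanceToBoxExit(float3 UVW, float3 E)
{
    // Which face each axis is heading towards: 1 if moving in +axis, 0 if in -axis.
    float3 TowardOne = step(0.0, E);

    // "Adjacent" side per axis: how far we are from that face.
    float3 A = abs(TowardOne - UVW);

    // h = a / cos(theta); cos(theta) is just the matching component of E,
    // because each box face normal is an axis.
    float3 H = A / max(abs(E), 0.0001);

    // The nearest face limits the trace.
    return min(H.x, min(H.y, H.z));
}
```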

Is this what you were suggesting, or is there an easier way in a material to know how far a vector is through the bounds?

Ah, just realised that wouldn't work if looking across faces. I shouldn't think late at night.

I just set up my material with POM by using the help docs in Unreal. I think it is looking really nice, but… how in the world is the silhouette changing in these images? My bricks look awesome, but they're not breaking the silhouette like I see these rocks doing in the images above. Can someone explain what I'm doing wrong?

Cheers. Awesome thread, btw!

Dopiken, you have the right idea. But I would probably do it using planes and then simply choose the one with the minimum distance for the entry hit. Then for the backface exit you do the same, but you change the 3 planes to be the ones opposite the origin from you. Of course, I'm sure there is also some simple 5-line code that will do it. Sometimes I solve stuff using nodes and then look at the compiled code for clues as to how to optimize it and gain a better understanding of simpler methods. I.e. you can save instructions by not computing a full vector length but only checking the “Z” from each vector of each plane.
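
One compact way to express that plane idea in code is the standard ray-box “slab” intersection. This is only a sketch (not the exact node network): it assumes the box is the 0-1 UVW bounds, with the ray origin in that same space and a normalized direction.

```hlsl
// Hedged sketch: entry/exit distances of a ray against the 0-1 UVW bounding box.
// Returns float2(entry, exit); if entry > exit the ray misses the box.
float2 RayBoxUVW(float3 RayOrigin, float3 RayDir)
{
    // Per-axis sign (+1 or -1) so we never divide by zero on axis-aligned rays.
    float3 DirSign = step(0.0, RayDir) * 2.0 - 1.0;
    float3 InvDir  = DirSign / max(abs(RayDir), 0.0001);

    // Distances along the ray to the two planes of each axis (the 0 and 1 faces).
    float3 T0 = (0.0 - RayOrigin) * InvDir;
    float3 T1 = (1.0 - RayOrigin) * InvDir;

    float3 TNear = min(T0, T1);   // the plane facing the ray origin on each axis
    float3 TFar  = max(T0, T1);   // the plane on the opposite side

    float Entry = max(TNear.x, max(TNear.y, TNear.z));
    float Exit  = min(TFar.x,  min(TFar.y,  TFar.z));
    return float2(Entry, Exit);
}
```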

Re: silhouette, that is just done by checking if the “Parallax UVs” value is either less than 0 or greater than 1. If so, write 0 to opacity; otherwise write 1. Note that this method only works in very simple cases where the UVs are a square. You would have to actually write a shader that defines the boundaries of your surface if you want it to work with shapes other than squares.
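
In node terms that's just a couple of comparisons feeding Opacity Mask; as code, a sketch of the same check would be:

```hlsl
// Hedged sketch: silhouette clipping for the simple square-UV case.
// ParallaxUV is the UV returned by the POM trace.
float SilhouetteOpacity(float2 ParallaxUV)
{
    // Outside the 0-1 square means the ray left the surface: make it transparent.
    bool Outside = any(ParallaxUV < 0.0) || any(ParallaxUV > 1.0);
    return Outside ? 0.0 : 1.0;
}
```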

Ty for the info. You rule!

cheers

My question might be a bit off-topic, but why does dynamic branching seem not to work as expected with the POM node?

For example, this node network works as expected, and I'm getting a significant performance difference between the two layers.


But when I insert POM into one of the branches, it appears that both branches are being processed, and there is no more performance gain.

http://image.prntscr.com/image/788051ef78ee4d23982ae2e714d0934a.png

And as a side note, when Sampler Source is set to Shared and MipValueMode is set to Explicit Derivative, every texture seems to increase the sampler count, up to 16. Is that intended?

I’d be really grateful if anyone could shed some light on these questions.