POM material

You could just pass the normal map over as a texture object. But yes, you are correct: if you want to change the sample location of a texture inside the custom node, you need a texture object. That isn’t something that can be changed (maybe one day). If you pass in a texture without it being an object, it isn’t a texture, it’s just a value, and you can’t look it up anywhere else to get a different value out of it.

Yeah, I saw that in the HLSL view when I tried to port a “simple” SSDO post-process shader.
I had to sample a scene texture, but it didn’t work, so I looked at the HLSL view and saw that it came through as a float4 value.

Not everything works like one wants it to…:frowning:

But we are getting more and more POM candy from you, so no reason to be sad :smiley:

You actually can sample the scene texture as an object in the custom node. I did that once for an opacity-mask-to-distance-field converter and for a scene blur material. You can just reference them directly via code.

You just do it like:
DecodeSceneColorForMaterialNode(UV);

Or if you need depth:
CalcSceneDepth(UV);

UV would either be an input you specify as a custom node input, or a variable you declare and manipulate inside the node, like:

float2 screenpos=MaterialFloat2(ScreenAlignedPosition(Parameters.ScreenPosition).xy);

Then you do CalcSceneDepth(screenpos+someoffset); and then you can raytrace the depth or blur the scene or whatever.
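Putting those pieces together, a full Custom node body could look roughly like this (a sketch; `Offset` is a hypothetical float2 input you would add to the node yourself):

```hlsl
// Custom node body (output type: CMOT Float1).
// Sample scene depth at an offset screen position.
float2 screenpos = MaterialFloat2(ScreenAlignedPosition(Parameters.ScreenPosition).xy);
return CalcSceneDepth(screenpos + Offset);
```

From there you can march that depth lookup in a loop to raytrace, or average several offset samples to blur.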

Also got min mip generation working by changing the mip kernel code. I doubt I will be able to add an actual option to the texture properties anytime soon though.

Isn’t the SceneColor only available to translucent surface materials? Also the code I wanted to port sampled the scene ambient occlusion and world normal as well.

Outside of that, I didn’t know that was possible. I can actually use that knowledge for a few other materials I plan on doing, so thank you!

After rereading the presentation, it seems that it’s using an inverted heightmap, where the surface is 0 and the deepest end is 1. For typical heightmaps that would mean max mipmap generation instead. Sorry for the confusion.

Just sparking some conversation here…

Do you think it would be possible to write a version of parallax mapping that would push up instead of down? I’m thinking that if you use a box as your surface with the bottom polygon as the plane, then the raised polygons of the sides and top of the box would allow you to render pixels at a height above the plane. I’m thinking aloud here and will need to sketch through the tracing math to see if the inverse of it would work.

Now that I think about it, it would be very similar to a volumetric decal but with the ‘distance’ in 2d only.

Yes, that is really easy to add. I had it done at one point, then crashed and forgot to re-add it. All you have to do is add a portion of the initial UVDist value into the offset. If you subtract it, the POM will go up instead of down.
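As a rough sketch of that change (variable names like `UVDist` and `offset` just follow the description above, not the actual node code):

```hlsl
// Standard POM: offset pushes the UVs "down" into the surface.
// Subtracting a portion of the initial UVDist flips the parallax upward.
float upAmount = 1.0; // hypothetical control: 0 = normal POM, 1 = fully inverted
offset -= UVDist * upAmount;
```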

I might give it a go with a box and try it with a 0.5-midpoint-based heightmap so we can have parallax in both directions.

Hey, can you identify what POM method CryEngine 3 uses?
It’s very efficient…
But I’m sure that with your amazing and hard work you will get better results :wink:

This post contains all I know about that (not much):

https://forums.unrealengine.com/showthread.php?74086-SM5-Error-X4532-with-POM-Material&p=327962&viewfull=1#post327962

There are really only a few possibilities when it comes to curvature. Either they are baking a curvature map under the hood and then using that initial curvature value to bend the ray at a constant rate (slightly cheaper), or they could be actually tracing through the curvature map at each step and re-transforming the ray. Or they could be doing the same thing with a vertex normal map and a tangent vector map. Those are way easier to bake but more expensive to trace, since that requires three texture lookups at each iteration. If you are saying it’s efficient, then they are probably using a real curvature map. I am not quite ready to analytically render a curvature map myself (although I’ve been trying off and on), but if we get something from Xnormal that should be possible soon. I am also not sure how that data is normalized, since it’s possible for curvatures to go over 1. More to research. If anybody has more info, let me know.
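For what it’s worth, the first (cheaper) option could be sketched like this; this is purely hypothetical, guessing at the idea rather than reproducing anything CryEngine actually ships (`CurvatureTex`, `TangentViewDir`, `NumSteps`, and `StepSize` are all placeholder names):

```hlsl
// Sketch: sample curvature once, then bend the trace ray at a constant rate.
float curvature = Texture2DSample(CurvatureTex, CurvatureTexSampler, UV).r;
float3 rayDir = normalize(TangentViewDir);
float3 samplePos = float3(UV, 0);
for (int i = 0; i < NumSteps; i++)
{
    samplePos += rayDir * StepSize;
    // Constant-rate bend: tilt the ray further into the surface each step.
    rayDir = normalize(rayDir + float3(0, 0, -curvature * StepSize));
    // ...height comparison against the heightmap goes here...
}
```

The per-step retracing variant would instead re-sample the curvature (or normal/tangent maps) inside the loop, which is why it costs more lookups per iteration.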

So I tried it without the LightVector and got a result, but with the LightVector I get an error, as you can see in the first image.

What am I doing wrong? :confused:

Those links are both the same and they look related to the pixel depth offset not being the correct value. Can you post a screenshot of your material?

Oh yeah! Sorry for that, the first link was intended to be of the material. :slight_smile: Here is the material: Screenshot - 577e0943509cfc05c7448a23913fe753 - Gyazo

EDIT: I know there’s an error, and I read in the documentation that LightVector is deprecated, so I tried to use a collection parameter, but I got the same results.

You cannot use the light vector in an opaque material in a deferred renderer like UE4. You need to manually create the light vector using a Blueprint. I suggest using a material parameter collection to share the value between multiple materials. With a collection parameter, make sure you mask RGB, since it will try to pass a float4 (RGBA).
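On the material side, the collection value could then be turned into a light vector with something like this in a Custom node (a sketch; `LightPosWS` stands for the masked RGB collection parameter, which you would update from a Blueprint each tick, and it assumes `Parameters.AbsoluteWorldPosition` is available in your engine version):

```hlsl
// Build a world-space light vector from a light position passed in
// through a Material Parameter Collection (masked to RGB).
return normalize(LightPosWS - Parameters.AbsoluteWorldPosition);
```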

The “manual texture size” is not the dimensions of the textures, it is how big one 0-1 tile is in worldspace. That number is definitely wrong and that is why the intersection with the cylinder looks wrong in your animated image.

Oh, alright. Thanks. :slight_smile:

I have another question: why doesn’t it react well with lighting? It happens when I place a point light near the mesh with POM. It receives lighting very well when I place the light at the same position as the mesh, but I won’t always be able to do that.

I think it’s probably me who did something wrong though, I have “Render Shadows”, “Specify Manual Texture Size” and “Use World Coordinates” off.

EDIT: I noticed it’s because of Pixel Depth Offset. Is there any workaround? I really like it. :frowning:

Depth offset needs to be fixed on Epic’s side. Not sure on the timeline though.

Volumetric decals…you know what that means?!
Parallax bullet holes!!!
Go give that idea to the Unreal Tournament team ASAP if they haven’t been trying already.

Parallax pseudo-decals are easy (showdown used them and I believe UT already uses a version of them), but actually poking holes in walls using decals or parallax is another challenge altogether.