Is there an alternative to DDX/DDY for calculating normals when using DistanceToNearestSurface?

I want to create small river waves along the shore using WPO, but there is a problem with calculating the normals. Right now I use DDX and DDY, but I am not satisfied with the result. Is there a way to calculate the normal map for cases like this? I create the wave using Cos and DistanceToNearestSurface.



Just asking, but that’s a landscape (sand) and you’re using a flat mesh for the water, where the water material samples the distance to the landscape to create waves?

Yes, when you create offsets using math functions, you can mathematically derive the normal. This thread shows the formula and an example of it being done with a sine wave; it just needs some minor tweaks to work with cosine (or you could switch your wave to be based on sine).
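The idea, sketched outside the material editor (Python here, with made-up amplitude and frequency values purely for illustration): the normal falls straight out of the analytic derivative of the height function, no DDX/DDY needed.

```python
import math

def wave_normal(dist, amplitude=10.0, frequency=0.5):
    """Tangent-space normal for height(dist) = amplitude * cos(frequency * dist),
    where dist is the distance to the nearest surface.
    The slope is the analytic derivative: -amplitude * frequency * sin(frequency * dist)."""
    slope = -amplitude * frequency * math.sin(frequency * dist)
    # The 2D normal is perpendicular to the slope: normalize((-slope, 1)).
    length = math.hypot(slope, 1.0)
    return (-slope / length, 1.0 / length)
```

At a wave crest or trough the slope is zero, so the normal points straight up, which is a quick sanity check for the derivation.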

Thanks for the quick reply. I tried to copy your method, but the result is a bit strange. I’m not a programmer, maybe I made a mistake.





Landscape (sand) + Rectangle with High Poly mesh (water).

The issue is you can’t just transform the normal into tangent space like I did on the plane, because the mesh’s tangent direction is not oriented the same way as the gradient you are using as an input. Doing this bends the normals all in the same direction, instead of in the direction of the wave.
This can be seen by viewing the world normals:


Essentially, you’ve found the normals in 1D. They need to be rotated to align with the direction from the pixel to the surface. You can get the direction to the nearest surface with the Distance Field Gradient node. You can probably just rotate this vector to align with the RG components of the gradient.

Yes, I understand the problem. I tried to solve it with DistanceFieldGradient, but my knowledge of UE is not enough. As far as I understand, there are no negative values in DistanceFieldGradient, and I got stuck at this point.

It does have negative values; you just can’t see them in the preview without doing some math to make them positive. You can prove this by connecting an Absolute value node.

I can’t understand how to use the DFG correctly. I tried different options and it doesn’t work. I also tried to use the DFG as a mask for different waves; it turned out to be difficult and the result is not correct. As I understand it, just plugging it into the coordinates is not enough.

Think of the RG components of the DFG as a vector telling us which direction along the surface the normal should bend. If the non-rotated normal is correct in one specific direction only, and we can tell which direction that is, then we can figure out how much to rotate by comparing how similar the distance field gradient is to that linear direction.
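That similarity comparison is just a dot product. A minimal Python sketch (the function and argument names are mine, not material nodes):

```python
import math

def direction_similarity(gx, gy, lx, ly):
    """Normalized dot product between the distance-field gradient (gx, gy)
    and a fixed linear direction (lx, ly): 1 when parallel, 0 when
    perpendicular, -1 when opposite."""
    return (gx * lx + gy * ly) / (math.hypot(gx, gy) * math.hypot(lx, ly))
```

The closer the result is to 1, the less the 1D normal needs to be rotated away from that direction.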

Theoretically, I understand the principle. But I don’t know how to tell this to the machine
My last attempt





WPO works fine but not without problems
Normal incorrect

You’re lucky I’m a weirdo that finds this stuff fun…
When you want to solve a complex problem in tech art, it can be good to take a step back and break it down into smaller problems you know how to solve, or can find information on.
The first thing you will need to understand is how to rotate normal maps or vectors in general using trigonometric functions.
This video, at about the 4-minute mark, goes into the basic trigonometry of taking a (0-1) input value and turning it into a 2D vector that can be used to get a new normal vector.
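The trig from the video boils down to something like this (a Python sketch, assuming the 0-1 input is mapped to one full revolution):

```python
import math

def rotation_vector(t):
    """Turn a 0-1 input into a 2D unit vector: t = 0 maps to (1, 0),
    t = 0.25 to (0, 1), and t = 1 wraps back around to (1, 0).
    This cos/sin pair is what rotates a tangent-space normal."""
    angle = t * 2.0 * math.pi  # one full revolution
    return (math.cos(angle), math.sin(angle))
```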

Make sure you’re comfortable with these concepts, because it’s about to get a lot more complicated.
But just so you know it’s worth your while, here’s the result:
The Finished Effect
(Normals below strengthened for visibility)


Now, there is probably a better way of doing this, but this is what I came up with. Hopefully someone with more free time can optimize it further.

First, a minor modification to the original normal function: we only need the G component. This goes back to when I said the original vector was kind of “1D”. Otherwise it’s the same.

So we have our DFG, and we know it tells us what direction our surface is in relative to our pixel. We know how to construct a rotated normal map using dot products. But how do we know how much to rotate by? ArcTangent2 is what I came up with: it outputs an angle based on a 2D input vector. ArcTan2Fast causes some visual glitches that can be fixed later using a SafeNormalize; I’m not sure if this is better or worse than using the more expensive ArcTan2.

We take it once with (X,Y) and again with (Y,X) to get two rotation values: one for our X normal component, and the other, perpendicular one for our Y normal component. Next, we want to remap this value. We only want to rotate positively, so we use Absolute value. We also only want to rotate one revolution, but later we’ll find the normal actually rotates several times as we move around our gradient vector. This can be solved by dividing by 4.
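On the CPU that remap would look roughly like this (a Python sketch; `atan2` stands in for the ArcTan2 node, and the names are mine):

```python
import math

def rotation_amounts(gx, gy):
    """ArcTan2 once with (X, Y) and once with (Y, X), made positive with
    abs() and divided by 4 so the normal only rotates one revolution
    as we move around the gradient vector."""
    rot_x = abs(math.atan2(gy, gx)) / 4.0
    rot_y = abs(math.atan2(gx, gy)) / 4.0
    return rot_x, rot_y
```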

Using what we know about rotating normal maps, we can now take our two values as inputs to our rotation function. Just like in the video, we have two perpendicular values going into Sin and Cos to give us the rotation vectors we need. Now we can take the dot product between the original vector and our rotated vectors to create a new red and green channel. Lastly, we append the blue component back in and renormalize.
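For anyone following along outside the editor, here is the rotation step as plain math: a simplified Python sketch of rotating the “1D” (G-only) normal in the tangent plane, then rebuilding B and renormalizing. This is a condensed 2D rotation matrix, not a one-to-one copy of the node graph.

```python
import math

def rotate_rg(normal_g, angle):
    """Rotate the one-directional normal (0, G) by an angle in the
    tangent plane, reconstruct B, and renormalize."""
    # (x, y) rotated by angle is (x*cos - y*sin, x*sin + y*cos), with x = 0:
    r = -normal_g * math.sin(angle)
    g = normal_g * math.cos(angle)
    b = math.sqrt(max(0.0, 1.0 - normal_g * normal_g))  # rebuild the blue channel
    length = math.sqrt(r * r + g * g + b * b)
    return (r / length, g / length, b / length)
```

With a zero angle the original G-only normal comes back unchanged, which is the easy case to verify.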

We also want our mask to reduce the amplitude of the sine wave at the start of the function; this naturally blends away the effect without needing to lerp in a flat normal later. I’ve also added a Time node to push the wave away and give the effect movement.
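Folding the mask and time into the height function looks something like this (all parameter names and values here are my own illustrations, not the exact material parameters):

```python
import math

def animated_wave_height(dist, time, amplitude=8.0, frequency=0.4,
                         fade_width=200.0):
    """Wave height with the mask folded into the amplitude and a time
    term for movement. Fading the amplitude itself means the normal
    blends away with the wave, with no extra lerp to a flat normal."""
    mask = min(max(dist / fade_width, 0.0), 1.0)  # 0 right at the shore
    phase = frequency * dist - time               # subtracting time pushes the wave outward
    return mask * amplitude * math.cos(phase)
```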

Also here’s an external link to a higher res screen grab of the material example since Forum links are too low res.


Thanks a lot! Everything works great!
I have been trying to solve this problem for several weeks.
I will try to adapt this function to my waves.

Comparison with DDX DDY



There are black artifacts; as I understand it, the problem is in the reflections.
The artifacts partially disappear if I reduce the strength of the normals, but then the point of the normals is lost.

I wonder if there are other ways of building procedural waves like this? Am I the only one interested in this?
The UE Water System seemed inflexible to me, and I did not find information on building waves like these (not epic waves).
There are many examples of water on the Internet, but none of them interact with the shore, only foam.
And I see no point in using Houdini for my tasks.

You need to sanity check your normal strength against the DDX/DDY normals using a dot product. Our formula allows us to set a normal strength that is greater or lesser than the physically accurate one.
Take the dot product between them; when you have the physically accurate value, the output will be pure white:


Correct Value, viewed in unlit:

Incorrect value, too strong:

Unfortunately this value seems to change with the amplitude of the wave. There is probably a proportional correlation that could allow the strength to be set automatically, but since there’s a little wiggle room for error, as long as there aren’t extreme changes in wave amplitude no one will notice.
Using the correct value will solve most artifacts.
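If it helps, the dot-product sanity check can be reproduced on the CPU, with a finite difference standing in for DDX/DDY (a sketch with my own function names, assuming a cosine height function):

```python
import math

def analytic_normal(dist, amplitude, frequency):
    # Normal from the analytic slope of height = amplitude * cos(frequency * dist).
    slope = -amplitude * frequency * math.sin(frequency * dist)
    length = math.hypot(slope, 1.0)
    return (-slope / length, 0.0, 1.0 / length)

def derivative_normal(height, dist, eps=1e-4):
    # CPU stand-in for the DDX/DDY normal: a finite difference of the height.
    slope = (height(dist + eps) - height(dist)) / eps
    length = math.hypot(slope, 1.0)
    return (-slope / length, 0.0, 1.0 / length)

def agreement(n1, n2):
    # The dot-product check: close to 1.0 ("pure white") when strengths match.
    return sum(a * b for a, b in zip(n1, n2))
```

Overdriving the amplitude in `analytic_normal` drops the agreement below 1, which is exactly the darkening seen in the unlit view.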

Any remaining black reflections will be issues with Lumen, where black spots usually indicate that the ray has either struck an object that cannot be drawn in reflections (such as metals, before multi-bounce support was added in the latest version), or missed its hit entirely.


Thank you for spending your time on my problem. You helped a lot!


~~Had a few more seconds to run some tests. The correct normal strength figure appears to be around (Pi/10)/Amplitude? Regardless of what value you choose, dividing that value by the amplitude will maintain it well across any amplitude.~~ Any figure seems to have some error.
The inaccuracies seem strongest in the corners (where both R and G are rotated) and smallest where only R or G was rotated, which is not surprising, because the vectors end up slightly longer at these points. Maybe that could be improved further, but I’ll leave it there for now. Edit: Solved, see later.

You’re welcome!


This helps me a lot as well, thank you for the deep dive; I learned quite a bit.

You should throw this, and whatever else you’ve worked on, into a time-saver kit and put it on the marketplace. :smiley: Seeing some of your other solutions, it would be akin to the Materials Lab from Epic: here’s how to do this.


I’ve certainly thought about it. Glad it has helped.

Actually, I think I have now solved this. Because we know how long the vector was before we rotated it, we know that its length should be preserved. Rotating the vector shouldn’t lengthen it, but it does, due to how we’re combining channels.
So we just need to set the magnitude of our RG vector to be equal to the sine normal’s length.
The formula is U = (L / ||V||) * V,
where U is the output vector, L is the desired length, V is the input vector, and ||V|| is its magnitude.
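In code form, the length correction is a one-liner (Python sketch):

```python
import math

def set_length(v, target_length):
    """U = (L / ||V||) * V: rescale vector V to the desired length L."""
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(target_length / mag * c for c in v)
```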


Here’s a difference image. The color in the corners shows that the length correction was greatest in the areas of greatest rotation, as expected.
And here’s what it looks like in the material:

Also, it looks like I made a dumb mistake earlier. When I was calculating the dot product to find the right normal strength, I had it set to exclude material offsets, so the DDX/DDY wasn’t actually working, which made it favor lower normal strengths (as it was just comparing against a flat normal). So forget what I said about that; just leave it at a fixed value (make it negative if you find your normal is inverted) and alter it to harden or soften the normals. I do find 0.314 to be quite pleasing. And don’t divide by amplitude unless you want the normal strength to be fixed regardless of amplitude.
The basic premise of sanity checking them by comparing them to DDX/DDY normals is still valid. Another way I like to compare is with a “time sine debug” node driving a lerp between them.

Okay, I’ve been able to significantly optimize this.
I couldn’t shake the feeling that there was a way to avoid turning our gradient vector into a rotation value only to turn it back into a vector. It turns out there is, and it’s very simple. I looked at what our final normal map looks like after all of its rotations, and the answer was that it looks like our original gradient multiplied by our 1D map.

Here’s what I mean…
Our rotated normals excluding B:


Our distance field gradient… Look familiar? But we’re still not quite there.

What if we invert it?

Aha! It is just alternating between the inverted and non-inverted vectors of our DFG. In hindsight this makes perfect sense, but I guess I just had to do things the hard way to see it.
If the DFG points towards the mesh, it only makes sense that half of the sine wave would also point towards the mesh, and the other half would point in the opposite direction of the DFG.

Here’s the wave. We can just alternate which way the normal points based on whether our sine function is sloping up or down.
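The whole optimized idea fits in one small sketch (Python, with my own parameter names): the signed slope of the wave does the inverting, so scaling the DFG’s RG components by it replaces all the ArcTan2 rotation work.

```python
import math

def optimized_normal(gx, gy, dist, amplitude=8.0, frequency=0.4):
    """Scale the distance-field gradient's RG by the signed slope of the
    wave; the slope's sign flip alternates the normal between the
    inverted and non-inverted gradient directions."""
    slope = -amplitude * frequency * math.sin(frequency * dist)  # derivative of cos height
    r, g = gx * slope, gy * slope
    b = 1.0
    length = math.sqrt(r * r + g * g + b * b)
    return (r / length, g / length, b / length)
```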

Anyway, here’s the optimized version of the material, now with 75% fewer trigonometric functions, which should dramatically improve performance.


Also, you may or may not want to correct for the vector length. While it makes the RG vector length consistent, it also appears to make the transitions between distance fields pointier, so maybe set it up as a bool in case it looks worse for you.
With length correction:
Without:
