Using the linear gradient node in the lerp’s alpha gives me a gradient, but it has heavy banding and follows the camera. I have no idea what I’m doing, but at least it looks closer to what I want
Love the progress KhenaB!
Glad you like it, I’m getting there slowly
Sorry for not replying sooner, I wasn’t around a computer over Easter.
It is indeed mathematically a bit more complex to do it properly.
The first step to making it more accurate would be separating the light absorption from the light diffusion. Those are the main effects you are dealing with, and replacing them with a single fog factor is not particularly accurate. Absorption means that less light reaches you the further the ray has to travel through the water, while the turbidity of the water causes light from above the surface to be scattered, creating the diffuse colored haze in the water.
The absorption is only dependent on the distance the light has traveled through the water and can be easily modeled with the Beer-Lambert law.
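The Beer-Lambert relationship is simple enough to sketch in a few lines. Here is a minimal Python illustration of the math (not UE material code; the per-channel coefficients are made-up example values, not measured water data):

```python
import math

def beer_lambert_transmittance(absorption_coeff, distance):
    """Fraction of light remaining after traveling `distance`
    through a medium, per the Beer-Lambert law: T = exp(-a * d)."""
    return math.exp(-absorption_coeff * distance)

def underwater_tint(rgb, coeffs, distance):
    """Apply per-channel absorption. Water absorbs red faster than
    blue, so a larger red coefficient tints things blue-green."""
    return tuple(c * beer_lambert_transmittance(a, distance)
                 for c, a in zip(rgb, coeffs))
```

In a material graph this boils down to an Exp node fed with the negated product of a distance and an absorption parameter.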
The diffusion due to turbidity is more complex, as it is technically not linear and depends on both depth and orientation. To do it properly you would probably have to use something like raymarching. However, you can probably get quite close with a linear approximation.
If it would help I can see if I can find the time to create another prototype.
Thank you for the in-depth explanation Arnage. Doing it accurately sounds really complicated. Are we talking about changing how my directional light behaves underwater, or faking the effect somehow? The fog was my way of faking a water body, and adding the brightness-diminishing effect to the fog was the only way I could think of to fake the light absorption
Would it be feasible to use world-space Z coordinates mixed with the distance from the camera to create a fog-like overlay material that all underwater objects use?
As long as all objects in the underwater scene use the material it should look “correct”
Here, I did a quick mockup:
Currently it just uses a lerp to switch between two colors; however, I can imagine you could use a much more detailed lookup table to determine how colors change with distance and height.
EDIT:
Another way is to use the Z height to suck the colors out of your objects using special materials, and then use a fog to take care of the depth effect
Hi MissStabby, and thank you for chiming in
From what I understand, with your solution only the objects (their materials) would be affected
I also need to consider the empty space in which I’m swimming as an “object” to simulate water density, which is what my height fog does. Even if it isn’t really accurate, it’s the simplest method I know of; without the fog it would look as if I’m swimming in thin air
Therefore, to really get the effect I’m after, the water (fog) also has to be affected by the effect
Again, thank you for chiming in, any help is really appreciated. I hope I understood your suggestion properly
Actually, MissStabby is right: adding the effect to the materials themselves can have the same effect as using a post process material, as long as you make sure the empty space has some kind of geometry (such as a skybox) to calculate the effect for those pixels. I doubt there will be a significant difference in performance either way.
I couldn’t resist prototyping a new method (at least new for me, not claiming no one else came up with it before :P) to more accurately (yet cheaply) simulate the underwater light interaction. Here’s the result compared to exponential height fog:
(The scene itself is just an ugly noise landscape with a white material)
Here is a breakdown of the main steps:
As you can see there are two absorption parts. The first is from the sun to the object, and is implemented with a light function. The object to camera absorption and turbidity diffusion are both done in a post process material.
Finally the turbidity diffusion correctly responds to the depth and camera orientation:
I ended up using the Beer-Lambert law a lot so the first step was to create an absorption material function:
The light function on the sun then simply applies this function based on the depth along the Z axis. (For simplicity’s sake I placed my water plane at Z = 0.)
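In plain math, that light function boils down to something like this (a Python sketch with hypothetical names, not the actual material nodes; the water plane sits at Z = 0 as described):

```python
import math

def sun_light_factor(world_z, absorption_coeff, water_level=0.0):
    """Brightness multiplier for sunlight reaching height `world_z`.
    Below the water plane, sunlight has crossed a water column of
    (water_level - world_z); above it, no absorption occurs."""
    depth = max(water_level - world_z, 0.0)
    return math.exp(-absorption_coeff * depth)  # Beer-Lambert again
```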
Finally we have the post process material:
The direct light part handles the absorption between the camera and the object. Getting a nice approximation for the turbidity diffusion turned out to be simpler than I thought. The orientation part of the effect is handled by simply taking the Z axis of the camera vector to fade the effect out the further down you look. (As stated in the comment box, it could also be oriented toward a sun actor, but in your overcast scene a vertical orientation probably works best.) Finally, the depth effect was achieved by using the absorption function again, but this time taking the depth of the camera itself as an input.
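In rough pseudocode (Python, with made-up parameter names rather than the actual material nodes), the post process combines those ingredients something like this:

```python
import math

def absorb(color, coeffs, distance):
    # Beer-Lambert transmittance applied per channel
    return [c * math.exp(-a * distance) for c, a in zip(color, coeffs)]

def underwater_post_process(scene_color, scene_distance,
                            camera_depth, camera_forward_z,
                            absorption, turbidity_color):
    """Sketch of the described post process:
    1. absorb the light between object and camera,
    2. add a turbidity haze that fades out the further down you
       look and is itself absorbed by the camera's own depth."""
    direct = absorb(scene_color, absorption, scene_distance)
    # camera_forward_z: -1 looking straight down, +1 straight up.
    orientation = max(0.0, min(1.0, 0.5 + 0.5 * camera_forward_z))
    # Reuse the absorption function with the camera depth so the
    # haze darkens as you dive deeper.
    haze = absorb(turbidity_color, absorption, camera_depth)
    return [d + h * orientation for d, h in zip(direct, haze)]
```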
Thanks for providing another nice challenge. I hope I described my process well enough that the results can also help you out.
PS. I noticed while making this post that I used a different Distance Factor in the light function compared to the post process material. It is more physically accurate to keep those the same, but as this is art, there is nothing stopping you from tweaking the value for each part of the effect.
Arnage, this is incredible, I have to try it right now
I almost feel like I’m cheating; this is way beyond my skill level, but I’m going to study the materials and try to understand what you did
I will be back with my results
Thanks again
That effect looks great Arnage, I’m bookmarking this page just in case I ever decide to do something underwater.
It works wonderfully, it’s beyond my expectations, you did it again
However, I still get heavy banding and strange outlines caused by fringe; the banding could be caused by something in my global post process
About the outlines, should I just avoid fringe, or is it something that can be solved?
Did you experience any of that?
I have made an observation
Objects in the scene are tinted by the turbidity diffusion depending on the camera angle (lerp A to B)
Shouldn’t the distance from them also affect this? For example, from really close up, objects are still heavily tinted
Although I don’t fully understand the whole mechanic behind this, I think the distance from the camera to objects should also be a factor
What do you think?
Sounds to me like you are blending at the wrong time. In your post process material, make sure that the blendable location is set to “Before Tonemapping”. This should solve both the fringe and the banding.
That is indeed correct. That’s what I get for trying to finish the shader and post late at night… As my temp geometry was overly bright, the error wasn’t really visible, but you should indeed add a distance factor to the additive part of the effect to prevent over-brightening objects close to the camera.
For the banding, do you also get the same effect if you make a long object and give it a Z-height based gradient that lerps between 2 color values?
It might be some floating-point inaccuracy; how big is the scene actually, 1:1 scale, or is it scaled down a lot compared to 1 unit = 1 cm?
Another thing to check is any display/GPU settings: are you rendering 32-bit or 16-bit color to your monitor?
Setting my material’s blendable location to “Before Tonemapping” solved it
Thanks Arnage and MissStabby
I’m not exactly sure where the distance factor should be added; as an alpha for the Turbidity Diffusion’s lerp, correct?
Here are some of my results with the above distance factor added. The falloff curve seems to be a little too steep, but this is something I still don’t fully understand mathematically yet
That works. The only thing I would recommend is using an exponential falloff instead of a linear one. I adjusted it to this:
Note that I also inverted both the top/bottom and distance factors, as this allowed me to replace the lerps with multiplications. Not really necessary, but it works a little more intuitively.
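To see why the exponential version behaves better, here are both falloffs written as plain functions (a Python sketch with illustrative names and values; the inverted-factor trick is shown at the end):

```python
import math

def linear_falloff(distance, max_distance):
    # Ramps from 0 to 1 and then clamps: the clamp point creates
    # a visible kink in the gradient.
    return min(distance / max_distance, 1.0)

def exponential_falloff(distance, density):
    # 1 - exp(-d * density): approaches 1 smoothly, no hard edge.
    return 1.0 - math.exp(-distance * density)

def haze_strength(orientation_fade, distance, density):
    # With inverted factors, two fades combine by multiplication
    # instead of nested lerps.
    return orientation_fade * exponential_falloff(distance, density)
```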
Edit: We seem to have posted at the same time. The too-steep falloff you mention is probably caused by the linear falloff I mentioned here, so I may have answered your question before reading it
Btw. the easiest way (at least for me) to better understand the math of a shader is to not only look at the result in the viewport. It’s math, so just throw it at a graphing calculator or Wolfram Alpha and it becomes a lot easier to understand what is happening.
As an example, here’s a plot of that linear falloff compared to an exponential one:
This directly shows why a linear falloff causes a harsher transition.