Thanks for the quick reply. I’ve updated my code as follows (changes in bold):
float Opacity = 0;
float3 Ray = Direction;
**Ray.xy = Ray.xy / Ray.z;**
float3 Step = StepLarge * Direction;
for (int i = 0; i < 100; i++)
{
    if (Opacity >= 1 || Ray.z >= HeightMax || Ray.z <= HeightMin)
    {
        return Opacity;
    }
    float NormalizedZ = clamp((Ray.z - HeightMin) / (HeightMax - HeightMin), 0.0, 1.0);
    **float W = NormalizedZ * 128 * Tiling.z;
    float3 CloudUVWBelow = float3(Ray.x * Tiling.x, Ray.y * Tiling.y, floor(W));
    float3 CloudUVWAbove = float3(Ray.x * Tiling.x, Ray.y * Tiling.y, ceil(W));
    float4 SampleAtPointBelow = CloudTexture.SampleLevel(CloudTextureSampler, CloudUVWBelow, 0);
    float4 SampleAtPointAbove = CloudTexture.SampleLevel(CloudTextureSampler, CloudUVWAbove, 0);
    float OpacitySample = lerp(SampleAtPointBelow.x, SampleAtPointAbove.x, frac(W));**
    float HeightSample = HeightTexture.SampleLevel(HeightTextureSampler, float2(0, 1 - NormalizedZ), 0).x;
    float OpacityAtPoint = OpacitySample * Multiplier * HeightSample;
    if (OpacityAtPoint > 0.0)
    {
        if (Opacity == 0.0)
        {
            Ray -= Step; // first hit: step back once, then refine with the small step
        }
        Step = StepSmall * Direction;
        Opacity += OpacityAtPoint;
    }
    else
    {
        Step = StepLarge * Direction;
    }
    Ray += Step;
}
return Opacity;
However, the banding is still there. The multiplication by 128 in the W calculation is there because my volume texture has 128 depth layers and W isn’t normalized.
Result is largely the same:
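One thing worth double-checking (this is an assumption about how the texture is bound, not something visible in the snippet): `Texture3D.SampleLevel` takes normalized coordinates on all three axes, so passing a raw slice index like `floor(W)` as the W coordinate will clamp or wrap rather than address that slice. If the 128 layers are addressed through normalized coordinates, the slice centers sit at `(slice + 0.5) / 128`. A sketch of the W part under that assumption:

```hlsl
// Sketch, assuming CloudTexture is a Texture3D sampled with normalized
// coordinates (the default for SampleLevel) and 128 depth layers.
const float LayerCount = 128.0;

// Slice index in [0, LayerCount - 1], still unnormalized:
float W = NormalizedZ * (LayerCount - 1) * Tiling.z;

// Normalize to texel centers so floor/ceil land on actual slices:
float WBelow = (floor(W) + 0.5) / LayerCount;
float WAbove = (ceil(W)  + 0.5) / LayerCount;

float4 SampleAtPointBelow = CloudTexture.SampleLevel(CloudTextureSampler,
    float3(Ray.x * Tiling.x, Ray.y * Tiling.y, WBelow), 0);
float4 SampleAtPointAbove = CloudTexture.SampleLevel(CloudTextureSampler,
    float3(Ray.x * Tiling.x, Ray.y * Tiling.y, WAbove), 0);

float OpacitySample = lerp(SampleAtPointBelow.x, SampleAtPointAbove.x, frac(W));
```

Note that if the sampler already uses linear filtering on the W axis, this manual lerp duplicates what the hardware does, and a single sample at `(W + 0.5) / LayerCount` should give the same result.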
Edit:
Since I am normalizing the camera → pixel direction vector, don’t I have to take into account the camera position at some point in the calculation too?
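For reference, the usual ray-march parameterization makes the camera position the ray origin rather than folding it into the direction: the sample point is P(t) = CameraPosition + t * Direction. A minimal sketch of that shape (assuming `CameraPosition` is in the same world space as `HeightMin`/`HeightMax`):

```hlsl
// Sketch: march P(t) = CameraPosition + t * Direction in world space.
// No divide by z here; the origin and direction stay separate.
float3 RayOrigin = CameraPosition;
float3 RayDir    = normalize(Direction);
float  T         = 0.0;

for (int i = 0; i < 100; i++)
{
    float3 P = RayOrigin + T * RayDir;   // current sample position
    // ... sample cloud opacity at P ...
    T += StepLarge;                      // or StepSmall once inside a cloud
}
```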
Edit 2:
I’ve added the CameraPosition to the ray before the loop. I can now actually move through the clouds and enter them, but they sit woefully low. Moving them higher up makes them disappear, most likely because with my small loop count the rays never reach them.
float3 Ray = Direction;
Ray += CameraPosition;
Ray.xy = Ray.xy / Ray.z;
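One possible reason the clouds end up too low (an assumption, since I can’t see the rest of the setup): after `Ray += CameraPosition`, the divide by `Ray.z` rescales the world-space camera offset together with the direction, so the starting point shifts whenever the camera moves. Applying the divide to the direction alone and adding the camera position afterwards keeps the two separate:

```hlsl
// Sketch: flatten only the direction by its z component, as in the
// first version, then offset by the camera position afterwards.
float3 Dir = Direction;
Dir.xy = Dir.xy / Dir.z;        // direction-only divide

float3 Ray = CameraPosition + Dir;   // ray start is unaffected by the divide
```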