Using a custom shading model for per-texel shading

I’m attempting to use 32x32 2D sprites basically as 3D assets with a chunky pixel look. This means all my textures have a consistent texel size, where each texel represents 1 pixel on the source asset. By default Unreal applies light and shadows smoothly, but I want per-texel shading for a more blocky look.

I’ve created a custom shading model and am following various sources on the internet to try to achieve this, mainly:

This fantastic result with Unity forward rendering:
https://forum.unity.com/threads/the-quest-for-efficient-per-texel-lighting.529948/

A post where someone briefly described how they followed the thread above to implement it in a custom deferred rendering engine:
https://www.kickstarter.com/projects/prophetgoddess/anathema/posts/3109106

The main trick seems to be snapping each pixel’s lighting calculation position to the center of the nearest texel in world space, so all pixels within a texel get shaded uniformly.
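Taken on its own, the UV half of that trick is simple enough. Here is a minimal sketch of the snapping (plain HLSL; TexelSize here means the UV footprint of one texel, i.e. 1.0 / texture resolution):

// Snap a UV coordinate to the center of the texel it falls in.
// TexelSize = 1.0 / texture resolution, e.g. 1/32 for a 32x32 sprite texture.
float2 SnapUVToTexelCenter(float2 UV, float2 TexelSize)
{
	return floor(UV / TexelSize) * TexelSize + TexelSize * 0.5f;
}

The harder part is turning that UV shift into a shift of the world-space position that the lighting actually uses, which is what the rest of this post is about.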

From the second article:

The technique above was developed for the Unity forward rendering pipeline, but I realized that I could adapt this technique to our own deferred rendering system by writing the quantized position value to the geometry shader! (We can also do this for normal vectors.) This “tricks” the lighting calculations into thinking that the position values across each pixel corresponding to a texel are exactly the same, which will give the same color output from a light across an entire texel.

The best sense I can make of this is that they did the calculation in the base pass and wrote a “lie” into the GBuffer’s depth render target? I’ve tried to do this in BasePassPixelShader by changing Out.Depth and GBuffer.Depth, but even setting these straight to 0 seems to have no effect in the scene.

I’m hoping someone can put me on the right path.

What does the second article mean by saying they wrote the quantized position into the GBuffer to “trick” lighting? Isn’t the only position stored in the GBuffer the depth? Or are they implying they used a CustomData-like slot and then sampled it in the lighting pass pixel shader?

How do the lights in DeferredLightPixelShader.usf know which position to use when calculating lighting?
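My current mental model, which may well be wrong, is that a deferred lighting pass reconstructs the position from the scene depth plus the pixel’s screen position, roughly like this (a generic sketch with made-up names, not the actual Unreal code):

// Generic deferred world-position reconstruction (illustrative only).
// DeviceZ comes from the depth buffer, ScreenUV is the pixel's 0-1 screen position,
// ClipToWorld is the inverse view-projection matrix.
float3 ReconstructWorldPosition(float2 ScreenUV, float DeviceZ, float4x4 ClipToWorld)
{
	float2 ClipXY = ScreenUV * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f); // UV -> NDC
	float4 HomogeneousPos = mul(float4(ClipXY, DeviceZ, 1.0f), ClipToWorld); // mul order/convention varies by engine
	return HomogeneousPos.xyz / HomogeneousPos.w;
}

If that is right, then the only per-pixel position the lights ever see comes from depth, which is why I assumed I had to lie about depth in the first place.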

I feel like the solution is pretty close, but I’m green with shaders, and Unreal with all of its render features is a bit overwhelming to sift through. Any thoughts or ideas are appreciated.

This question has also been asked here:

Here was my attempt at changing the depth buffer as per the second article:

// align depth to nearest texel center 
#if MATERIAL_SHADINGMODEL_PIXEL
	if (GBuffer.ShadingModelID == SHADINGMODELID_PIXEL)
	{
		// 1.) Calculate how much the texture UV coords need to
		//     shift to be at the center of the nearest texel.
		float2 TexelSize = { 1.0f / 32.0f, 1.0f / 32.0f }; // UV size of one texel, placeholder for a 32x32 texture
		float2 OriginalUV  = MaterialParameters.TexCoords[0];
		float2 CenterUV = (floor(OriginalUV / TexelSize) * TexelSize) + TexelSize / 2.0f;
		float2 dUV = (CenterUV - OriginalUV);

   		// 2a.) Get this fragment's world position
		float3 OriginalWorldPos = In.SvPosition.xyz;

		// 2b.) Calculate how much the texture coords vary over fragment space.
		//      This essentially defines a 2x2 matrix that gets
		//      texture space (UV) deltas from fragment space (ST) deltas
		// Note: I call fragment space (S,T) to disambiguate.
		float2 dUVdS = ddx( OriginalUV );
		float2 dUVdT = ddy( OriginalUV );

		// 2c.) Invert the fragment from texture matrix
		float2x2 dSTdUV = float2x2(dUVdT[1], -dUVdS[1], -dUVdT[0], dUVdS[0])*(1/(dUVdS[0]*dUVdT[1]-dUVdS[1]*dUVdT[0]));

		// 2d.) Convert the UV delta to a fragment space delta
		float2 dST = mul(dSTdUV , dUV);
	
		// 2e.) Calculate how much the world coords vary over fragment space.
		float3 dXYZdS = ddx(OriginalWorldPos);
		float3 dXYZdT = ddy(OriginalWorldPos);

		// 2f.) Finally, convert our fragment space delta to a world space delta
		// And be sure to clamp it to SOMETHING in case the derivative calc went insane
		// Here I clamp it to -1 to 1 (the value the Unity article used),
		// which should be orders of magnitude greater than the size of any texel.
		float3 dXYZ = dXYZdS * dST[0] + dXYZdT * dST[1];
		dXYZ = clamp (dXYZ, -1, 1);

		// 3.) Transform the snapped UV back to world space
		float4 SnappedWorldPos = { OriginalWorldPos + dXYZ, In.SvPosition.w };

		float4 SnappedScreenPosition = SvPositionToScreenPosition(SnappedWorldPos);
		GBuffer.Depth = SnappedScreenPosition.w;

	}
#endif

I’ve made some small progress, but it looks like there’s still a lot left to do. My current method is writing TexCoord[0] into CustomData in the base pass and then doing the rounding calculations in the lighting pass. It almost works, if a little silly.

The right model has default shading, the left model has per texel shading.

// BasePassPixelShader.usf 
if (GBuffer.ShadingModelID == SHADINGMODELID_PIXEL)
{
    float2 OriginalUV  = MaterialParameters.TexCoords[0];
    GBuffer.CustomData.x = OriginalUV.x;
    GBuffer.CustomData.y = OriginalUV.y;
}

// DeferredLightPixelShader.usf
		float Dither = InterleavedGradientNoise(InputParams.PixelPos, View.StateFrameIndexMod8);

		float SurfaceShadow = 1.0f;

		float3 WorldPosition = DerivedParams.TranslatedWorldPosition;

		// BRANCH
		if (ScreenSpaceData.GBuffer.ShadingModelID == SHADINGMODELID_PIXEL)
		{
			// my texture is 512x512, so hardcode the texel size for now
			float2 TexelSize = float2(1.0f/512.0f, 1.0f/512.0f);

			// from the Unity article: snap the UV to the nearest texel center
			float2 OriginalUV  = ScreenSpaceData.GBuffer.CustomData.xy;
			float2 CenterUV = (floor(OriginalUV / TexelSize) * TexelSize) + (TexelSize) / 2.0f;
			float2 dUV = (CenterUV - OriginalUV);
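			// Worked example of the snap (my own numbers, assuming the 512x512 texture above,
			// so TexelSize = 1/512 ≈ 0.001953):
			//   OriginalUV.x = 0.3000 -> floor(0.3000 / 0.001953) = 153
			//   CenterUV.x   = 153 * 0.001953 + 0.000977 ≈ 0.2998  (center of texel 153)
			//   dUV.x        ≈ -0.0002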

			// 2a.) Get this fragment's world position
			float3 OriginalWorldPos = WorldPosition;

			// 2b.) Calculate how much the texture coords vary over fragment space.
			//      This essentially defines a 2x2 matrix that gets
			//      texture space (UV) deltas from fragment space (ST) deltas
			// Note: I call fragment space (S,T) to disambiguate.
			float2 dUVdS = ddx( OriginalUV );
			float2 dUVdT = ddy( OriginalUV );

			// 2c.) Invert the fragment from texture matrix
			float2x2 dSTdUV = float2x2(dUVdT[1], -dUVdS[1], -dUVdT[0], dUVdS[0])*(1/(dUVdS[0]*dUVdT[1]-dUVdS[1]*dUVdT[0]));

			// 2d.) Convert the UV delta to a fragment space delta
			float2 dST = mul(dSTdUV , dUV);
		
			// 2e.) Calculate how much the world coords vary over fragment space.
			float3 dXYZdS = ddx(OriginalWorldPos);
			float3 dXYZdT = ddy(OriginalWorldPos);

			// 2f.) Finally, convert our fragment space delta to a world space delta
			// And be sure to clamp it to SOMETHING in case the derivative calc went insane
			// Here I clamp it to -100 to 100 Unreal units (the Unity version used -1 to 1),
			// which should be orders of magnitude greater than the size of any texel.
			float3 dXYZ = dXYZdS * dST[0] + dXYZdT * dST[1];
			dXYZ = clamp(dXYZ, -100, 100);
			WorldPosition += dXYZ;
		}

		float4 Radiance = GetDynamicLighting(WorldPosition, DerivedParams.CameraVector, ScreenSpaceData.GBuffer, ScreenSpaceData.AmbientOcclusion, ScreenSpaceData.GBuffer.ShadingModelID, LightData, GetPerPixelLightAttenuation(InputParams.ScreenUV), Dither, uint2(InputParams.PixelPos), SurfaceShadow);

		OutColor += Radiance;

There are still plenty of issues with this, though:

  • Doesn’t seem to affect shadows
  • Seems to go crazy on certain meshes, and when the camera is too close to the mesh

The background assets should also just be texel shaded; instead they have lots of strange artifacts. I built these meshes the same way I built the main meshes, so I’m not sure why they behave differently:

Edit: All meshes in this scene use the same texture. The main meshes have their UVs within the first 48 pixels, while the wood floor and brick wall have their UVs at the very bottom of the 512x512 texture. It seems that the further the UVs are from (0,0), the worse the glitches get.

Getting close to the mesh increases the artifacts:

There must be more I need to do than just changing the position and passing it to GetDynamicLighting. Any thoughts appreciated!

Edit:
I ended up getting it working by moving all my calculations into the base pass, where ddx/ddy behaved better. I will post my final solution once I get shadows and normals taken care of.
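For anyone following along before I post the cleaned-up version, the rough shape of the base-pass approach is below. This is a sketch of the idea rather than my exact final code: it assumes MaterialParameters.AbsoluteWorldPosition is available in BasePassPixelShader.usf (the name may differ by engine version), and it abuses CustomData to hand the offset to the lighting pass, which is limited in range and precision.

// BasePassPixelShader.usf (sketch) -- do the texel snap where ddx/ddy see the
// real interpolated attributes instead of values reconstructed from the GBuffer.
#if MATERIAL_SHADINGMODEL_PIXEL
if (GBuffer.ShadingModelID == SHADINGMODELID_PIXEL)
{
	float2 TexelSize = float2(1.0f / 512.0f, 1.0f / 512.0f);
	float2 UV = MaterialParameters.TexCoords[0];
	float2 CenterUV = floor(UV / TexelSize) * TexelSize + TexelSize * 0.5f;
	float2 dUV = CenterUV - UV;

	// Derivatives of the true interpolants (valid here, unlike in the lighting pass)
	float3 WorldPos = MaterialParameters.AbsoluteWorldPosition; // assumption: name may differ
	float2 dUVdX = ddx(UV);
	float2 dUVdY = ddy(UV);
	float3 dPosdX = ddx(WorldPos);
	float3 dPosdY = ddy(WorldPos);

	// Invert the 2x2 UV Jacobian: how far in screen pixels we need to move to
	// produce the UV shift dUV, then turn that into a world-space shift.
	float Det = dUVdX.x * dUVdY.y - dUVdY.x * dUVdX.y;
	float2 dPix = float2( dUVdY.y * dUV.x - dUVdY.x * dUV.y,
	                     -dUVdX.y * dUV.x + dUVdX.x * dUV.y) / Det;
	float3 dXYZ = clamp(dPosdX * dPix.x + dPosdY * dPix.y, -100.0f, 100.0f);

	// One possible hand-off: scale/bias the offset into 0..1 and park it in
	// CustomData, then decode it in DeferredLightPixelShader.usf and add it to
	// WorldPosition before calling GetDynamicLighting.
	GBuffer.CustomData.xyz = saturate(dXYZ / 200.0f + 0.5f);
}
#endif

The lighting-pass side then just reverses the scale/bias ((CustomData.xyz - 0.5) * 200) instead of doing any derivative math of its own. CustomData is stored at low precision (8 bits per channel by default, as far as I can tell), so treat this as a proof of concept; a dedicated GBuffer target would be a cleaner home for the offset.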


Can’t really offer any help other than words of encouragement, this is a really cool effect.

Something I have always wondered, do you think it would be possible to get per-texel SSAO?

Thank you for your support, @arkiras. I’m not really familiar with SSAO; is that a post-process effect? It may be possible to encode something into the GBuffer, perhaps the custom depth mask, and do something interesting with it in post processing.

Any updates?

For anyone interested, I abandoned this a long time ago. Someone DM’d me on Discord today, so I put up a fork with all my WIP.
https://github.com/EpicGames/UnrealEngine/commit/3f535a6e621be7e4c8f0266b0d2c179c6fe5b7cd

Best of luck, show me if you make something cool
