The problem with the FC4 method is that they use a really unusual approach for their normals. They pack the normal, binormal and tangent into a 10:10:10:2 format by doing some filtering and converting the whole basis to a quaternion and back. It eats a ton of extra cycles and bandwidth and is pretty lossy, but it gives the lighting pass the information it needs (it’s method #2 on my list from the last post). It’s pretty much like running three separate gbuffer normal passes, except they pack all three sets of normals into 32 bits and do it in one pass, instead of one set of normals in 24 bits.
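To give a rough idea of what that packing looks like, here’s a minimal sketch of encoding a TBN basis as a quaternion for a 10:10:10:2 target. To be clear, this is my own illustration, not FC4’s actual code; the function name and the single conversion branch are assumptions:

// Hedged sketch: pack an orthonormal TBN basis as a quaternion into a
// 10:10:10:2 render target. Illustrative only -- not FC4's actual code.
float4 PackTBNAsQuaternion( float3 T, float3 B, float3 N )
{
	// Treat the basis vectors as the rows of a rotation matrix and apply
	// the standard matrix-to-quaternion conversion. This is only the
	// trace-positive branch; a robust version needs the other three.
	float w = sqrt( max( 1e-4, 1.0 + T.x + B.y + N.z ) ) * 0.5;
	float3 xyz = float3( N.y - B.z, T.z - N.x, B.x - T.y ) / ( 4.0 * w );

	// w stays >= 0 in this branch, so the decode side can rebuild it as
	// sqrt(saturate(1 - dot(xyz, xyz))). Remap xyz from [-1,1] to [0,1]
	// for the three 10-bit channels; the 2-bit alpha is left for flags.
	return float4( xyz * 0.5 + 0.5, 0 );
}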
If you’re trying to dive into it, I’d suggest checking out BRDF.ush in the shaders folder. Within it, look for the anisotropic GGX section.
// Anisotropic GGX
// [Burley 2012, "Physically-Based Shading at Disney"]
float D_GGXaniso( float RoughnessX, float RoughnessY, float NoH, float3 H, float3 X, float3 Y )
{
	// Disney remap: perceptual roughness -> alpha, per axis
	float ax = RoughnessX * RoughnessX;
	float ay = RoughnessY * RoughnessY;

	// Project the half vector onto the tangent (X) and binormal (Y)
	float XoH = dot( X, H );
	float YoH = dot( Y, H );

	// D(h) = 1 / ( pi*ax*ay * ( (x.h)^2/ax^2 + (y.h)^2/ay^2 + (n.h)^2 )^2 )
	float d = XoH*XoH / (ax*ax) + YoH*YoH / (ay*ay) + NoH*NoH;
	return 1 / ( PI * ax*ay * d*d );
}
You’d have to do some digging around to verify:
I think the H term is the half vector between the light and view directions, so it should be something like H = normalize(L + V), or whatever those variables are labelled as
N would be your normal
NoH would be saturate(dot(N, H)); the dot product is a scalar, so there’s nothing to normalize there, just make sure N and H are unit vectors first
RoughnessX/Y would be your surface roughness values along the tangent/binormal directions
X/Y would be your tangent/binormal vectors
That should be the basic gist; there’s a quick usage sketch below.
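Putting those guesses together, the call would look something like this (variable names like Tangent/Binormal are assumptions, not pulled from the engine):

// Hedged usage sketch -- names assumed, not verified against the engine
float3 H   = normalize( L + V );        // half vector between light and view
float  NoH = saturate( dot( N, H ) );   // N and H are unit vectors
float3 X   = normalize( Tangent );      // tangent direction
float3 Y   = normalize( Binormal );     // binormal direction
float  D   = D_GGXaniso( RoughnessX, RoughnessY, NoH, H, X, Y );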
Something along those lines. You’d have to edit the material so that its roughness input takes a vector2 (making sure to assign the channels to the right variables), as well as deal with the normal nonsense.
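The roughness half of that is just splitting the two channels out; which channel maps to tangent vs. binormal is another thing you’d have to verify (RoughnessInput here is a hypothetical material input):

// Hedged sketch -- channel-to-axis mapping is an assumption
float2 Roughness2 = RoughnessInput.xy;  // hypothetical vector2 material input
float RoughnessX  = Roughness2.x;       // roughness along the tangent (X)
float RoughnessY  = Roughness2.y;       // roughness along the binormal (Y)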
And one more thing: the main reason you need all three vectors is that a regular normal just points outward, and its tangent/binormal would be undefined. Even if you had two of the three, the third could still sit in two positions (left or right, 90 degrees from its counterpart). That’s why you need all three, so the basis can “spin” around the normal and give the anisotropy its angular direction. Deferred rendering only works from the pixels in the buffers and no longer has access to the models to look up their tangents/binormals, hence the three sets of normals. This is why it’s usually just done in forward rendering.
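That two-position ambiguity is also why mesh formats usually store a handedness sign next to the tangent; something like this (the tangent.w convention is a common one, not specific to FC4 or UE4):

// With N and T known, B is only determined up to sign
float3 B = cross( N, T.xyz ) * T.w;   // T.w = +1 or -1 (handedness bit)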