Mip-Map Distance Bias, Cinematic Textures and Advanced Texture Streaming... etc...

Hey sports fans…

I’ve got a series of meshes that join together to form a much larger mesh, with all their pivots being considerably far away from the actual geometry of the mesh itself (and this is increased further once the meshes are scaled to the correct size). The reason is that it makes assembling the final mesh considerably easier and 100% seamless. Since Texture Mip-Maps are streamed in based on distance from the pivot and not the pixel/mesh itself (for obvious reasons), is it possible to offset that distance, or should I just forget having Mip-Maps at all and blaze it with Cinematic LODs and no Mip-Maps?

At the more technical level now… In terms of real-time memory usage, I know that Mip-Maps are beneficial regardless, right? If I’m constantly streaming textures in and out of GPU memory, it’s beneficial to have the Mip-Maps there? Now, would I actually save memory elsewhere by forcing no mip-maps, and potentially cut down the number of writes I’m doing to the GPU? In terms of disk space, my texture file sizes should shrink by almost half, correct? Or does DXT compression not work that way?

Basically, I have an awful lot of 4K textures applied to a very split-up mesh in order to achieve a pretty incredible resolution. The workflow is pretty slow, so I’m trying to optimize the process as I go rather than make a bunch of changes later. So here are some general techy questions:

  • Depending on the content, would a Packed RGBA texture draw faster than two separate textures, one RGB and the other Greyscale?
  • How much does the Blue channel suffer under Normal Map compression? I know the compression sacrifices precision there, since the blue channel often carries very little data, if any at all. Could I potentially store my RG normal information and a fairly detailed greyscale map in the B channel without heavy detail loss? This goes back to my previous question too, as I’m not sure whether it’s quicker to calculate the blue channel with ‘DeriveNormalZ’ or to stream in two textures.
  • 1-Bit Alpha Compression no longer exists in UE4, right? IIRC it used to be a ‘free’ way to get a blocky alpha mask into the engine without any additional cost. If it does still exist, is there a way to soften the edge in the shader? Or, again, am I just better off importing a greyscale alpha texture instead?

Finally (for now), what are the major differences between TC_Alpha and TC_Grayscale/Default? Grayscale gives very nice compression for 8-bit images, but does Alpha do something different that’s better suited to transparency?

Using no mip-maps does decrease memory usage (disk and GPU), but it introduces heavy noise/shimmering/aliasing in return, which is one of the main reasons mips exist in the first place. The savings are also smaller than you’d expect: a full mip chain only adds 1/4 + 1/16 + 1/64 + … ≈ 33% on top of the base level, so dropping it saves roughly 25%, not half.
You can bias the mip-map level for each texture sampler. Maybe create a simple function to adjust the bias based on pixel depth/distance, along the lines of the sketch below.
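For illustration only, here’s a minimal HLSL sketch of that idea, roughly the shape of code you’d put in a Custom material node. The names (Tex, TexSampler, SampleWithDistanceBias) and all the depth/bias values are placeholders I’ve made up for the example, not engine settings:

    // Distance-based mip bias sketch: sample sharper mips up close.
    Texture2D    Tex;
    SamplerState TexSampler;

    float4 SampleWithDistanceBias(float2 UV, float PixelDepth)
    {
        const float NearDepth = 500.0;   // at/below this depth: full bias
        const float FarDepth  = 5000.0;  // at/above this depth: no bias
        const float MaxBias   = -2.0;    // negative bias = sharper mip

        // Fade factor: 1 when close, 0 when far.
        float t = saturate((FarDepth - PixelDepth) / (FarDepth - NearDepth));

        // SampleBias adds our bias to the mip level the hardware picks.
        return Tex.SampleBias(TexSampler, UV, MaxBias * t);
    }

Keep in mind this only changes which mip gets sampled; the streamer still has to have that mip resident, so for streamed textures you may also need to lean on the per-texture settings (LOD Bias, Never Stream) to make sure the sharper mips are actually loaded.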

The default normal map compression stores only the R and G channels (8 bits each) and does a DeriveNormalZ automatically in the shader. It’s possible to use DXT1 (5/6/5-bit) for your normals and store an additional image in the blue channel, but the quality would be worse for both the normals and the greyscale texture.
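For reference, the Z reconstruction is just the unit-length constraint solved for Z. A sketch of what DeriveNormalZ boils down to (my own function name, and assuming the usual 0..1 packed texture range):

    // Rebuild tangent-space Z from the stored X/Y, assuming a unit normal:
    // x^2 + y^2 + z^2 = 1  =>  z = sqrt(1 - x^2 - y^2)
    float3 DeriveNormal(float2 PackedXY)
    {
        float2 xy = PackedXY * 2.0 - 1.0;  // unpack 0..1 -> -1..1
        // saturate() guards against negative sqrt input caused by
        // compression error in the stored channels.
        float z = sqrt(saturate(1.0 - dot(xy, xy)));
        return float3(xy, z);
    }

So the per-pixel cost is roughly a dot product, a subtract and a sqrt, which is generally cheaper than the bandwidth of fetching a second texture; that’s why the two-channel-plus-derive route is the default.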

Oh really? I thought that was purely when you use Uncompressed Normal Maps? That’s a bummer; it looks like I may have to sacrifice some bandwidth there, then.

In terms of the biasing, do you know where the setting is to do that, or whether there’s a function available for materials to bias the mips? I feel that’d have to be an engine-level material node as opposed to a function.

Tasty Bump…

I would also like to know how much more memory-intensive using ‘Vector Displacement Map’ is than TC_Default. UDK used to show you the texture memory usage per file, but I can’t find that in UE4.