Texture resolution vs size vs DXT1

Can someone help me understand how Unreal deals with textures? Specifically RGBA images.

I did some tests: I created an image in Photoshop at 4096x4096, all white, RGBA.
I saved it out in a few very popular formats: TGA, PSD, PNG and JPG.
Each has a different size on disk: TGA and PSD are the largest at 65MB, and PNG is around 125KB.

When I import them into Unreal, they all read in as 10MB, DXT1.

Why?! :slight_smile:

If I decide to use the PNG, which takes so much less space on disk, why does it occupy 10MB in memory at 4K resolution rather than 125KB?

Cheers!

DXT1 is a realtime compression technique that is decoded directly on the GPU, just before sampling. It's a fixed-ratio 1:8 compression method.
TGA, PSD, PNG and JPG are non-realtime compression methods that have to be decoded on the CPU before the GPU can use them. Their storage footprint on disk might be small, but their memory footprint on the GPU is enormous, because they decompress back to raw pixels.
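
To put rough numbers on that 1:8 ratio (a back-of-the-envelope sketch in Python, assuming an 8-bit RGBA source):

```python
# Footprint of a 4096x4096 texture, before mips.
width = height = 4096

# Decoded PNG/JPG/TGA/PSD all become raw RGBA8 in memory:
# 4 bytes per pixel, regardless of their size on disk.
raw_bytes = width * height * 4       # 67,108,864 B = 64 MiB

# DXT1 is fixed at 4 bits (0.5 bytes) per pixel: exactly 1/8.
dxt1_bytes = width * height // 2     # 8,388,608 B = 8 MiB

print(f"raw RGBA8: {raw_bytes / 2**20:.0f} MiB")   # 64 MiB
print(f"DXT1:      {dxt1_bytes / 2**20:.0f} MiB")  # 8 MiB
```

The extra ~2MB on top of that 8MB in Unreal's 10MB figure comes from the mip chain, which is explained further down.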

Thank you, I guess that makes sense. I know nothing about GPU memory. :slight_smile:
Is there a way to override DXT1 to test other compression formats/settings, or is DXT1 the only one available? I also noticed that the Unreal Kite Demo files that were released use DXT5 instead of DXT1. Other than the size, what is the difference between the two?
I've been looking at this: Texture Support and Settings | Unreal Engine Documentation

I guess my main question is: how do I bring down the memory usage when working with 4K textures? What other GPU compression options do I have?
Can I reduce the memory footprint by reducing/disabling the 13 mips Unreal creates?

Thanks,

N!K

GPUs and CPUs are good at different things, and the JPG or PNG types of compression used on CPUs don't work on GPUs: the GPU needs constant-time random access to any pixel while sampling, which variable-rate formats like JPG can't provide. Instead, we have "block" type compressors, like DXT, for textures.

DXT1 compresses color information into 4 bits per pixel, and has a maximum color resolution of 16 bits (like old "high color" display modes.) DXT1 can also encode the special color "black, transparent," so it can be used for certain kinds of pre-multiplied alpha. This is about as much as you can compress a texture and still retain some semblance of useful color information.
DXT5 uses DXT1 for the color part, but adds another 4 bits per pixel of alpha. This gives you better transparency. Also, with a bit of shader magic, you can use the "alpha" bits for something other than alpha, to get a bit higher quality than raw DXT1; normal maps are a great example.
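
To see where those numbers come from (a sketch based on the standard DXT block sizes: both formats encode 4x4 pixel blocks, DXT1 in 8 bytes and DXT5 in 16):

```python
# In-memory size of a block-compressed texture (no mips).
def block_compressed_size(width, height, bytes_per_block):
    blocks_x = (width + 3) // 4   # blocks needed to cover the width
    blocks_y = (height + 3) // 4  # blocks needed to cover the height
    return blocks_x * blocks_y * bytes_per_block

w = h = 4096
print(f"DXT1: {block_compressed_size(w, h, 8) / 2**20:.0f} MiB")   # 8 MiB
print(f"DXT5: {block_compressed_size(w, h, 16) / 2**20:.0f} MiB")  # 16 MiB
```

So switching to DXT5 exactly doubles the footprint for the same resolution.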

DXT compression (also known as S3TC) has been around for a long time (15 years at least!) For more modern graphics APIs, there are newer compression formats that improve on DXT in certain ways, such as using 24 bits or more for the base color resolution (useful for HDR rendering) and encoding things like "edges" within compressed blocks of pixels.
But those formats generally use slightly more storage to achieve higher quality, rather than trying to squeeze colors down to less than 4 bits per pixel.

MIP maps increase the size of the texture by about 30%, but removing MIP maps is a really bad idea: if the object with that texture is in the distance, the graphics card still needs to read across the whole big texture, thrashing the texture cache and tanking performance. MIP maps are very well worth it for rendering performance!
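
That ~30% figure falls out of a geometric series: each mip level has half the width and half the height of the one above it, so a quarter of the pixels, and the whole chain converges to 4/3 of the base level. A quick sketch:

```python
# Total pixels in a full mip chain, from the base level down to 1x1.
def mip_chain_pixels(size):
    total = 0
    while size >= 1:
        total += size * size  # this mip level's pixel count
        size //= 2            # next level halves each dimension
    return total

base = 4096 * 4096
full = mip_chain_pixels(4096)
print(f"mip overhead: {(full / base - 1) * 100:.1f}%")  # ~33.3%
```

Disabling the 13 mips would save that extra third, but at the rendering cost described above.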

When it comes to 4096x4096 textures, the best way to save more space is to drop them to 2048x2048!
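
Putting the pieces together (a rough estimate: DXT1 at 0.5 bytes per pixel, plus the ~1/3 mip overhead), halving the resolution quarters the memory:

```python
# Approximate in-memory size of a DXT1 texture with a full mip chain.
def dxt1_with_mips(size):
    base_bytes = size * size * 0.5   # DXT1: 0.5 bytes per pixel
    return base_bytes * 4 / 3        # full mip chain adds ~1/3

print(f"4096: {dxt1_with_mips(4096) / 2**20:.2f} MiB")  # ~10.67 MiB
print(f"2048: {dxt1_with_mips(2048) / 2**20:.2f} MiB")  # ~2.67 MiB
```

That ~10.67MB for the 4096 texture lines up with the ~10MB Unreal reported on import.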

When you double-click an imported texture in the Unreal Editor, you will see a number of "texture usage" settings and a number of "texture compression" settings. You can play around with those if you want. However, the goal there is generally to get higher quality, not smaller on-card storage. There's a reason modern graphics cards have 8 GB of texture memory! It's also a reason most games use smaller textures for less-important, smaller objects. If an object will never be taller than 40 pixels on the screen, how much texture do you really need? :slight_smile:

Very well explained… Thank you very much!