Why do I need to reduce the size of the original textures if Texture Streaming works anyway?

If I understand correctly, the purpose of the Texture Streaming system is to take arbitrarily large textures, generate a chain of progressively smaller mips from them, and then load into video memory (the streaming pool) only the mip that looks adequate for the object's size on screen.

If so, why can’t I just set Power of Two for 4K textures and do nothing else? Why do I need to manually limit the resolution of the source textures? After all, in theory, even an 8K texture should end up as the same small mip as a 2K one if the object occupies the same area on screen.

Example: let’s say the engine decides to render a tree texture in the background. The source texture in the project has a resolution of 4096x4096 and takes 20 MB. The tree in the frame is small, however, so Texture Streaming loads into video memory only its reduced mip, which has a resolution of 256x256 and takes just 43 KB.
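The sizes in that example line up with GPU block compression (DXT/BC formats). A rough back-of-envelope sketch — my own helper, not engine code, and the exact compression format of the textures above is an assumption:

```python
import math

def bc_mip_chain_bytes(size: int, bytes_per_block: int) -> int:
    """GPU-compressed size of a square texture plus its full mip chain.

    Block-compressed formats (DXT/BC) store 4x4 pixel blocks:
    BC1/DXT1 uses 8 bytes per block (~0.5 B/px), BC3/DXT5 uses 16 (~1 B/px).
    """
    total = 0
    while size >= 1:
        blocks_per_side = math.ceil(size / 4)
        total += blocks_per_side * blocks_per_side * bytes_per_block
        size //= 2
    return total

# A 4096x4096 BC3 texture with its full mip chain is ~21.3 MB --
# roughly the "20 MB" in the example above.
print(bc_mip_chain_bytes(4096, 16) / 2**20)

# A 256x256 BC1 mip (plus its own smaller tail mips) is ~42.7 KB --
# roughly the "43 KB" mip that actually gets streamed in.
print(bc_mip_chain_bytes(256, 8) / 2**10)
```

So streaming really does cut the per-object cost by orders of magnitude when the object is small on screen — which is exactly what makes the question reasonable.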

However, if I use 4K textures everywhere, the required streaming pool size skyrockets. But why? After all, only a lightweight mip is loaded, not the entire 4K texture.


Packaging is one thing. If you don’t offset the mips (e.g., with a LOD bias), all mips get packaged with your game, leading to a much bigger download.

If you work with smaller textures, the maximum the streaming system can ever load gets smaller too, so less bandwidth goes through the streaming system.
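That maximum matters because the streamer can, in the worst case, decide it needs the top mip of many textures at once. A quick illustrative calculation (my own numbers, assuming BC3 at roughly 1 byte per pixel, not an engine measurement):

```python
# Worst-case streaming-pool pressure if the top mip of every visible
# texture is requested at once (BC3, ~1 byte per pixel, assumed).
def top_mip_mb(size: int, bytes_per_pixel: float = 1.0) -> float:
    return size * size * bytes_per_pixel / 2**20

n_textures = 100  # hypothetical scene
for res in (1024, 2048, 4096):
    print(f"{res}px sources: worst case {n_textures * top_mip_mb(res):.0f} MB")
# 1K sources: 100 MB; 2K: 400 MB; 4K: 1600 MB -- 16x more than 1K.
```

So even though each small on-screen object only needs a small mip, capping source resolution caps what the streamer is ever allowed to pull in when objects fill the screen.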

Also, streaming in the editor doesn’t work the same way as in a packaged product. In the editor the buffer just fills up, and that’s it. In a packaged game the buffer is managed properly, so it’s not an issue.
