Automatic Power of 2 Cropping

This is about my first experience working with Blueprints in Unreal; I’m just getting started with Unreal in general.

In order to handle mipmaps efficiently, I understand texture dimensions should be a power of 2. But I have a bunch of pre-existing images of various sizes. So I am thinking of pulling them all in and using the “Pad to power of two” mode on each texture, which would let me then use the “SimpleAverage” Mip Gen Settings to generate mipmaps. Then I need to figure out how to scale the result so that the original (un-padded) portion of the texture fills the rectangular geometry. To do this I am creating a material blueprint like the one below, from which I can create many instances to more easily scale each texture to its original size. I am wondering if there is a simpler way to do this, or if this is advisable at all.
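As an aside, applying those two settings to many textures by hand could get tedious. Here is a minimal editor-Python sketch that applies both to every texture selected in the Content Browser, assuming the Python Editor Script Plugin is enabled; the property and enum names are my reading of the UTexture API and may vary by engine version:

```python
import unreal

# Apply "Pad to Power of 2" + "SimpleAverage" mip generation to every
# texture currently selected in the Content Browser.
for asset in unreal.EditorUtilityLibrary.get_selected_assets():
    if not isinstance(asset, unreal.Texture2D):
        continue
    asset.set_editor_property(
        "power_of_two_mode",
        unreal.TexturePowerOfTwoSetting.PAD_TO_POWER_OF_TWO)
    asset.set_editor_property(
        "mip_gen_settings",
        unreal.TextureMipGenSettings.TMGS_SIMPLE_AVERAGE)
    unreal.EditorAssetLibrary.save_loaded_asset(asset)
```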

My main frustration/concern is in determining the texture’s original dimensions which I currently implemented by just adding two parameters where I’ll have to manually enter the original width and height.

Mipmaps only really become useful when you have a lot of the same texture, or very high-res textures that are moving with respect to the camera. Unless you have this stuff all over the place, the best thing would be to just not worry about it. It won’t affect performance.

Does that statement apply to mobile (Quest 2) development?

I’ve never developed for mobile, so I can’t be totally sure. But I assume it’s still an optimization thing. Unless there are hundreds of these on billboards all over the level, it’s better to put the effort into an area that’s really loading the system.

I don’t know how to determine what’s the biggest load on the system, and actually, there probably isn’t much of an issue yet. I have an almost empty project and was just trying to do things correctly from the start. Things are running fine (except there are no shadows when running natively on the Quest 2 for some reason). But regardless of the optimization issues, to help me understand blueprints and textures better in general, do you know how the depicted blueprint could be improved? Is there a way to retrieve the un-padded size of a texture? Are the TexCoord and Multiply nodes both necessary in order to get adjustable UV coordinates? Is it truly impossible to get a Vector2 parameter, thus necessitating 2 scalar parameters instead?

To read the texture size, use

[screenshot of the node setup]

Yes, you need the TexCoord node and the Multiply to make the UV adjustable in a material instance. Otherwise you’ll have to go back and re-compile the material.

One parameter is the easiest:

[screenshot of the one-parameter setup]

But if you want to adjust x and y independently, then you need two.

It’s up to you whether you use two scalar parameters and append them, or use a vector. The problem with the vector is that it has the extra dimensions, and it always looks like you’re adjusting a color.

I can’t find a “Get Texture Size” node type when attempting to add to the blueprint. How did you get that? Also, does it get the original size as opposed to the padded size? I was using a TextureProperty node with value “Texture Size” to get the texture size, but that only gets the padded size.

I made a variable of type ‘texture object reference’. Which engine are you running? If you pad to power of two, it will give you the padded value, because the texture has been adjusted.
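If you’d rather check sizes from an editor Python script than in a Blueprint, here is a minimal sketch; the asset path is a placeholder, and it assumes the Blueprint size getters are exposed to Python in your engine version:

```python
import unreal

# Placeholder asset path -- substitute one of your own textures.
tex = unreal.EditorAssetLibrary.load_asset("/Game/Textures/T_MyImage")

# These mirror the Blueprint size getters; once "Pad to Power of 2" is
# applied they report the padded dimensions, not the import dimensions.
print(tex.blueprint_get_size_x(), tex.blueprint_get_size_y())
```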

I also just read that using power of two for mobile is good practice. Not because it runs faster, but because the libraries used aren’t as flexible.

Not sure how much I believe that, as you can definitely show a non-square image on a phone.

It might be easier to resize the textures outside the engine if you have to.

I just upgraded the project to 4.27 today. I was running 4.26.2. When I am in the material blueprint editor (as I showed earlier), I can add a “TextureProperty” node and get the padded texture size that way. But I also tried creating a new blueprint to help me create these actors more uniformly. I added a Material parameter to the blueprint, and from there I can’t access “TextureProperty” nor figure out how to retrieve the texture from the specified material or get the size of a texture.

Some pics here.

Something you’re confused about (which I admittedly was in the beginning) is the difference between a blueprint and a material.

They are not the same thing; they just both have nodes in them.

So, in a blueprint ( not a material ), you can make a texture object variable and call that node, but not in a material.

For instance, open a level blueprint and you’ll see one set of available nodes; in a material, you’ll see a different set. I think this is why Epic set it up this way.

Either way, you’re only going to get the size of the texture, which changes when you pad, I’m afraid.

Inside a blueprint: [screenshot]

  1. You shouldn’t be doing cleanup work on assets during the game; that’s going to add needless processing, which is especially bad considering this is for mobile.
  2. You can automate this with blueprints: just draw each texture, stretched, to a render target texture and convert it to a regular texture. Or you can automate it in another program outside Unreal (see the sketch after this list).
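For the outside-Unreal route, here is a minimal Python sketch using Pillow (`pip install Pillow`); the folder names are placeholders. It stretches every PNG in a folder up to the next power of two per axis, so the textures can be imported with mips and no padding or material math:

```python
from pathlib import Path
from PIL import Image

def next_pow2(n: int) -> int:
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

src = Path("source_images")   # placeholder folder names
dst = Path("pow2_images")
dst.mkdir(exist_ok=True)

for path in src.glob("*.png"):
    img = Image.open(path)
    w, h = img.size
    # Upsample only (never downsample), so no information is lost.
    img = img.resize((next_pow2(w), next_pow2(h)), Image.LANCZOS)
    img.save(dst / path.name)
```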

I don’t understand what constitutes cleanup work on assets during the game. I thought once the game was running, all textures were basically static, and my material was basically picking coordinates of the corners of a texture to draw; I don’t see any cleanup happening there.

The cleanup (or rather fixup) is that you are stretching the textures to a power of 2.

The way you are doing it, you are “fixing” the textures in the material, which runs in the game every frame, per pixel. In other words, every frame in the shipped game on the user’s mobile device, the material is going to be stretching the textures to their intended size. That is not exactly expensive, but it’s inefficient, since it’s something that (1) only needs to happen once, and (2) can be done beforehand.

You know what they need to be beforehand, so why not fix them now rather than on every user’s device? You are wasting the user’s device’s processing power and battery by doing needless work. Understand that without the need for stretching in the material, your material would just be a single texture sample node.


Looking at it again, it doesn’t look too bad. But nonetheless, it’s still better to stretch the textures to their known intended size beforehand rather than in the game.

I’m not stretching the textures. I just selected the “Pad to Power of 2” option within the Unreal texture definition. I assume this happens at build time, not run time. It applies to the texture, not the material, so it doesn’t run every frame. I think you suggested stretching the textures, but I’m not doing that.

After further thought I see you might be referring to the logic in the material as “stretching” but that’s trivial because textures are constantly resized to fit the quadrilateral on the screen where they’re being drawn anyway. The only extra work being done, I think, is calculating new corner points for the source of the quadrilateral. This is nothing next to other work going on in a typical game.

Ok, look at this:


Left is a padded texture, right is a full texture, and both power of 2 (1024x1024). They are displayed at mip level 4. Original image is from: Penguin poop creates a buttload of laughing gas, researchers find | Ars Technica.

Notice that the padded texture (1) has less quality than the full texture, because the image takes up fewer pixels, and (2) has a red border, because the padding color is bleeding into the original image.

Of course the same MIP level is going to look worse in the one that has fewer pixels of actual image. I think the relevant questions here are:

  1. Is the one on the left more efficient because it’s sampling fewer source pixels (texels)?
  2. Can (or should) the mip level selection be based on the source resolution as well as the distance, rather than the distance of the image alone, in order to maintain a consistent texel resolution in the rendered output (texel:pixel ratio)?

It may be ideal to pre-stretch all my images but the problems I have with that are:

  • It will be hard to document/remember the correct aspect ratio for each image that I want to use. There may be hundreds of these images.
  • Stretching an image twice generally results in lower quality than stretching once.

  1. No; the source pixel count is the same, because both textures are the same resolution. It’s actually (slightly) more expensive, because there’s math involved in calculating the UV. The left one looks like your material; the right looks like this:

    [screenshot: a single texture sample node]

  2. Possibly, but that would require more math. Also, don’t forget the color bleed issue.
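To give a rough sense of the “more math” in (2): standard mip selection is already a log2 rule over how many texels one screen pixel covers, so biasing by source resolution would mean feeding an adjusted ratio into the same rule. A toy illustration of that rule (not engine code):

```python
import math

def mip_level(texels_per_screen_pixel: float) -> float:
    # Standard LOD rule: mip 0 at ~1 texel per pixel,
    # one level higher every time that ratio doubles.
    return max(0.0, math.log2(texels_per_screen_pixel))

print(mip_level(1.0))  # 0.0 -- texture shown near 1:1
print(mip_level(4.0))  # 2.0 -- minified 4x, use mip 2
print(mip_level(0.5))  # 0.0 -- magnified, clamps to mip 0
```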

It’s being applied to a mesh, so shouldn’t that make it not matter? The result is the same as what you’re already doing. But if you actually need to keep that information, just store it with the texture or in the texture’s name.
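Here is a sketch of the naming idea as editor Python; run it before enabling padding, while the reported sizes are still the originals. The rename call and the size getters are my assumptions about the scripting API, so treat it as a starting point:

```python
import math
import unreal

# Bake each selected texture's original aspect ratio into its asset
# name, e.g. T_Foo -> T_Foo_16x9.
for tex in unreal.EditorUtilityLibrary.get_selected_assets():
    if not isinstance(tex, unreal.Texture2D):
        continue
    w, h = tex.blueprint_get_size_x(), tex.blueprint_get_size_y()
    g = math.gcd(w, h)
    old_path = tex.get_path_name().split(".")[0]   # e.g. /Game/Tex/T_Foo
    new_path = "{}_{}x{}".format(old_path, w // g, h // g)
    unreal.EditorAssetLibrary.rename_asset(old_path, new_path)
```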

It is only being stretched once. The texture is just being stretched/upsampled to the next power of two.


Forgot to mention I automated it using a script. It does it for all selected textures, so you can do it on all of your textures quickly.

But only the stretched texture is drawing all the pixels. The other one is only drawing pixels from the unpadded area.

If I keep the original texture and padding in the project I can still see what the original aspect ratio was and make sure that I apply the same ratio to the object I’m creating so the final result doesn’t look distorted. If I pre-stretch everything, I’ve lost all that information. I can store these as properties of the material instances that use the textures if I can remember them, which will be easier if I apply them to the name as you suggest.

By the time the user sees it, it has been scaled/stretched twice: once from my pre-build stretching, and once as it is drawn onto the screen.

It’s not going to just get faster (at least not noticeably), though maybe adjacent pixels sharing the same texels could help.

Anyway, seeing how simple your material is, I think the performance difference will be negligible whichever way you go. I was just thinking since it’s for mobile, you’d want to optimize as much as possible.

Ok, I see. Note: if you need just the ratio, not the resolution, you can store it as a single float. Also, if you go with the padded texture, you have a bunch of unused pixels, so you could store things there, too. But accessing that in the material would take another texture sample.

The pre-stretching and stretching in the material are the same stretch (they’re doing the same thing); the only difference is when they’re being done.

Also, stretching is only a problem if there’s information being lost. Since it’s being upscaled (not downscaled), no information is lost; it is being “stretched” across a larger area (more pixels) rather than being “compressed” into a smaller area (fewer pixels).