Automatic Power of 2 Cropping

@midgunner66 -


Observe how much more distorted the circle edges on the right appear due to having stretched them twice instead of once.

It looks like you did that in Paint. Paint doesn’t filter the image when you stretch it; you should do it in another program that does. This is what it looks like using my blueprint script:
[image: comparison]
Also, notice how on the full texture the circle reaches the edges completely (same as in the original image), whereas on the padded texture it falls short by one texel; this is the color bleed & mip problem I was talking about.

My point is that it is being stretched twice, and that is different from being stretched once. You can see it in your example too. When you say, “The pre-stretching and stretching in the material are the same stretch,” it sounds like there should be no difference, but there is. Better interpolation can reduce that impact, but it’s still there, and because the interpolation gets applied twice, the image ends up fuzzier after the second stretch. There are trade-offs, so pre-stretching the image might not always be the better choice. In my case I am starting with relatively low-resolution images that will sit in a rectangular frame anyway, so the edge bleed is of little consequence (I don’t need the edges to match anything), and the distortion, aliasing, and fuzziness are the more significant concerns.

I am, however, interested in the prospect of having multiple images share the same texture. Can that be done (UV mapping?) within Unreal? I think I did this a long time ago in Blender, but I’m not sure whether there’s a good/easy way to do it entirely within Unreal.

FWIW, my project is making a sort of VR (Quest 2) museum of artwork created by a family member. I’m doing this partly as something to try while familiarizing myself with Unreal. The source material is at:
http://maryann.enigmadream.com/


Haha, just noticed that in my example. When zooming in close, the pre-stretched one is fuzzier:
[image: comparison]

It’s caused by a mixture of stretching and filtering (I set it to nearest filtering and that fixed the fuzziness, but the circle wasn’t as smooth anymore). So it is better to use the padded one. Makes sense, since the original image literally stays the same resolution, lol.

So, really, the only problems left are the mip issue (which is being over-estimated because the padded texture is larger than what’s actually being sampled) and the border issue (which you’ve said you don’t mind).

Though, to solve your original problem:

There are many ways I can think of, but I think the simplest is to store it in the name, then set the material parameter at runtime using the name. Pretty simple, and you can automate it with blueprints too, which will save you some manual work.
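
Just to make the idea concrete, here’s a minimal sketch of the arithmetic in plain Python (not the blueprint itself), assuming “it” here is the image’s original size and assuming a hypothetical naming convention where that size is appended to the asset name; the blueprint would do the same math and feed the result into the material’s UV-scale parameters:

```python
import re

# Hypothetical naming convention: the original size is appended to the asset
# name, e.g. "Painting_Lighthouse_640x480". The blueprint would do the same
# arithmetic at runtime to drive the material's UV-scale parameters.
def uv_scale_from_name(asset_name: str, padded_width: int, padded_height: int):
    """Parse "<name>_WxH" and return the UV scale needed to sample only the
    original (unpadded) region of a padded power-of-two texture."""
    match = re.search(r"_(\d+)x(\d+)$", asset_name)
    if not match:
        raise ValueError(f"No size suffix found in '{asset_name}'")
    width, height = int(match.group(1)), int(match.group(2))
    return width / padded_width, height / padded_height

# A 640x480 image padded up to 1024x512 only occupies 62.5% x 93.75% of it.
print(uv_scale_from_name("Painting_Lighthouse_640x480", 1024, 512))  # (0.625, 0.9375)
```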

Also, is there any reason you’re using squares instead of just powers of two?

That would be a more efficient use of the textures, so you can if you want, and you can automate this with blueprints as well. Though you will only be able to put textures together if you don’t have to stretch or squash them to make them fit. But that shouldn’t be too much of a problem, especially since they’re low resolution.

I wouldn’t know where to begin applying UV mappings within Unreal let alone automating the whole process of combining textures. Any pointers?

Oops, forgot you also need to set the UVs. I guess you can use Customized UVs for that (I would suggest using them for the padding idea as well, since they’re more performant: they run on the vertices, not the pixels). But I only see this being useful for rectangular shapes (or shapes matching the artwork).

As for automating the rest, you would just draw everything to a render target, then save the render target as a new texture. But you would have to make sure all the images can fit together in the same texture without resizing, which will require more scripting than you probably want to do.
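
For what it’s worth, here’s a rough sketch of the fitting check I mean, outside Unreal entirely; it’s a simple shelf packer, and the atlas size and image sizes are just made-up examples:

```python
# Given image sizes, try to place each one untouched inside a fixed
# power-of-two atlas and return pixel offsets, or None if they do not all fit.
def shelf_pack(sizes, atlas_w, atlas_h):
    """sizes: list of (width, height). Returns list of (x, y) offsets or None."""
    # Sort tallest-first so each shelf wastes as little height as possible.
    order = sorted(range(len(sizes)), key=lambda i: sizes[i][1], reverse=True)
    offsets = [None] * len(sizes)
    shelf_x, shelf_y, shelf_h = 0, 0, 0
    for i in order:
        w, h = sizes[i]
        if shelf_x + w > atlas_w:       # start a new shelf below the current one
            shelf_y += shelf_h
            shelf_x, shelf_h = 0, 0
        if shelf_y + h > atlas_h or w > atlas_w:
            return None                 # does not fit without resizing
        offsets[i] = (shelf_x, shelf_y)
        shelf_x += w
        shelf_h = max(shelf_h, h)
    return offsets

# Example: three paintings into a 1024x1024 atlas.
print(shelf_pack([(640, 480), (300, 400), (350, 500)], 1024, 1024))
# [(350, 0), (0, 500), (0, 0)]
```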

I don’t think packing a bunch of textures together is necessary (though I’ve never worked with VR), so I think just setting all the textures to padded (power of two) will be enough. You can automate this by running over all the textures and setting their power-of-two mode; here’s an example:


This just runs over all the selected assets, then plays a sound and prints “done” when it’s finished; the delay is in there so the editor doesn’t completely pause while doing it. This is just one way of doing it; there are many others (for instance, doing all the assets in a folder).
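
If you’d rather script it than wire up nodes, roughly the same thing can be done with editor Python. This is only a sketch, and the property/enum names (power_of_two_mode, PAD_TO_POWER_OF_TWO, never_stream) are my assumptions about how the texture settings are exposed to Python, so double-check them against your engine version:

```python
# Editor-only sketch: pad every selected texture to a power of two and save it.
# Property/enum names below are best guesses at the Python-exposed names of the
# texture settings shown in the editor; verify before relying on this.
import unreal

def pad_selected_textures():
    for asset in unreal.EditorUtilityLibrary.get_selected_assets():
        if not isinstance(asset, unreal.Texture2D):
            continue
        asset.set_editor_property(
            "power_of_two_mode",
            unreal.TexturePowerOfTwoSetting.PAD_TO_POWER_OF_TWO)
        asset.set_editor_property("never_stream", False)  # keep streaming/mips usable
        unreal.EditorAssetLibrary.save_loaded_asset(asset)
    unreal.log("done")  # rough equivalent of the blueprint's print/sound

pad_selected_textures()
```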

Cool! I think this will be very instructive whether I decide to use it or not. I have three related questions:

  1. Is that “script” (I assume “script” is the term you are using to refer to such a node arrangement?) something that would run during startup / run time (depending on when the event is triggered), not at development or build time?
  2. Is there a good reason to automate this power of two padding rather than simply selecting power of two padding at development time?
  3. If I did want to try to manually combine my rectangular textures into larger power-of-two spaces, is there a way within Unreal to extract the images from the pre-combined texture to show them as individual images within a material?
  1. This blueprint/script (both terms work, but blueprint is probably better to use) runs in the editor from a button (a Call In Editor event), so it happens at development time. You can also use a blutility, which lets you add a button to the context menu (I would link the documentation, but the page is down :confused:).
  2. Since it’s just one setting, no, not really, lol. And I just realized you don’t need to do this at all: you can just use the property matrix:

    Make sure to also enable mips and disable “Never Stream”.
  3. Yeah, you’d do it the same way you did it in your first post (or, as in my last post, using custom UVs). The only difference is you’d also need to store an offset so you can sample different parts of the texture.

So without an interactive visual utility natively available in Unreal, it seems like it would be easier to just use Blender or something to create the combined textures and apply them to rectangles that pick up the appropriate portion with a UV map. Is that true?

My first post was scaling the image from the texture, but I haven’t tried to create the combined texture or pick up portions of it not aligned with the origin.

Yeah, that would be easier and is how you’d normally do this. Automation would just speed up the process.

The only difference is that after you scale the UVs, you move them to the correct location on the texture. Or, a simpler way to think about it: you store the position of the top-left pixel and the bottom-right pixel, and that forms the box where you sample the texture.
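
In other words, the sampling box is just a multiply-add on the UVs. Here is a tiny sketch of that math in plain Python, standing in for the multiply/add nodes you’d use in the material; the corner values are made up for illustration:

```python
# Map the mesh's 0-1 UVs into a sub-rectangle of the shared texture, given the
# top-left and bottom-right corners of the image's region (in 0-1 UV space).
def remap_uv(u, v, top_left, bottom_right):
    u0, v0 = top_left
    u1, v1 = bottom_right
    return (u0 + u * (u1 - u0), v0 + v * (v1 - v0))

# The bottom-right corner of the quad lands on the bottom-right of the region.
print(remap_uv(1.0, 1.0, (0.0, 0.0), (0.625, 0.9375)))  # (0.625, 0.9375)
```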