Is it possible to get a lower-resolution version of the SceneTexture's PostProcessInput0?

Hello everyone,

As a newcomer to the Unreal Engine, I’m loving the possibilities it offers for creating immersive games and stunning visual effects. However, as I delve deeper into the engine’s capabilities, I find myself facing questions that are not always easy to answer.

One such question is whether it’s possible to get a lower-resolution version of the SceneTexture’s PostProcessInput0. From my understanding, this texture contains the final output of the scene before any post-processing effects are applied. Can I somehow scale down this texture and continue my Blueprint with the smaller texture/image?

Or can I get a mip map of the SceneTexture?

Thanks in advance,
Patrik

Downscaling any texture can be as simple as sampling it at every other pixel (or even more sparsely) instead of every pixel. You can do this using the UV input of the sampler.
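
To make that concrete, here is a minimal sketch of the UV math in plain C++ (the struct and function names are illustrative, not engine API; in the material editor you would build the same arithmetic out of nodes or a Custom node):

```cpp
// Minimal sketch of UV snapping for downsampling, assuming a view of
// ViewWidth x ViewHeight pixels. Factor = 2 samples every other pixel,
// Factor = 4 every fourth, and so on.
#include <cmath>

struct UV { float X, Y; };

UV SnapUV(UV In, float ViewWidth, float ViewHeight, float Factor)
{
    // Convert to pixel coordinates, snap down to the coarse grid,
    // then convert back to 0-1 UV space.
    UV Out;
    Out.X = std::floor(In.X * ViewWidth  / Factor) * Factor / ViewWidth;
    Out.Y = std::floor(In.Y * ViewHeight / Factor) * Factor / ViewHeight;
    return Out;
}
```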

Ohh, that’s interesting! Thank you so much for the advice. I will give it a try. Have a lovely weekend!

This is exactly what I’m looking for as well!

I’ve been trying to understand how to sample from the SceneTexture’s PostProcessInput0.
I was thinking the only option would be to translate SceneTextures to Texture Objects, but I might have it all wrong.

Any help would be appreciated.

The scene texture node itself is the sampler. It has a UV input you can use.

Hi again @BananableOffense,

Thank you so much for pointing me in a direction last week. I’ve been doing some tests and I think I’m one step closer, but there is one thing I don’t quite understand.

Here is the post processing material:

And I’m able to scale down the image using the following Blueprint:

Here is the result:

Is it possible to create a new texture from this? I want to do further post-processing on the smaller texture.

Thank you so much in advance,
Patrik

It’s not enough to just divide, because UVs range from 0-1. If you divide, then you will only fill the 0-0.5 space.
You actually need to have multiple repeated coordinates to increase the apparent size of a pixel. Imagine you have a grid of pixels:

(0,0)(1,0)(2,0)
(0,1)(1,1)(2,1)
And so on…

To downscale, you would want to sample the same coordinate for multiple pixels. So the end result is:
(0,0)(0,0)(2,0)(2,0)
(0,0)(0,0)(2,0)(2,0)
(0,2)(0,2)(2,2)(2,2)
(0,2)(0,2)(2,2)(2,2)

Each pixel looks twice as big because in each direction two screen pixels sample the same coordinate, so a 2x2 block shares one sample (see the check below).
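
If you want to verify this, here is a small self-contained check that reproduces the table above, using the same floor-snap math as the earlier sketch (a 4x4 view and a factor of 2 are assumed):

```cpp
// Self-contained check of the 2x downscale pattern described above.
#include <cmath>
#include <cstdio>

struct UV { float X, Y; };

// Same floor-snap math as the earlier sketch.
UV SnapUV(UV In, float W, float H, float Factor)
{
    return { std::floor(In.X * W / Factor) * Factor / W,
             std::floor(In.Y * H / Factor) * Factor / H };
}

int main()
{
    const float W = 4.0f, H = 4.0f, Factor = 2.0f;
    for (int y = 0; y < 4; ++y)
    {
        for (int x = 0; x < 4; ++x)
        {
            // Sample at pixel centers, as the rasterizer does.
            UV Out = SnapUV({ (x + 0.5f) / W, (y + 0.5f) / H }, W, H, Factor);
            std::printf("(%g,%g)", Out.X * W, Out.Y * H);
        }
        std::printf("\n");
    }
    // Prints:
    // (0,0)(0,0)(2,0)(2,0)
    // (0,0)(0,0)(2,0)(2,0)
    // (0,2)(0,2)(2,2)(2,2)
    // (0,2)(0,2)(2,2)(2,2)
}
```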

Thank you for your answers! Really appreciated.

Unfortunately, I think the information is more aligned with what this question is about: being able to manipulate each pass/scale individually and then bake them at the end.

Thank you so much for your answer @BananableOffense. Yes, I think you are right @wiquid_78… I’m able to “pixelate” the post-process material (see images below), but what I want to do is reduce the number of pixels in the texture. If the original screen texture is 1000x1000 px, then the scaled-down version should be 500x500 px… then 250x250 px and so on.

Render targets are used for baking material outputs as textures.
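
In C++, a hedged sketch of that workflow looks roughly like this (the Blueprint equivalents are the Create Render Target 2D and Draw Material to Render Target nodes; the function name and the 500x500 size here are illustrative):

```cpp
// Hedged sketch: baking a material's emissive output into a texture via
// a render target. Only whatever is wired to emissive gets written.
#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Materials/MaterialInterface.h"

void BakeMaterialToTexture(UObject* WorldContext, UMaterialInterface* Material)
{
    // Create the render target at the reduced size you want,
    // e.g. 500x500 for a 1000x1000 screen.
    UTextureRenderTarget2D* RT =
        UKismetRenderingLibrary::CreateRenderTarget2D(
            WorldContext, 500, 500, RTF_RGBA16f);

    // Draw the material into the render target.
    UKismetRenderingLibrary::DrawMaterialToRenderTarget(WorldContext, RT, Material);
}
```

One caveat: materials drawn this way have no scene view bound, so it is uncertain whether SceneTexture reads such as PostProcessInput0 return valid data in this path.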

Oh, interesting. I’ve been researching it and cannot find information on how to connect it to a post-process material (PostProcessInput0). Do you know how to achieve this? 🙂

I don’t think it matters what kind of material it is, only that the desired output is connected to the emissive. I don’t see why a post-process material would work any differently than anything else here, but I haven’t had a reason to try it myself.

So I haven’t tried this, but it seems relevant to what you’re trying to do:

Right, this is pretty much what I was proposing, except that he’s using a Scene Color sample instead of PP0 to write to the RT.
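
For later readers, one common way to get the scene itself into a reduced-size render target, consistent with that Scene Color route, is a SceneCapture2D. A hedged sketch in C++ (the function name and 500x500 size are illustrative):

```cpp
// Hedged sketch: capturing the scene into a half-resolution render target
// with a scene capture component.
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Kismet/KismetRenderingLibrary.h"

void SetUpHalfResCapture(USceneCaptureComponent2D* Capture, UObject* WorldContext)
{
    // Half of a 1000x1000 screen, per the example earlier in the thread.
    UTextureRenderTarget2D* RT =
        UKismetRenderingLibrary::CreateRenderTarget2D(
            WorldContext, 500, 500, RTF_RGBA16f);

    Capture->TextureTarget = RT;
    // Which pass to capture: SCS_SceneColorHDR is closest to a Scene Color
    // sample, while SCS_FinalColorLDR includes post processing.
    Capture->CaptureSource = ESceneCaptureSource::SCS_SceneColorHDR;
    Capture->bCaptureEveryFrame = true; // refresh the texture each frame
}
```

The resulting render target is a regular texture, so it can be fed into another material through a texture parameter or Texture Object.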

This is very, very helpful! Thank you so much for your help too, @Arkiras 🙏

I made a video about an engine modification that does just that, scaling SceneColor so shaders run at a downscaled resolution: https://www.youtube.com/watch?v=K398K2VWSxQ