Multiple Continued Functions on a Material

I’m trying to perform multiple operations on the same material.

This is just an example, whether it makes sense or not.

Down Scale -> Sharpen -> Up Scale -> Sharpen.

ScaleUVsByCenter gives UVs which I can pass to a TextureSample or to UnsharpMaskTexture. I have to use the UVs with UnsharpMaskTexture because UnsharpMaskTexture wants a TextureObject, not the RGB from a TextureSample. UnsharpMaskTexture returns the RGB after it’s finished, I guess? There isn’t much documentation on it. So the UVs passed from ScaleUVsByCenter to UnsharpMaskTexture scaled and sharpened the texture, but now I only have RGB values left and can’t seem to do any more manipulation. I can’t pass that result to Sharpen again with new UVs because it wants a TextureObject.

So my question is, how do I keep manipulating my result with UVs and RGBs when everything seems to be made for 1 step with passing in a TextureObject?

Most of the work done by a material occurs in the pixel shader, which inherently only acts on one pixel at a time. Texture Objects are required for effects that need to look at *multiple* pixels of a given texture in order to find the resulting color for the current pixel. In order to “chain” effects like this, you need to reference the results of pixels from the previous “pass,” but you can’t look this up directly, since the needed adjacent pixels from the previous pass haven’t necessarily even been calculated yet. That’s why you’re running into a roadblock in situations where you need to pass in arbitrary UVs.

Fortunately, there’s a way around this limitation in most cases, but you need to draw the first pass into a render target so that it can be fed as a texture into the subsequent pass. This naturally incurs a performance hit, since the entire first pass needs to be completed and stored before the second pass can begin.

There’s actually pretty good documentation that covers how to render a material to a texture at Creating Textures Using Blueprints and Render Targets | Unreal Engine Documentation. Depending on what you’re trying to accomplish, this might not work as expected (especially for effects that are applied to the entire screen). In that case, using a SceneCapture2D might be your only option, but it’s generally not worth it, since you’ll be rendering the entire scene from scratch in addition to whatever effect you’re hoping to achieve.

Thanks for your reply amoser.

I have a video stream coming in from a media stream. My problem is that there are no mipmaps for this texture, so everything looks really bad at any distance. I was trying to generate my own based on the distance of the camera (get distance, downscale by half based on distance, “enhance”, upscale to fit on the actor). Performance hits are okay, as this is the only thing, and the main thing, that needs to work. From what I’ve searched, there doesn’t seem to be a method to generate mipmaps on the fly for video. Render Targets have an “auto generate mipmaps” option, but it is broken, and I’m also not sure what it would do, as it doesn’t give any sharpening or blurring options like regular textures do.

So is every pass going to have to have its own render target then? From what I read in your links, it still doesn’t seem like I can chain anything through the render target; I would just need more render targets?

In general, yes, you would need a render target for each pass. It seems like you might be able to use just one in your case, though.

First, this is the most straightforward version that requires two render targets:
-create a material that simply displays the video
-draw that material into a low-resolution render target
-create a material to “enhance” the low-resolution render target
-render that into a second low-res render target
-use the resulting texture on your actor material as appropriate

I suspect you could (and probably should) actually apply the “enhance” step while drawing to the first render target. Since this render target is low-resolution, you won’t be calculating any additional pixels, and you’ll save on the overhead of using an additional render target. This version will probably look better than the naive version as well, since you’ll be applying the “enhance” to the full-resolution image, versus the already-aliased image you’d get with a low-resolution render target.

In this case, the steps would be as follows:
-create a material to “enhance” the full-resolution image
-render that material into a low-res render target
-use the resulting texture on your actor material as appropriate
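The single-render-target version collapses to one draw call; a sketch under the same assumptions as before (Unreal project context, placeholder names like “SourceTex”, illustrative dimensions):

```cpp
#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Materials/MaterialInstanceDynamic.h"

UTexture* EnhanceToLowRes(UObject* WorldContext,
                          UMaterialInterface* EnhanceMaterial, // "enhance" material
                          UTexture* VideoTexture)              // full-res video frame
{
    // The "enhance" material samples the full-resolution video directly...
    UMaterialInstanceDynamic* EnhanceMID =
        UMaterialInstanceDynamic::Create(EnhanceMaterial, WorldContext);
    EnhanceMID->SetTextureParameterValue(TEXT("SourceTex"), VideoTexture);

    // ...and is drawn straight into a single low-resolution render target,
    // so the downscale and the enhancement happen in one pass.
    UTextureRenderTarget2D* RT =
        UKismetRenderingLibrary::CreateRenderTarget2D(WorldContext, 512, 256);
    UKismetRenderingLibrary::DrawMaterialToRenderTarget(WorldContext, RT, EnhanceMID);
    return RT;
}
```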

If performance is really not an issue, you may actually be able to get away without generating low-res versions at all. For example, you might want to try using something like the “spiral blur” node on the full-resolution image, based on the distance from camera.