I know you can set a custom LUT in the settings for post-process volumes, but I’m wondering if there’s a way to lerp between multiple LUTs in a post-process material (e.g., across depth).
The game Firewatch has some pretty classy artistic colouring, some of which involves applying a tint gradient over depth. I was able to implement that pretty easily by mapping pixel depth to UVs in a gradient texture and lerping that with PostProcessInput0. I was hoping to extend the method to give more refined control.
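For anyone trying to replicate that depth-tint step, the material logic is simple enough to sketch outside the editor. Here's a Python/numpy sketch of the math (the `max_depth` and `strength` knobs are placeholders of mine, not actual material parameters):

```python
import numpy as np

def depth_tint(scene_color, pixel_depth, gradient, max_depth=10000.0, strength=0.5):
    """Tint the scene colour (PostProcessInput0) toward a depth-indexed
    gradient. `gradient` stands in for a 1-D gradient texture sampled by
    normalized depth; `max_depth` and `strength` are placeholder knobs."""
    u = np.clip(pixel_depth / max_depth, 0.0, 1.0)   # PixelDepth -> UV
    tint = gradient[int(u * (len(gradient) - 1))]    # sample gradient texture
    # lerp(scene, scene * tint, strength)
    return (1.0 - strength) * scene_color + strength * scene_color * tint
```

In the material this is just PixelDepth divided by a max distance feeding a TextureSample's UVs, with a Lerp into the emissive output.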
You certainly can do it, but you would need to move color correction into a post-process blendable material. There you can easily lerp between two LUTs based on scene depth.
So it sounds like I must be missing something obvious (and I hope that’s the case!).
To clarify: I have no trouble using a post-process material to do a basic change like applying a tint over depth with a custom colour gradient passed as a texture, e.g.,
with the expected results:
The trouble I’m having is that I’m not sure how to access or modify the LUT within a material… It doesn’t seem to be exposed, but maybe I’m just missing it?
Just bring in your LUT the same way as the gradient texture.
Sorry - I’m not trying to be dense. But if I just bring in the LUTs as textures, and lerp them by depth…
… then I’m just overlaying a picture of the LUT over the scene
How do I specify that the texture be used as the lookup table?
Ah. So I actually have to implement the lookup function, rather than just swapping a new table into the pipeline.
That’s a shame, since whatever I hack together in blueprints probably won’t be as efficient as the engine’s own HDR->LDR step, but if that’s the way it has to be, so be it!
The part where you figure out how to calculate the UV coordinates based on scene depth is on you.
In case anyone arrives here with a similar problem, here’s one potential approach for generating UV indices into a CLUT:
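In text form, the idea looks roughly like this, assuming the standard 16×16×16 cube unwrapped into a 256×16 strip (how UE lays out its neutral LUT). This is a nearest-neighbour sketch of the lookup, not the engine's own implementation:

```python
import numpy as np

LUT_SIZE = 16  # 16x16x16 cube unwrapped into a 256x16 strip

def clut_uv(rgb):
    """Map an RGB colour in [0, 1] to UVs into the unwrapped strip:
    blue picks the 16x16 tile, red indexes across the tile, green is the row."""
    r, g, b = np.clip(rgb, 0.0, 1.0)
    tile = int(b * (LUT_SIZE - 1) + 0.5)
    u = (tile * LUT_SIZE + r * (LUT_SIZE - 1) + 0.5) / (LUT_SIZE * LUT_SIZE)
    v = (g * (LUT_SIZE - 1) + 0.5) / LUT_SIZE
    return u, v

def sample_clut(clut, rgb):
    """clut: (16, 256, 3) array standing in for the CLUT texture;
    nearest-neighbour sample at the computed UV."""
    u, v = clut_uv(rgb)
    x = min(int(u * LUT_SIZE * LUT_SIZE), LUT_SIZE * LUT_SIZE - 1)
    y = min(int(v * LUT_SIZE), LUT_SIZE - 1)
    return clut[y, x]

def depth_graded(rgb, depth, near_clut, far_clut, max_depth=10000.0):
    """Lerp between the near and far CLUT results by normalized depth."""
    t = np.clip(depth / max_depth, 0.0, 1.0)
    return (1.0 - t) * sample_clut(near_clut, rgb) + t * sample_clut(far_clut, rgb)
```

A production version would also blend between the two adjacent blue tiles instead of snapping to the nearest one, which helps with banding.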
Result, from interpolating between two arbitrary CLUTs based on depth (post-processing off at top, on at bottom, near and far CLUTs shown beneath):
Make sure to adjust the settings on your uploaded CLUT, particularly setting the filter to ‘nearest’ and clamping edges instead of wrapping:
Unfortunately, this approach gives a bit of a pixelated effect, presumably because the lookup isn’t being applied at the ideal stage in the pipeline. For certain styles it might be ok, but it looks like the better bet is to just lerp between raw adjustments (e.g., contrast, hue, etc.) rather than using premade CLUTs.
Have you tried changing your PP material so that it replaces tonemapping? Then the LUT will be applied to the HDR scene. Perhaps that’ll help? If you do this you’ll also need to do your own tonemapping. I’ve been playing with the 3-point levels node, which seems to give usable results, but I don’t think it’s an ideal solution.
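For reference, the replacement tonemap doesn't have to match the engine's filmic curve just to get started; even a bare Reinhard operator plus a gamma encode (a stand-in sketch, not UE's tonemapper) gives you an LDR image to feed the LUT:

```python
import numpy as np

def tonemap_reinhard(hdr, gamma=2.2):
    """Minimal stand-in tonemapper: Reinhard compression into [0, 1),
    then gamma encoding for display. Not the engine's filmic curve."""
    hdr = np.maximum(hdr, 0.0)
    ldr = hdr / (1.0 + hdr)        # Reinhard: compress the HDR range
    return ldr ** (1.0 / gamma)    # gamma encode
```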
I’ve had trouble getting good results replacing the tone-mapper. I think the engine must do something a bit more sophisticated than a direct lookup when it does the HDR->LDR compression (maybe factoring in lighting at the same time?).
At any rate, I’ve been having much better luck skipping the LUTs and just using parameters for transforms. I’ve got a reasonable setup to change smoothly between near and far settings for per-channel black, grey, and white points, and level adjustments for colors:
The per-channel black, grey, white adjustments look like this:
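Screenshot aside, the underlying math is just the standard levels remap: clamp between the black and white points, then apply the gamma that maps the grey point to 0.5. A sketch of both that and the depth lerp (the parameter names are mine, not the material's):

```python
import numpy as np

def levels(x, black, white, grey=0.5):
    """Per-channel levels: remap [black, white] to [0, 1], then apply
    the gamma that sends the grey point to 0.5."""
    x = np.clip((x - black) / max(white - black, 1e-6), 0.0, 1.0)
    gamma = np.log(0.5) / np.log(np.clip(grey, 1e-6, 1.0 - 1e-6))
    return x ** gamma

def depth_levels(rgb, depth, near, far, max_depth=10000.0):
    """Lerp the (black, grey, white) triples between near and far settings
    by normalized depth, then apply levels channel by channel.
    `near`/`far` are dicts of (3,) arrays keyed 'black', 'grey', 'white'."""
    t = np.clip(depth / max_depth, 0.0, 1.0)
    b, g, w = ((1.0 - t) * near[k] + t * far[k] for k in ('black', 'grey', 'white'))
    return np.array([levels(rgb[i], b[i], w[i], g[i]) for i in range(3)])
```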
I think the 3-point levels node is doing something similar, but I haven’t gotten it to behave the way I expect, so I just do a simple version of the math as above.
The whole material ends up being pretty cheap (102 base instructions, 29 vertex instructions, 1/16 texture samplers) and it gives me the control I need, so I’m happy enough.
Using the multiple lerp node, this would be trivial to extend to three or four settings, and to save instructions you could pull out the color mapping and just multiply by a single color for most purposes.
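If the built-in node doesn't fit, the extension is just piecewise-linear interpolation over evenly spaced depth stops; a sketch:

```python
import numpy as np

def multi_lerp(t, stops):
    """Piecewise-linear blend across evenly spaced stops (e.g. three or
    four depth-graded settings). t in [0, 1]; stops: (N, ...) array."""
    stops = np.asarray(stops, dtype=float)
    n = len(stops) - 1
    t = min(max(float(t), 0.0), 1.0)
    i = min(int(t * n), n - 1)     # segment index
    f = t * n - i                  # position within the segment
    return (1.0 - f) * stops[i] + f * stops[i + 1]
```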
For really complicated remaps over depth, you could bring in a texture with all your settings mapped to gradients (e.g., a gradient for the black point, one for the grey point, one for the white point, etc.) and use UVs to read them out. That would start to rack up texture samplers, though, and I can’t really think of a setting where you’d need more than a few changes over depth…