How to apply a UV distortion Map on the camera?

Hi Martin,

Comments Below.

Then it does distort the result, but it creates severe aliasing (without any code change) that I suspect comes from the 16-bit limitation.
I just would like to separate the multiple problems you are having. As said, the filtering should help - if you go that route, make sure the texture update works as expected (e.g. if you update from the CPU, the byte order can easily destroy the results).

OK, but could you please be a little bit more specific and provide me with a complete C++ example of that filtering?
You mentioned:

    // if we are in a postprocessing pass
    if(View.RenderingCompositePassContext)
    {
        PostprocessParameter.Set(ShaderRHI, *View.RenderingCompositePassContext, TStaticSamplerState<…>::GetRHI());
    }

and "You would need to change TStaticSamplerState<…> to TStaticSamplerState<…>.", but that is quite vague…
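For anyone landing here later, my best guess at what that change means in practice is the snippet below. The SF_Point to SF_Bilinear switch and the AM_Clamp address modes are my assumptions; the template arguments were not in the quote above.

```cpp
// My reading of the suggestion: keep the same call, but swap the sampler's
// filter from point to bilinear so the distortion texture is interpolated
// between texels instead of snapping to the nearest one.
if (View.RenderingCompositePassContext)
{
    PostprocessParameter.Set(ShaderRHI, *View.RenderingCompositePassContext,
        TStaticSamplerState<SF_Bilinear, AM_Clamp, AM_Clamp, AM_Clamp>::GetRHI());
}
```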

How to update that texture anyway?
OK, let's focus on that.

As I said, I finally sorted out that part myself: I found a way to create G32R32F textures in C++ and inject them into the Editor to get the precision I need for the shader.
A couple of function calls were missing from the initial source code I posted; the “override_texture()” call is one of them, for example.
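For reference, the core of what I ended up with looks roughly like the sketch below (simplified, with error handling removed; the fill values are placeholders for my calibration data, and “override_texture()” is a separate helper that is not shown here):

```cpp
#include "Engine/Texture2D.h"

// Create a transient two-channel 32-bit float texture and fill it from the CPU.
UTexture2D* CreateUVDistortionTexture(int32 Width, int32 Height)
{
    // PF_G32R32F = two 32-bit float channels per texel, enough for a (U,V) offset pair.
    UTexture2D* Texture = UTexture2D::CreateTransient(Width, Height, PF_G32R32F);
    Texture->SRGB = false;                  // the data is not color, so no gamma correction
    Texture->CompressionSettings = TC_HDR;  // keep the float data uncompressed
    Texture->Filter = TF_Bilinear;

    // Fill mip 0: two floats (8 bytes) per texel.
    float* Data = static_cast<float*>(
        Texture->PlatformData->Mips[0].BulkData.Lock(LOCK_READ_WRITE));
    for (int32 Y = 0; Y < Height; ++Y)
    {
        for (int32 X = 0; X < Width; ++X)
        {
            const int32 Index = 2 * (Y * Width + X);
            Data[Index + 0] = 0.0f; // U offset from the lens model goes here
            Data[Index + 1] = 0.0f; // V offset from the lens model goes here
        }
    }
    Texture->PlatformData->Mips[0].BulkData.Unlock();

    Texture->UpdateResource(); // upload to the GPU
    return Texture;
}
```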

32-bit floats, so it is a different problem.
I don’t think you want 32-bit floats - distortion, even down to sub-pixel accuracy, can be expressed in 16 bit. Maybe even 8 bit is enough (if either the scale is small or you don’t care about sub-pixel precision).

That part I just don’t understand: how can that be technically or scientifically possible?

I’m not talking here about getting fake distortions like the example 1.16, but pixel-accurate distortions simulating the exact behavior of a real lens that has been thoroughly calibrated with a physically correct model. Every change of focus in a vari-focal lens model creates severe non-linear distortions that cannot be correctly simulated without floating-point values.

To my knowledge, any conversion from 32-bit floats to 16-bit or 8-bit integers (are you seriously talking about converting the continuous range of values between 0.0 and 1.0 down to 256 integer steps?) will severely damage the result, and that cannot be compensated for by a filter.

But I would be happy to be wrong. So could you please provide me with an example of what you are describing, i.e. a down-conversion from 32-bit float to 16 bits (or 8 bits) plus a filter, that eliminates the artifacts and is as accurate as using 32-bit float textures?
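To make sure we are talking about the same thing, here is the kind of down-conversion I picture, as a rough self-contained sketch (EncodeOffset16/DecodeOffset16 and the MaxOffsetUV bound are my own hypothetical names and numbers, not something from your posts):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Assumed bound on the distortion offset, in UV units; a real value would
// come from the calibrated lens model.
const float MaxOffsetUV = 0.05f;

// Map a signed UV offset in [-MaxOffsetUV, +MaxOffsetUV] onto 0..65535.
uint16_t EncodeOffset16(float OffsetUV)
{
    float Normalized = (OffsetUV / MaxOffsetUV) * 0.5f + 0.5f;
    Normalized = std::min(1.0f, std::max(0.0f, Normalized));
    return static_cast<uint16_t>(std::lround(Normalized * 65535.0f));
}

// Inverse mapping, as the GPU would see it after reading a 16-bit normalized texel.
float DecodeOffset16(uint16_t Encoded)
{
    return ((Encoded / 65535.0f) - 0.5f) * 2.0f * MaxOffsetUV;
}

// Worst-case quantization step here is 2 * MaxOffsetUV / 65535 ~= 1.5e-6 UV,
// versus ~5.2e-4 UV per pixel at 1920 pixels wide. Whether that, plus bilinear
// filtering, really ends up "as accurate as 32-bit float" is exactly what I
// would like to see demonstrated.
```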

In any case, shaders (as well as HDR maps and DDS textures) have supported 32-bit floats for a while, so it still seems a bit odd to me that the Editor does not expose that feature. That’s also why I’m bypassing the limitation in C++.

I do not find any post-processing material that helps with regard to my issue (I also went through the whole forum and AnswerHub…).
Sorry but we cannot provide samples for all possible modifications.

Well, you are the one who said “It’s some hallway with multiple postprocess volumes. It’s one of the later volumes.” So I went to look (again) at the Content Examples and did not find anything useful, hence my remark…

Which ones are left? Can you start a new thread/question? It makes it much easier to give a focused answer.

I have already created another thread, and I will keep doing that.
But here, at the beginning of this thread, there were, for example:

  • “Is this the best way to achieve the lens distortion?” I suspect not: I would like to just apply my own pixel shader, but the documentation is quite laconic about adding custom shaders to the engine. I can see that many people on the forums have the same problem… but I will create another specific thread about custom shaders later.
  • “Any idea why the texture update does not work?” I solved that myself…
  • “Is there somewhere a description of what exactly is expected as the UV set linked to the UV input of SceneTexture:SceneColor?” It does not matter anymore, as I figured out how to use a float texture anyway.

Regards,

Phoenix.