TLDR: I tried to set up a fixed-point uint16 rendering pipeline using USceneCaptureComponent2D, but something is still being done in float16.
My high-level idea is to get CPU-side access to an RVT heightmap.
My goal is to get an array of uint16 pixels for a given part of this RVT.
In general, what I already have works.
The issue is low precision (coarse quantization) for large output values.
This behavior is clearly caused by some processing step being done in float16.
Do you have any idea how to fix it?
I isolated the issue to the USceneCaptureComponent2D that I use to copy part of the RVT into a render target.
The render target is created by:
RenderTarget->InitCustomFormat(RenderTargetSize, RenderTargetSize, PF_G16, true);
So it is PF_G16, which on my platform (Windows, DX) is internally DXGI_FORMAT_R16_UNORM:
a plain 16-bit fixed-point format that covers the range 0.0 to 1.0 in steps of 1/65535.
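For completeness, the full creation looks roughly like this (a sketch; the outer passed to NewObject and the ClearColor choice are my assumptions, not the exact original code):

#include "Engine/TextureRenderTarget2D.h"

UTextureRenderTarget2D* RenderTarget = NewObject<UTextureRenderTarget2D>(GetTransientPackage());
RenderTarget->ClearColor = FLinearColor::Black;
// PF_G16 -> single 16-bit UNORM channel, forced to linear gamma
RenderTarget->InitCustomFormat(RenderTargetSize, RenderTargetSize, PF_G16, true);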
The USceneCaptureComponent2D uses an orthographic projection to draw a simple quad plane with a material (Surface, Opaque, Unlit) that samples the RVT and returns the value as "Emissive Color".
All fancy rendering features (lighting, fog, AA etc.) are off.
For testing purposes the material is simplified to output a plain constant value.
Capture Source is:
SceneCapture->CaptureSource = ESceneCaptureSource::SCS_FinalColorHDR;
I also tried SCS_SceneColorLDR and SCS_FinalColorLDR - no change.
Other sources return zeros or -1.
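For reference, the capture component is configured roughly like this (a sketch; OrthoWidth, the show-only list and the exact flag set are placeholders, not the original code):

SceneCapture->ProjectionType = ECameraProjectionMode::Orthographic;
SceneCapture->OrthoWidth = CapturedWorldSize; // hypothetical extent of the captured RVT region
SceneCapture->TextureTarget = RenderTarget;
SceneCapture->CaptureSource = ESceneCaptureSource::SCS_FinalColorHDR;
SceneCapture->PrimitiveRenderMode = ESceneCapturePrimitiveRenderMode::PRM_UseShowOnlyList;
SceneCapture->ShowOnlyComponents.Add(QuadMeshComponent); // the quad with the RVT-sampling material
SceneCapture->ShowFlags.SetLighting(false);
SceneCapture->ShowFlags.SetFog(false);
SceneCapture->ShowFlags.SetAntiAliasing(false);
SceneCapture->ShowFlags.SetBloom(false);
SceneCapture->bCaptureEveryFrame = false;
SceneCapture->CaptureScene();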
Finally, using FRHIGPUTextureReadback I got CPU access to this render target by:
FTextureRenderTargetResource* RTResource = RenderTarget->GameThread_GetRenderTargetResource();
…
FRHITexture* Texture = RTResource->GetRenderTargetTexture();
…
FRHIGPUTextureReadback Readback(TEXT("RVTReadback"));
Readback.EnqueueCopy(RHICmdList, Texture);
…
and just memcpy into the output buffer (that part seems to work perfectly).
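The whole readback wiring is roughly the following (a sketch of how I understand FRHIGPUTextureReadback is meant to be used; the heap allocation, the lambda structure, and the Dest output buffer are my assumptions, not the exact original code). I allocate the readback object on the heap so it outlives the deferred render command:

FTextureRenderTargetResource* RTResource = RenderTarget->GameThread_GetRenderTargetResource();
FRHIGPUTextureReadback* Readback = new FRHIGPUTextureReadback(TEXT("RVTReadback"));

ENQUEUE_RENDER_COMMAND(RVTReadbackCopy)(
    [Readback, RTResource](FRHICommandListImmediate& RHICmdList)
    {
        FRHITexture* Texture = RTResource->GetRenderTargetTexture();
        Readback->EnqueueCopy(RHICmdList, Texture);
    });

// Later, once Readback->IsReady(), copy the uint16 pixels row by row, honoring the row pitch:
int32 RowPitchInPixels = 0;
const uint16* Src = static_cast<const uint16*>(Readback->Lock(RowPitchInPixels));
for (int32 Row = 0; Row < RenderTargetSize; ++Row)
{
    FMemory::Memcpy(Dest + Row * RenderTargetSize, Src + Row * RowPitchInPixels, RenderTargetSize * sizeof(uint16));
}
Readback->Unlock();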
For small numbers returned as emissive color it works fine:
123/65535 results in 123 output
124/65535 results in 124
125/65535 results in 125 etc.
But starting at 2048/65535, the steps go by two:
2048/65535 is 2048
2049/65535 is 2048
2050/65535 is 2050
2051/65535 is 2050 etc.
After every next power of two (4096, 8192 etc.) the step doubles again.
So this is exactly the quantization pattern of float16, which has only 11 significant bits of mantissa.
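If the culprit really is a plain float32 -> float16 conversion somewhere in the chain (which is only my assumption), a similar step pattern can be checked on the CPU with FFloat16 (exact values around the breakpoints may differ depending on rounding):

#include "Math/Float16.h"

const uint16 TestValues[] = { 123, 124, 125, 2048, 2049, 2050, 2051 };
for (uint16 Value : TestValues)
{
    const float Normalized = float(Value) / 65535.0f;          // what the test material outputs as emissive
    const FFloat16 Half(Normalized);                            // float32 -> float16
    const int32 Recovered = FMath::RoundToInt(Half.GetFloat() * 65535.0f); // back to a 16-bit step index
    UE_LOG(LogTemp, Log, TEXT("%d -> %d"), int32(Value), Recovered);
}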
But my intention was to keep the whole pipeline in 16-bit fixed point so as not to lose any data in the 0-65535 range.
It looks like at some point my uint is packed into float16 and then back into a uint.
What I found suspicious: during PIE, when I double-click this render target in the UE5 editor, a window with the render target opens.
In the Details section there is "Format: G16" (as expected), but below there is a Texture Render Target 2D section with "Render Target Format" set to RTF RGBA16f.
As an experiment I tried:
RenderTarget->RenderTargetFormat = RTF_R32f;
The property in the debug window changed, and this format should be far more precise, but it did not change anything.
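If I read the engine source correctly (this is my interpretation, please verify against your engine version), UTextureRenderTarget2D::GetFormat() prefers the OverrideFormat set by InitCustomFormat() and only falls back to the RTF_* RenderTargetFormat when no override is set, which would explain why changing RenderTargetFormat has no visible effect here:

// Paraphrased from Engine/TextureRenderTarget2D.h (engine code, not mine):
EPixelFormat GetFormat() const
{
    if (OverrideFormat == PF_Unknown)
    {
        return GetPixelFormatFromRenderTargetFormat(RenderTargetFormat); // RTF_RGBA16f, RTF_R32f, ...
    }
    return OverrideFormat; // PF_G16 from InitCustomFormat()
}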
I also tried changing the "Float precision mode" material property to "Use full precision for every float", but that also did not change anything.
I know a simple workaround for the general problem of packing large integers into floats: it is enough to divide the integer (while still in a "bigger float" space) by a power of two (256 would be enough in this case) so that the lower exponent range of the float is used.
But it is not so simple when this uint->float->uint conversion is done implicitly inside UE5 internals.
Any other ideas?
Probably the first question would be: "why USceneCaptureComponent2D?"
In my solution this material samples the RVT.
Sampling it in a Scene Capture triggers UE5 to refresh that part of the RVT and gives high-quality output.
Sampling it via UKismetRenderingLibrary::DrawMaterialToRenderTarget does not.
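For reference, the alternative I compared against is roughly this call (the world-context object and the material variable are placeholders for my actual objects):

#include "Kismet/KismetRenderingLibrary.h"

UKismetRenderingLibrary::DrawMaterialToRenderTarget(this, RenderTarget, RVTSampleMaterial);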
So I guess this problem could be tackled in some other way?