In UE 5.3, I am capturing the scene with a USceneCaptureComponent2D and a UTextureRenderTarget2D, as below:
UTextureRenderTarget2D* RenderTarget;
USceneCaptureComponent2D* SceneCapture;
//...
RenderTarget = NewObject<UTextureRenderTarget2D>();
// 8-bit BGRA format, bInForceLinearGamma = true
RenderTarget->InitCustomFormat(ResolutionX, ResolutionY, EPixelFormat::PF_B8G8R8A8, true);
RenderTarget->ClearColor = FLinearColor::Transparent;
RenderTarget->UpdateResourceImmediate();

SceneCapture->CaptureSource = SCS_FinalColorHDR;
SceneCapture->TextureTarget = RenderTarget;
SceneCapture->bCaptureEveryFrame = true;
Then, on the render thread, I copy the raw RHI texture:
FRHITexture* TargetTexture = this->RenderTarget->TextureReference.TextureReferenceRHI->GetReferencedTexture();
FTextureRHIRef CopiedTexture = this->GetOrCreateTexture(TargetTexture);
RHICmdList.CopyTexture(TargetTexture, CopiedTexture, FRHICopyTextureInfo{});
// then CopiedTexture is passed to thread pool to get its pixels.
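For context, here is a minimal sketch of how that copy is enqueued on the render thread. GetOrCreateTexture and OnTextureCopied are my own helpers, shown only for illustration:

// Sketch: enqueue the copy on the render thread.
ENQUEUE_RENDER_COMMAND(CopyCaptureTexture)(
    [this](FRHICommandListImmediate& RHICmdList)
    {
        FRHITexture* TargetTexture =
            RenderTarget->TextureReference.TextureReferenceRHI->GetReferencedTexture();
        FTextureRHIRef CopiedTexture = GetOrCreateTexture(TargetTexture);
        RHICmdList.CopyTexture(TargetTexture, CopiedTexture, FRHICopyTextureInfo{});
        // Hand the copy off to the thread pool for pixel readback.
        OnTextureCopied(CopiedTexture);
    });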
So I am not using the traditional approaches (ReadSurfaceData / MapStagingSurface) to read pixels from the render target, because they can stall the render thread for ~10 ms or more.
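For comparison, the traditional blocking readback would look roughly like this (it flushes rendering commands and stalls the render thread):

// Traditional (blocking) readback, shown for comparison only.
TArray<FColor> OutPixels;
FTextureRenderTargetResource* Resource =
    RenderTarget->GameThread_GetRenderTargetResource();
Resource->ReadPixels(OutPixels); // internally uses ReadSurfaceData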
By leveraging the Apple M-series CPU's Unified Memory Architecture, I can call the MTLTexture's getBytes method from the thread pool, and the render thread spends less than 1 µs on the copy. The idea is explained here, but I am using it in the opposite direction. In short, Apple Silicon allows the CPU and GPU to access the same texture in memory.
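The worker-thread readback can be sketched like this (in an Objective-C++ file; on the Metal RHI, GetNativeResource returns the underlying id<MTLTexture>; the BGRA8 stride is an assumption matching the render target format above):

// Sketch: read the copied texture's pixels from a worker thread via
// unified memory. Assumes a BGRA8 texture (4 bytes per pixel).
id<MTLTexture> MetalTexture = (id<MTLTexture>)CopiedTexture->GetNativeResource();
const NSUInteger Width  = MetalTexture.width;
const NSUInteger Height = MetalTexture.height;
const NSUInteger BytesPerRow = Width * 4;
TArray<uint8> Pixels;
Pixels.SetNumUninitialized(BytesPerRow * Height);
[MetalTexture getBytes:Pixels.GetData()
           bytesPerRow:BytesPerRow
            fromRegion:MTLRegionMake2D(0, 0, Width, Height)
           mipmapLevel:0];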
This approach works perfectly, except for one thing: the captured image is LDR rather than HDR, even though SceneCapture->CaptureSource = SCS_FinalColorHDR is configured.
Here you can see the image from CopiedTexture. It is LDR.
And here you can see the image from the viewport's back buffer; it is HDR.
So my question is: why do I get LDR instead of HDR? Is there a special pass in the rendering pipeline that converts LDR to HDR, and does the underlying render target actually store an LDR image? Is it possible to convert the LDR image into the same HDR image obtained from the back buffer?
Thanks in advance