Can I call RHICmdList.ReadSurfaceData or ReadPixels equivalent without flushing

I’m working on a project that requires I read the pixel data from a render target very frequently. In order to do so I have been using the render request logic I found here. My project, which usually runs an average frame time of ~15-20ms, jumps to 250ms every time ReadSurfaceData is called. Looking at the profiler, it seems there is a call to RHIMETHOD_ReadSurfaceData_Flush that is causing this huge hiccup.

Is there a way around this? A similar question asked here describes how one might defer a ReadPixels call to a non-blocking thread, but that doesn’t work for me because I NEED to block until I get data. Surely there must be a way to read the current pixel values off of a render target without this flush call?

Some additional information that may be useful to future answerhub searchers:
I did not find a way to get around calling the flush. HOWEVER, the amount of time the flush takes can be heavily mitigated with either (but preferably both) of the following two tweaks:

  1. Disable HDR support on the TextureRenderTarget2D.
  2. Fix the resolution of the TextureRenderTarget2D to the resolution of the game window. This might have been especially egregious for me because my render target was BIGGER than the window, but regardless, making them equal seems to help performance immensely.
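For anyone who wants to apply both tweaks in code, here is a rough sketch. `MyRenderTarget` and the viewport lookup are placeholders for however you create and size your own target, and I'm assuming a standard game viewport:

	// Sketch only: MyRenderTarget is assumed to be a UTextureRenderTarget2D*.
	// PF_B8G8R8A8 is a non-HDR 8-bit format; a float format such as
	// PF_FloatRGBA would put you back on the expensive HDR readback path.
	FIntPoint ViewportSize = GEngine->GameViewport->Viewport->GetSizeXY();
	MyRenderTarget->InitCustomFormat(
		ViewportSize.X,      // match the game window resolution...
		ViewportSize.Y,      // ...so the readback buffer size stays stable
		PF_B8G8R8A8,         // 8-bit BGRA, i.e. HDR disabled
		false                // bInForceLinearGamma
	);
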

Hi,

There is a way. In my case I use it for getting the pixels of the current viewport, but it works in the same way for other render targets.

	//ScreenArray is a TArray<FColor>
	//viewport is an FViewport*, which derives from FRenderTarget

	int32 XRes = viewport->GetSizeXY().X;
	int32 YRes = viewport->GetSizeXY().Y;

	struct FReadSurfaceContext
	{
		FRenderTarget			*SrcRenderTarget;
		TArray<FColor> 			*OutData;
		FIntRect				Rect;
		FReadSurfaceDataFlags	Flags;
	};

	FReadSurfaceContext ReadSurfaceContext =
	{
		viewport,
		&ScreenArray,
		FIntRect(0, 0, XRes, YRes),
		FReadSurfaceDataFlags(),
	};

	ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER(
		ReadSurfaceCommand,
		FReadSurfaceContext, Context, ReadSurfaceContext,
		{
			RHICmdList.ReadSurfaceData(
				Context.SrcRenderTarget->GetRenderTargetTexture(),
				Context.Rect,
				*Context.OutData,
				Context.Flags
			);
		}
	);
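One caveat I should add: the enqueued command runs asynchronously on the render thread, so ScreenArray is not valid the moment the enqueue returns. If you need to block until the data arrives, a render command fence waits only for commands issued up to that point, rather than triggering the full flush inside ReadSurfaceData. A sketch of what I mean (untested in this exact setup):

	// Sketch: block the game thread only until the read command above
	// has executed on the render thread, instead of flushing everything.
	FRenderCommandFence ReadFence;
	ReadFence.BeginFence();   // enqueued after ReadSurfaceCommand
	ReadFence.Wait();         // returns once the render thread passes it
	// ScreenArray should now be populated
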

With Vulkan there is a massive performance improvement, but Vulkan is still experimental.

Caution: regarding the viewport, this only works if called from an overridden Draw function.

I also need to get the pixel values of a render target, so once that is working I will of course update this post.

source: A new, community-hosted Unreal Engine Wiki - Announcements - Unreal Engine Forums (probably explained much better there)
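For a render target rather than the viewport, I expect the same context struct works if you point it at the target's resource. This is an untested sketch, with `MyRenderTarget` standing in for your UTextureRenderTarget2D*:

	// Sketch: substitute the render target's resource for the viewport.
	FRenderTarget* RTResource = MyRenderTarget->GameThread_GetRenderTargetResource();
	FReadSurfaceContext Context =
	{
		RTResource,                 // an FRenderTarget*, same as the viewport case
		&ScreenArray,
		FIntRect(0, 0, MyRenderTarget->SizeX, MyRenderTarget->SizeY),
		FReadSurfaceDataFlags(),
	};
	// ...then enqueue exactly as in the snippet above
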

Thanks for the response, but this is not quite what I’m looking for. As you can see in my question, I linked the same answerhub question you got your code from. What I’m hoping to do is find a way to read the pixel values from the texture target itself, rather than reading them back into the render buffer, which forces the current rendering instructions to be flushed. Without digging into the renderer source it’s hard for me to tell whether this is possible. Using a deferred call to ReadPixels seems to be sufficient; however, the editor window needs to be at least as large as the texture targets, or (I think) the renderer is forced to repeatedly re-allocate memory to accommodate the larger buffer size the texture targets need.

I’m actually at this exact point now. Reading from a RenderTarget is really difficult to do without some massive performance hit, which renders a lot of GPU-driven computation useless… nobody seems to know how we could achieve this without some degree of uncertainty.