SceneTexture:Velocity Data Integrity Issues

We’ve been working on rendering out Depth + Motion Vectors (optical flow) from UE4 via SceneCaptureComponent2Ds with a post-process material applied to them. We’ve made good progress, but we’re running into problems when we inspect the data we gather from the SceneTexture:Velocity node in the material graph.

**TLDR:** The velocity vectors are not correct - not only do they produce values outside their expected range, but they also do not accurately represent the motion of pixels between two successive captures. I’ve attached some images showing two successive frames, the depth values, and the motion vectors rendered in HSV (though I’ve also checked the raw motion vector data from the texture and it is similarly incorrect).

Long Story:

We have a multi-camera setup (4 SceneCaptureComponent2D components on a single actor) where each camera is set up with the following config:



camera->bCaptureEveryFrame = false;
camera->bCaptureOnMovement = false;
camera->bAlwaysPersistRenderingState = true;
camera->CaptureSource = ESceneCaptureSource::SCS_FinalColorHDR;

// setting the shared flag gives a performance boost according to
// https://forums.unrealengine.com/development-discussion/blueprint-visual-scripting/43244-taking-screenshots-with-scene-capture-2d-while-in-game
camera->TextureTarget->bGPUSharedFlag = true;


And each render texture is of type


RTF RGBA32f

The cameras are configured with a depth + motion vector post-process material (attached below).
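For completeness, the render target and blendable material can be wired up roughly like this (a minimal sketch from inside the actor; the resolution is a placeholder and DepthMotionVectorMaterial stands in for the attached material):


#include "Kismet/KismetRenderingLibrary.h"

// minimal sketch - resolution and DepthMotionVectorMaterial are placeholders
// create a 32-bit float render target and register the depth + motion vector
// material as the first weighted blendable so its weight can be toggled later
UTextureRenderTarget2D* rt = UKismetRenderingLibrary::CreateRenderTarget2D(
    GetWorld(), 1920, 1080, ETextureRenderTargetFormat::RTF_RGBA32f);
rt->bGPUSharedFlag = true;
camera->TextureTarget = rt;
camera->PostProcessSettings.AddBlendable(DepthMotionVectorMaterial, 1.0f);
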

As in the code above, each camera is not set to capture every frame; instead we use a timer callback in our actor code to manually capture multiple frames per camera (the first frame uses the depth + motion vector post-process, the second frame is captured without it to get RGB). To enable D+MV output, we have set **bAlwaysPersistRenderingState** to true for each camera.
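
For context, the timer-driven capture can be set up roughly like this (a minimal sketch; the actor class name, CaptureTimerHandle member, CaptureFrames() callback, and 0.1 s interval are all placeholders - CaptureFrames() would contain the capture code shown next):


// minimal sketch - class, handle, callback, and interval names are placeholders
void AMultiCameraCaptureActor::BeginPlay()
{
    Super::BeginPlay();

    // drive the captures from a timer instead of bCaptureEveryFrame
    GetWorld()->GetTimerManager().SetTimer(
        CaptureTimerHandle,                              // FTimerHandle member on the actor
        this, &AMultiCameraCaptureActor::CaptureFrames,  // runs the per-camera capture code below
        0.1f,                                            // seconds between capture passes
        true);                                           // looping
}
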

The capture code looks like:



auto rt = camera->TextureTarget;
// capture D+MV
camera->CaptureScene();
TArray<FLinearColor> dmvData;
rt->GameThread_GetRenderTargetResource()->ReadLinearColorPixels(dmvData);
RunAsyncImageSaveTask(dmvData, dmvFileName, rt->SizeX, rt->SizeY);

// remove the post process for the depth+motion material so we get color output
// set the weight of the D+MV (first element in the array) to 0 to not use it
camera->PostProcessSettings.WeightedBlendables.Array[0].Weight = 0;

// capture RGB
camera->CaptureScene();
TArray<FLinearColor> rgbData;
rt->GameThread_GetRenderTargetResource()->ReadLinearColorPixels(rgbData);
RunAsyncImageSaveTask(rgbData, rgbFileName, rt->SizeX, rt->SizeY);

// now reset the post process settings for the camera back
camera->PostProcessSettings.WeightedBlendables.Array[0].Weight = 1.0;
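

For reference, per-channel min/max statistics like the ones quoted under OUTPUT can be pulled from the read-back data roughly like this (a minimal sketch, assuming the post-process material writes the decoded x/y motion into the R and G channels of the capture):


// minimal sketch - assumes decoded velocity x/y ends up in the R and G channels
float minX = TNumericLimits<float>::Max(), maxX = TNumericLimits<float>::Lowest();
float minY = TNumericLimits<float>::Max(), maxY = TNumericLimits<float>::Lowest();
for (const FLinearColor& px : dmvData)
{
    minX = FMath::Min(minX, px.R); maxX = FMath::Max(maxX, px.R);
    minY = FMath::Min(minY, px.G); maxY = FMath::Max(maxY, px.G);
}
UE_LOG(LogTemp, Log, TEXT("Motion x (min, max): (%f, %f)"), minX, maxX);
UE_LOG(LogTemp, Log, TEXT("Motion y (min, max): (%f, %f)"), minY, maxY);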


OUTPUT

Some example output when moving directly backwards (as in the _moving_directly_backwards screenshot attached):



Motion x (min, max): (-2.2929688, 0.75683594)
Motion y (min, max): (-2.0625, 0.7109375)


Now the output when moving directly forwards (as in the _moving_directly_forwards screenshot attached):



Motion x (min, max): (-3.4277344, 0.66552734)
Motion y (min, max): (-3.6445312, 0.37109375)


NOTES:

The really interesting things to note:

  1. The values are definitely outside the [-2, 2] range that we’d expect based on the source code (which should encode [-2, 2] → (0, 1]) and the attached material graph, where I decode from (0, 1] → [-2, 2] based on


// for velocity rendering, motionblur and temporal AA
// velocity needs to support -2..2 screen space range for x and y
// texture is 16bit 0..1 range per channel
float2 EncodeVelocityToTexture(float2 In)
{
    // 0.499f is a value smaller than 0.5f to avoid using the full range to use the clear color (0,0) as special value
    // 0.5f to allow for a range of -2..2 instead of -1..1 for really fast motions for temporal AA
    return In * (0.499f * 0.5f) + 32767.0f / 65535.0f;
}
// see EncodeVelocityToTexture()
float2 DecodeVelocityFromTexture(float2 In)
{
    const float InvDiv = 1.0f / (0.499f * 0.5f);
    // reference
//    return (In - 32767.0f / 65535.0f ) / (0.499f * 0.5f);
    // MAD layout to help compiler
    return In * InvDiv - 32767.0f / 65535.0f * InvDiv;
}


in the Unreal Engine source shader code Common.usf (a quick numeric check of this range is sketched at the end of these notes).

  2. For opposite directions of camera motion (into the screen vs. out of the screen), the direction of the motion vectors is identical (e.g. the HSV hues are in the same positions in both the BACKWARDS and FORWARDS attached images). If these motion vectors were correct, the hues of one should be rotated 180 degrees relative to the other - e.g. the blue would be where the yellow is, the pink would be where the cyan/green is, and so on.
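
To see why, note that the HSV visualisation maps flow direction to hue, so negating a motion vector must shift its hue by 180 degrees. A minimal sketch of that mapping (not our exact visualiser):


// minimal sketch - map a 2D motion vector to an HSV hue in degrees;
// negating the vector (opposite camera motion) adds 180 degrees to the hue
float FlowToHueDegrees(const FVector2D& Motion)
{
    float Degrees = FMath::RadiansToDegrees(FMath::Atan2(Motion.Y, Motion.X));
    if (Degrees < 0.0f)
    {
        Degrees += 360.0f;
    }
    return Degrees;
}

// FlowToHueDegrees(FVector2D( 1.0f, 0.0f)) == 0   (red on a standard HSV wheel)
// FlowToHueDegrees(FVector2D(-1.0f, 0.0f)) == 180 (the opposite hue, cyan)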

Here is an example of the data gathered from our system in Unity for a camera moving backwards - note how the hues are rotated exactly as described above for backwards motion:
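
Going back to point 1 above, here is the decode written out in plain C++ (a sketch for offline range checking, not engine code) - any encoded value in [0, 1] can only decode to roughly ±2.004, so readings like -3.64 cannot come from a correctly encoded velocity texture:


// plain C++ restatement of DecodeVelocityFromTexture() for range checking (not engine code)
float DecodeVelocity(float In)
{
    const float InvDiv = 1.0f / (0.499f * 0.5f);            // ~4.008
    return In * InvDiv - (32767.0f / 65535.0f) * InvDiv;    // maps [0, 1] to ~[-2.004, +2.004]
}

// DecodeVelocity(0.0f) is ~-2.004 and DecodeVelocity(1.0f) is ~+2.004,
// so a decoded motion component of -3.64 is outside anything a valid velocity texture can hold.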

Has this been fixed in the engine yet?

Hey @mrboni I’m not sure what happened to the forum thread (most of the posts seem to have been removed), but here is a PR (not yet merged into the engine) with more information: https://github.com/EpicGames/UnrealEngine/pull/6933 Based on the status of that PR, the team has not merged these changes, so the bug is likely still present in mainline.


Hi @william.emfinger, the PR mentioned in your last comment (https://github.com/EpicGames/UnrealEngine/pull/6933) is no longer available. Do you still have the content of that PR? It would be helpful. We are struggling to get optical flow working correctly (currently using UE5.2).

You can view it using a cached version of the page: SceneTexture:Velocity Data Integrity Issues - Rendering - Epic Developer Community Forums
