Problem interpreting SceneTexture:Velocity Data

We’ve been working on rendering Depth + Motion Vectors (optical flow) from UE4 via SceneCaptureComponent2Ds with a post-process material applied to them. We’ve made good progress, but we’re running into issues when we actually inspect the data we gather from the SceneTexture:Velocity node in the material graph.

Does anyone know how to properly get motion vectors so that they correctly represent pixel to pixel motion between scene captures?

TLDR: The velocity vectors are incorrect: not only do they produce values outside their expected range, they also do not accurately represent the motion of pixels between two successive captures. I’ve attached images showing two successive frames, the depth values, and the motion vectors rendered in HSV (I’ve also checked the raw motion vector data from the texture, and it is similarly incorrect).

Long Story:

We have a multi-camera setup (4 SceneCaptureComponent2D components on a single actor) where each camera is set up with the following config:

camera->bCaptureEveryFrame = false;
camera->bCaptureOnMovement = false;
camera->bAlwaysPersistRenderingState = true;
camera->CaptureSource = ESceneCaptureSource::SCS_FinalColorHDR;

// setting the shared flag reportedly gives a performance boost
camera->TextureTarget->bGPUSharedFlag = true;

Each camera has our depth+motion vector post process material applied (attached below).

Each render target texture is of type RTF_RGBA32f.

As shown above, each camera is not set to capture every frame; instead we use a timer callback in our actor code to manually capture multiple frames per camera (the first frame uses the depth + motion vector post process, the second frame is without the post process, to capture RGB). To enable D+MV output, we have set bAlwaysPersistRenderingState to true on each camera.

The capture code looks like (simplified for readability here):

auto rt = camera->TextureTarget;

// capture D+MV
camera->CaptureScene();
TArray<FLinearColor> dmvData;
rt->GameThread_GetRenderTargetResource()->ReadLinearColorPixels(dmvData);
RunAsyncImageSaveTask(dmvData, dmvFileName, rt->SizeX, rt->SizeY);

// remove the post process for the depth+motion material so we get color output:
// set the weight of the D+MV blendable (first element in the array) to 0 to not use it
camera->PostProcessSettings.WeightedBlendables.Array[0].Weight = 0.0f;

// capture RGB
camera->CaptureScene();
TArray<FLinearColor> rgbData;
rt->GameThread_GetRenderTargetResource()->ReadLinearColorPixels(rgbData);
RunAsyncImageSaveTask(rgbData, rgbFileName, rt->SizeX, rt->SizeY);

// now reset the post process settings for the camera back
camera->PostProcessSettings.WeightedBlendables.Array[0].Weight = 1.0f;


Some example output when moving directly backwards (as in the _moving_directly_backwards screenshot attached):

Motion x (min, max): (-2.2929688, 0.75683594)
Motion y (min, max): (-2.0625, 0.7109375)

Now the output when moving directly forwards (as in the _moving_directly_forwards screenshot attached):

Motion x (min, max): (-3.4277344, 0.66552734)
Motion y (min, max): (-3.6445312, 0.37109375)


The really interesting things to note:

  1. The values are definitely outside the [-2, 2] range we’d expect. Per the engine source (Common.usf, quoted below), velocity is encoded from [-2, 2] into (0, 1], and in the attached material graph I decode from (0, 1] back to [-2, 2] accordingly:

    // for velocity rendering, motionblur and temporal AA
    // velocity needs to support -2..2 screen space range for x and y
    // texture is 16bit 0..1 range per channel
    float2 EncodeVelocityToTexture(float2 In)
    {
        // 0.499f is a value smaller than 0.5f to avoid using the full range to use the clear color (0,0) as special value
        // 0.5f to allow for a range of -2..2 instead of -1..1 for really fast motions for temporal AA
        return In * (0.499f * 0.5f) + 32767.0f / 65535.0f;
    }

    // see EncodeVelocityToTexture()
    float2 DecodeVelocityFromTexture(float2 In)
    {
        const float InvDiv = 1.0f / (0.499f * 0.5f);
        // reference
        //    return (In - 32767.0f / 65535.0f) / (0.499f * 0.5f);
        // MAD layout to help compiler
        return In * InvDiv - 32767.0f / 65535.0f * InvDiv;
    }

  2. For opposite directions of camera motion (into the screen vs. out of the screen) the direction of the motion vectors is identical (e.g. the HSV hues are in the same positions in both the BACKWARDS and FORWARDS attached images). If these motion vectors were correct, the hues of one should be rotated 180 degrees relative to the other - e.g. blue would be where yellow is, pink would be where cyan/green is, and so on.

For reference - here is a correct backwards motion vector image (Captured from our similar pipeline in Unity3D):


I have gone into further detail regarding my exploration of the velocity texture data issues on this forum thread: SceneTexture:Velocity Data Integrity Issues - Rendering - Unreal Engine Forums

So, what is the question?

Does anyone know if this is an actual issue in the Engine, or if I need to do something else with the SceneTexture:Velocity data I get (e.g. combine it with camera velocity)? In other words: am I doing something wrong, or is this expected / known behavior of the Engine?

Digging deeper, I’ve found that the code that computes scene velocity (used when the velocity stored in the texture for moving objects is 0) appears to be incorrect. The code run in that case to compute the velocity of static objects w.r.t. camera motion is this:

        float4 ThisClip = float4( UV, Depth, 1 );
        float4 PrevClip = mul( ThisClip, View.ClipToPrevClip );
        float2 PrevScreen = PrevClip.xy / PrevClip.w;
        Velocity = UV - PrevScreen;

Running that code in a Custom material node produces incorrect output like the values I posted above. The code I believe they should be running instead (adapted from Chapter 27 of GPU Gems 3) is this:

// Inputs to our custom node
float2 texCoord = UV;
float  depth = Depth;
float3 WorldPosition = AbsoluteWorldPosition;

// Adapted from GPU Gems 3, Chapter 27 - "Motion Blur as a Post-Processing Effect":

// Get the depth buffer value at this pixel.
float zOverW = depth;
// H is the viewport position at this pixel in the range -1 to 1.
float4 H = float4(texCoord.x * 2 - 1, (1 - texCoord.y) * 2 - 1, zOverW, 1);
// make homogeneous coords for world position
float4 worldPos = float4(WorldPosition, 1.0);

// Current viewport position
float4 currentPos = H;
// Use the world position, and transform by the previous view-projection matrix.
float4 previousPos = mul(worldPos, View.PrevViewProj);
// Convert to nonhomogeneous coordinates in [-1, 1] by dividing by w.
previousPos /= previousPos.w;
// Use this frame's position and last frame's to compute the pixel velocity.
float2 velocity = (currentPos.xy - previousPos.xy) / 2.f;

// return depth / 100.0 to convert from centimeters (UE4 units) to meters
return float4(depth / 100.0, velocity.x, velocity.y, 1.0);

Note: the main thing they seem to be doing incorrectly is converting between the clip spaces of the two cameras when they should in fact be converting between the view spaces of the two cameras.

As can be seen in the discussion on this thread (specifically, this post) where I’ve been tracking my investigation into this issue, changing to this code in a custom node produces the correct velocity / motion vectors for the pixels in the scene. As I don’t see a ViewToPrevView function in FViewUniformShaderParameters, I will see what modifications to the engine are required to

  1. add ViewToPrevView convenience matrix, and
  2. update their velocity calculation code to produce correct data for the velocity texture.
