Basically, objects in motion would still be blurred, but camera movements (translation/rotation) wouldn’t cause the whole screen to blur.
Currently, disabling motion blur on the camera (or post-process volume) disables motion blur entirely.
Camera motion blur (want to disable)
According to this examples page, it’s possible to at least get a debug view of the game’s motion vectors using the console command ShowFlag.VisualizeMotionBlur 1.
These motion vectors move when the camera moves around the scene, so is it even possible to prevent these vectors from affecting the final motion-blurred render?
In the SceneView.h header file, there’s an FSceneView class that contains a bCameraMotionBlur boolean.
class ENGINE_API FSceneView
{
public:
...
/** Whether to enable motion blur caused by camera movements */
TOptional<bool> bCameraMotionBlur;
It does appear to be used by a few motion-blur-related classes; however, it doesn’t appear to be exposed in the nDisplay plugin, nor available by default to engine users.
Interesting, thanks for the details! I guess I don’t fully understand what PreviousFrameSwitch is doing in this context. Are the first two screenshots just viewing the material applied to an object in the editor?
PreviousFrameSwitch is used to get the correct motion vectors when using World Position Offset (so you can have correct motion blur with vertex animation, for example). In the material screenshot, it’s telling the renderer the vertex has moved 20 units forward, which gives it constant motion blur even when it’s not moving. The first screenshot just shows that in action, and the second screenshot uses compensation for the camera motion. Both are Play-In-Editor.
I’ve gotten much closer to what I’m after by setting the bCameraMotionBlur flag to false in a custom ViewportClient. Unfortunately, this has the side effect of blurring first-person objects (such as the arms/gun).
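For anyone following along, here’s a minimal sketch of one way to reach that flag. FSceneView is constructed in ULocalPlayer::CalcSceneView, so a local player subclass works too; the class name is hypothetical, and the exact CalcSceneView signature varies between engine versions (this one matches UE5):

#include "CoreMinimal.h"
#include "Engine/LocalPlayer.h"
#include "SceneView.h"
#include "MyLocalPlayer.generated.h"

UCLASS()
class UMyLocalPlayer : public ULocalPlayer
{
    GENERATED_BODY()

public:
    virtual FSceneView* CalcSceneView(FSceneViewFamily* ViewFamily,
        FVector& OutViewLocation, FRotator& OutViewRotation,
        FViewport* Viewport, FViewElementDrawer* ViewDrawer = nullptr,
        int32 StereoViewIndex = INDEX_NONE) override
    {
        FSceneView* View = Super::CalcSceneView(ViewFamily, OutViewLocation,
            OutViewRotation, Viewport, ViewDrawer, StereoViewIndex);
        if (View)
        {
            // Suppress only the blur contributed by camera movement;
            // per-object motion blur is left alone.
            View->bCameraMotionBlur = false;
        }
        return View;
    }
};

The subclass can then be assigned as the Local Player Class under Project Settings > Engine > General Settings.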
I imagine I should be able to use what @midgunner66 posted above to compensate for the camera motion at this point. The motion blur shader appears to compensate for camera movement across the entire screen, so perhaps adding the camera’s velocity to the first-person object’s material might be enough?
EDIT: Though it would be nice if the shader could automatically compensate for certain flagged attached-to-camera objects instead.
EDIT2: Alternatively, I could investigate if there’s a way to render out first-person details (hands/accessories) on a separate layer or render target, and then comp that on top of the previous render layer.
Is LastLocation the camera’s previous location? Is that built in, or are you keeping a reference and updating the material? Sorry, I’m not great at material blueprints.
EDIT: Fixed. All of these tests were done against static objects, not movable ones.
Wow, I must be doing something wrong. I picked up 4.27 just to see if there was something janky with 5, but I can’t get PreviousFrameSwitch to work at all.
Motion blur is enabled.
The debug view shows the material’s motion, but not motion from moving the object in the editor. Also, neither the material editor nor the viewport shows the material blurring.
Alright, the first problem was because I was testing on static meshes. As per this video, the object being tested should be marked as ‘Movable’.
The second problem involves my attempts at getting the last-location compensation to work. I’ve got a parameterized vector (called LastLocation) that I’m updating every tick to be the camera location of the previous frame (storing the current camera location at the end of the tick). Although the motion according to the visualization overlay appears to be compensating, the final blur still looks strange. I’ll need to investigate this further.
Tracking the compensation: the sphere is still blurred
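For reference, here’s a rough sketch of the per-tick update described above, assuming the parameter lives in a MaterialParameterCollection (the actor name is hypothetical, LastLocationCollection and PreviousCameraLocation are assumed members, and null checks are omitted for brevity):

#include "Kismet/KismetMaterialLibrary.h"
#include "Materials/MaterialParameterCollection.h"

void ALastLocationTracker::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    const FVector CameraLocation = GetWorld()->GetFirstPlayerController()
        ->PlayerCameraManager->GetCameraLocation();

    // Push the location captured at the end of the previous tick into the
    // "LastLocation" vector parameter the material reads.
    UKismetMaterialLibrary::SetVectorParameterValue(
        this, LastLocationCollection, TEXT("LastLocation"),
        FLinearColor(PreviousCameraLocation));

    // Store the current location for next frame's update.
    PreviousCameraLocation = CameraLocation;
}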
For that, I just use the location of the current (game) frame (I’m guessing it’s already one frame behind):
Good to see you found that. I don’t know if the camera-compensation idea will fully work, though. It works for linear camera motion, but when the camera rotates, it still blurs. I think that’s because, with rotation, the vectors differ across different points on the screen, which the compensation doesn’t take into account. Maybe it’s possible to account for that?
It might be possible. I found a node called RotateAboutAxis that let me test whether it’s possible to feed both a translation and a rotation into the PreviousFrameSwitch node, and at least for simple examples, it seems to work.
Try to rotate and translate in a single switch
Either that, or I’m misunderstanding the inputs and outputs of these nodes. I was thinking about creating a MaterialParameterCollection containing pre-computed translation and rotation values for the current frame and the previous frame, and then updating that globally.
Again, I’m not entirely certain what structures are being used as inputs/outputs; maybe matrices? In the case of PreviousFrameSwitch, it has to be more than just a single vector.
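To make that idea concrete, this is the CPU-side math I have in mind for those pre-computed values (a sketch only; the function name, pivot choice, and compose order are assumptions, and the material graph would need the equivalent RotateAboutAxis wiring):

#include "CoreMinimal.h"

// Where a world-space point was on the previous frame, given the camera's
// per-frame translation and rotation.
FVector PreviousFramePosition(const FVector& CurrentPos,
    const FVector& CameraDeltaTranslation, // camera translation this frame
    const FVector& RotationAxis,           // normalized rotation axis
    float RotationAngleDeg,                // degrees rotated this frame
    const FVector& PivotPoint)             // e.g. the camera location
{
    // Undo this frame's translation, then undo the rotation about the pivot.
    const FVector Untranslated = CurrentPos - CameraDeltaTranslation;
    return PivotPoint +
        (Untranslated - PivotPoint).RotateAngleAxis(-RotationAngleDeg, RotationAxis);
}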
There is a plugin on the Marketplace called Custom Motion Blur that allows you to add “cages” to specific elements to blur. Perhaps you can use that to blur those projectiles.
So the code in the engine that controls ‘canceling’ the motion blur (enabled via the bCameraMotionBlur flag) currently subtracts the camera motion from the current velocity. This has the same observable issue where objects close to the camera, moving in the same direction as the camera, appear to have negative motion blur (which is what I was observing in this post).
One workaround is to subtract the camera’s motion, but clamp the subtraction depending on the relative motion of the pixel being blurred.
I came up with the following inputs and outputs based on what I reasoned should be the correct observable result.
The values represent velocity components in a single dimension: the first number is the pixel’s velocity, the second is the camera velocity, the third is the result of subtracting the two, and the fourth is the value I would expect this function to clamp to.
After a bit of playing around with the min, max and step functions, I arrived at the following code.
// Line 56 of MotionBlurVelocityFlatten.usf
// Clamp the compensated velocity to [min(Velocity, 0), max(Velocity, 0)]:
// step(Velocity, 0) * Velocity keeps Velocity only when it's negative, and
// step(0, Velocity) * Velocity keeps it only when it's positive, so the
// subtraction can never flip the sign of the original velocity or exceed it.
Velocity = min(max(Velocity - CameraMotionVelocity, step(Velocity, 0) * Velocity), step(0, Velocity) * Velocity);
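To sanity-check the clamp, here’s a small standalone C++ mirror of that line with a few illustrative values (the numbers are made up; recall that step(a, b) in HLSL returns 1 when b >= a, else 0):

#include <algorithm>
#include <cstdio>

float Step(float A, float B) { return B >= A ? 1.0f : 0.0f; }

// Clamps (Velocity - CameraVelocity) to [min(Velocity, 0), max(Velocity, 0)],
// so the compensation can never flip the sign of, or amplify, the original velocity.
float CompensatedVelocity(float Velocity, float CameraVelocity)
{
    return std::min(std::max(Velocity - CameraVelocity, Step(Velocity, 0.0f) * Velocity),
                    Step(0.0f, Velocity) * Velocity);
}

int main()
{
    std::printf("%g\n", CompensatedVelocity(5.0f, 10.0f));  // 0, not -5 (no negative blur)
    std::printf("%g\n", CompensatedVelocity(10.0f, 5.0f));  // 5, the leftover velocity
    std::printf("%g\n", CompensatedVelocity(-5.0f, 10.0f)); // -5, capped at the original
}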
This result still isn’t perfect, but I don’t think that perfect per-object motion blur without camera motion blur is possible.
In the video there are rows of spheres moving at half, exactly, and twice the speed of the player. You can see there’s a small amount of motion blur on the half-speed spheres, but once the player starts moving to the right, all apparent motion blur is cancelled for the first and second rows of spheres.
Ok, digging further into what the cancel-motion-blur flag actually does, I’m beginning to think it’s bugged.
I was able to get access to the debug motion blur render targets via the vis command.
While the game is running, enter r.RDG.ImmediateMode 1 to make the Render Dependency Graph run in immediate mode. Then enter vis Debug.MotionBlur.Flatten to visualize the debug render target. I’m not 100% sure why you have to set these in this order, but if you don’t, VisRT won’t show the render target.
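So, to recap, the exact console sequence is:

r.RDG.ImmediateMode 1
vis Debug.MotionBlur.Flatten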
Hi Sineaggi, I’m trying to achieve what you did in this post (disabling camera blur but maintaining first-person blur). Could you tell me how you did this? I’m a noob when it comes to C++.