New Cinematic Motion Blur Method

Hello Devs!

It’s been a little while since I had something worth sharing, but I finally had the chance to test out a new motion blur method I’ve been kicking around.

The latest version of motion blur in Unreal is a marvel, and it works extremely well for the cost of implementing it. Motion-vector-based motion blur is also used in the CG industry to save render time; it is a tried and true method.

Part of my day job is pushing real-time rendering for production. I am constantly asked for examples of why Unreal makes sense for production work, a topic I would gladly discuss; the list goes on and on. Motion blur is the perfect candidate to add to that list. Unreal has the power to render very quickly, so in theory we should be able to render multiple frames to build a single image. Temporal effects work along similar lines, but we can go deeper… if our focus is exclusively real-time CG production. Meaning: this method is not intended for true real time. It is not efficient.

In the simplest terms, this method samples multiple frames over a period of time and adds them together to present a single frame. Motion blur is an artifact of photography: within the length of time it takes for a camera shutter to open, then close, subjects in the frame can move. Those rays of light move across the film or sensor and you get a blur. What we can do in Unreal is sample snippets of time between the shutter opening and closing and add them up into one image.
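To make the timing concrete, here is a minimal sketch of the sub-sample math in plain C++ (the frame rate, shutter angle, and sample count are assumed inputs). A 180 degree shutter at 24 fps keeps the shutter open for 1/48 of a second, and the sub-samples are spread evenly across that window:

```cpp
#include <cstdio>
#include <vector>

// Sub-sample times across the shutter-open window for one output frame.
// Exposure = (shutter angle / 360) * frame interval, e.g. 180 deg @ 24 fps = 1/48 s.
std::vector<double> SubSampleTimes(double FrameStart, double Fps,
                                   double ShutterAngleDeg, int NumSubFrames)
{
    const double Exposure = (ShutterAngleDeg / 360.0) / Fps;
    std::vector<double> Times;
    for (int i = 0; i < NumSubFrames; ++i)
        Times.push_back(FrameStart + (i + 0.5) / NumSubFrames * Exposure);
    return Times;
}

int main()
{
    // 180 degree shutter at 24 fps with 8 sub-frames
    for (double t : SubSampleTimes(0.0, 24.0, 180.0, 8))
        std::printf("%f\n", t);
}
```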

The methodology is extremely simple, as was the writing of this tool. All we have to do is sub-sample a defined window of time before or after the target frame, and we get beautiful, accurate motion blur. Currently my tool is in its simplest form: it averages all the sub-frames in a ping-ponging render target. There is a lot more going on in film which we can also simulate.
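A bare-bones sketch of that accumulation, written in plain C++ over float buffers rather than actual render targets (RenderSubFrame is a hypothetical stand-in for the scene capture):

```cpp
#include <utility>
#include <vector>

using Image = std::vector<float>; // one float per pixel, for brevity

// Hypothetical stand-in for capturing the scene at sub-frame i;
// here it returns a flat gray image just so the sketch compiles.
Image RenderSubFrame(int /*SubFrame*/, int PixelCount)
{
    return Image(PixelCount, 0.5f);
}

// Running average over N sub-frames using two buffers that swap roles:
// each pass reads the previous average and writes the updated one.
Image AccumulateMotionBlur(int NumSubFrames, int PixelCount)
{
    Image BufA(PixelCount, 0.f), BufB(PixelCount, 0.f);
    Image* Read = &BufA;
    Image* Write = &BufB;

    for (int i = 1; i <= NumSubFrames; ++i)
    {
        const Image Sample = RenderSubFrame(i, PixelCount);
        // Incremental mean: avg_i = avg_(i-1) + (sample - avg_(i-1)) / i
        for (int p = 0; p < PixelCount; ++p)
            (*Write)[p] = (*Read)[p] + (Sample[p] - (*Read)[p]) / float(i);
        std::swap(Read, Write); // ping-pong: last written becomes next read
    }
    return *Read; // the averaged, motion blurred frame
}
```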

For example: the value of light sampled by a film camera isn’t linear over the length of the exposure. The aperture opens over time, then closes over time, so motion blur is not linear in the same way. The beginning of the sampling should carry a lower weight in the average. Technically, it should be out of focus as well. I’ll be experimenting with these ideas if time permits, but I would encourage anyone who knows more about photography/cinematography to give it a shot!
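As a rough sketch, the uniform average would become a normalized weighted sum, with weights $w_i$ sampled from some shutter-efficiency curve (which curve to use is the open question):

$$\mathrm{blur}(p) = \frac{\sum_{i=1}^{N} w_i \, C_i(p)}{\sum_{i=1}^{N} w_i}$$

where $C_i(p)$ is the color of pixel $p$ in sub-frame $i$.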

If you are using Unreal Engine for film, TV, or commercial work, you will be floored by how good motion blur can look, and by just how quickly we can generate it.

Now for some examples:

And a clip of a rotating object with the traditional motion vector motion blur:

Here are two stills for a better look. The first image is from the beginning of the animation, where the wheel is hardly spinning. The second image is from the middle, where the wheel is spinning at roughly 30 mph. This was captured with a 180 degree shutter angle.

If you are interested in visualizing how motion blur works, check out this page: https://support.solidangle.com/displ…UG/Motion+Blur
https://bit.ly/2QItvps

Hi,
This looks really cool.
For some weird reason the SolidAngle link you provided is not working.
And is it possible for you to share the tool? I would love to use it to get the right motion blur.

Sorry about the broken link; I don’t know what’s going on with the post edits. I’m back on Firefox trying to get it to work. I added a bit.ly link under the original.

I wish I could share the tool as it is, but unfortunately I cannot. I am going to try and build a tutorial soon and discuss how to build a bare-bones version. It’ll include the fundamental process and approach I took.

In the meantime, take a look at the Content Examples package for the 2D water displacement. It also uses ping-pong render targets; the effect is different but the method is similar.

I’m going to try and keep updating this post as the tool evolves. The latest update works with multiple camera setups, and the subway sequence is a good candidate for testing. This video shows the sequence recorded at a 180 degree shutter angle with 50 levels of sub-sampling. The results look good! The render was not using most of the post processing, so all the blur in the video is coming from the motion blur. I will do a few comparison shots tomorrow, but here is the video.

Game speed needs to be adjusted when sampling the motion blur, as material effects and particles that update on tick run too quickly otherwise. Tappy Chicken is just a blur, and some of the fog effects are spinning like a top.
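My tool handles this in Blueprints, but the equivalent C++ call is simple. A minimal sketch, assuming N sub-frames per output frame:

```cpp
#include "Kismet/GameplayStatics.h"

// With N sub-frames per output frame, slow world time so anything driven by
// DeltaSeconds (materials, particles) advances one sub-frame's worth of time
// per rendered sub-frame instead of a full frame's worth.
void BeginSubSampledCapture(UObject* WorldContext, int32 NumSubFrames)
{
    UGameplayStatics::SetGlobalTimeDilation(WorldContext, 1.0f / NumSubFrames);
}

void EndSubSampledCapture(UObject* WorldContext)
{
    UGameplayStatics::SetGlobalTimeDilation(WorldContext, 1.0f);
}
```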

This is a side-by-side comparison of the standard version of motion blur vs my own method. This scene holds up well a majority of the time under standard motion blur, but there are a couple of artifacts which are slowed down at the end of the video.

Very cool stuff! I feel that with the updated DOF and Convolution Bloom options, motion blur is one of the only big areas that could still use a revamp for high-end cinematics. The current implementation is great, but it tends to produce smeary artifacts that bleed light and can look bad in certain situations (I’m sure this is due to the fact that the implementation has to run extremely fast, so it’s already very impressive). I wonder if it’s possible to do what they did with the new DOF system and get an essentially pixel-perfect solution while still being cheap (I can’t believe how good the new DOF looks). From the image of the spinning wheel above, your implementation looks really clean; hopefully Epic takes notice and perhaps works with you on a future upgrade to the motion blur in the engine.

Thanks! The new DOF is incredible, and it is making it so easy to frame a good shot. It makes you wonder what other improvements are just waiting to be developed.

The tricky thing with Epic’s motion blur is reproducing pixels which are occluded by the moving subject. If the temporal buffer has no pixel history, you get that blocky smearing. In my experience, if Epic cannot make a new method like this practical at real-time frame rates, they won’t invest the time. Hopefully that’ll change as more production companies get on board, but my method is way, way too inefficient to be considered for now.

Multi-sub-frame motion blending might be feasible one day. If you compared the motion vector data over a period of time, you could reduce the number of sub-frames you need. Still very inefficient, though, and the current implementation is so good already.

Something interesting which popped up recently is Nvidia’s DLSS toolset. Apparently there are reports that DLSS is helping fix temporal artifacts by creating a kernel of “the way your game looks.” It requires devs to submit their games to a server to get crunched, but the potential is very interesting.

I think AI might solve our motion blur issues for good. Probably a lot of others too. It’s a good field to invest in.

MotionCompare.JPG

Wow, that is incredible. Those artifacts around the arm and hand are exactly the kind of thing that I feel lets the current system down for cinematics: in motion you don’t see them, but when freezing frames they’re extremely obvious, and for videos/films they would be unacceptable. I hope at some point in the future they’ll implement something like your method, even if it is expensive. As long as it’s a switch you can flip at render time, like the convolution bloom, it could be one of the key factors that takes an image from “almost good enough” to pixel perfect. We will see what happens when the RTX stuff rolls in; I imagine they’ll be chasing parity with offline renderers, so they might just implement something! Great work again, and thanks for the explanation!

This looks amazing, but not for real-time rendering. The budget for motion blur is 5-10% at most, and this method’s cost is a multiplier based on the number of sub-frames: if you need 8 sub-frames, the cost is +700%.
With added pixel jitter this would also work as a super-resolution filter.

The performance cost is nuts. I see diminishing returns when the sub-frame samples hit 200 or so, but there are some examples where 200 samples look best. I just put the finishing touches on an optimization where I dump the remaining frame time once the motion blur sampling is finished, but a single frame still takes 8 seconds to render. That said, running with as few as 10 samples can work in some cases, and the tool allows the user to change the sample count per frame.

I think the tool is something I wouldn’t run until I am nearly finished with a project: an extra layer of polish which doesn’t need to be seen until the final passes.

I definitely need to look into pixel jitter. My tool does a good job of wiping out temporal artifacts like TAA ghosting and SSR ghosting, but so much of the extra pixel data is lost, or not being utilized. Any idea if the traditional jitter runs in 2D screen space, or if it uses a camera-space offset?

Finally, world time dilation is running properly. This means that FX will run correctly without any adjustment, although physics will not behave properly after setting a custom world time dilation. Give and take. I am also dumping the remainder of the frame after I have finished collecting my motion blur data. This turned out to be a little tricky, since FX have to be updated at a similar pace.

Motion Vector Fields are not running properly: they are updating on the GPU whenever my render targets are updated. It seems like a bug which could be fixed if the vector fields obeyed game time.

The next task is to give the motion blur samples shape by adjusting the exposure of the sub-samples against a bell curve, essentially mimicking the way a camera’s aperture opens.

Hi Stimpanzee,

Is this tool, or some sort of preview, available to try out in Unreal Engine? This is looking great, by the way! I’m currently working on some VFX-related work, and it would be great to get proper motion blur on my propellers. I’m definitely willing to pay some money for a solution like this as well.

Erik

Hey Erik, this is a custom tool I wrote entirely in Blueprints. I’m crazy swamped with work right now, so I had to put this project on the back burner. Hopefully in a couple of weeks I’ll be able to revisit it and put together a tutorial.

Sounds a lot like the good old accumulation buffer. If that is the case, one can quite easily use those samples for additional AA and DoF as well (like the Gran Turismo series does in photo mode).

A tutorial on this would be great. :slight_smile:

Accumulation Buffer!! That’s exactly what it is, but I never had a good term for it. I’m really looking forward to 4.22 dropping to see how the system handles ray tracing samples. I definitely need to look into adding the micro camera jitter; I believe I could even just use a camera shake component.
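My rough mental model of the per-sample offsets looks something like this (plain C++, purely illustrative; the names and the aperture model are made up): sub-pixel jitter would buy AA, and a small offset across the lens disk, re-aimed at the focal point, would buy DoF.

```cpp
#include <cmath>
#include <random>

// Illustrative per-sample camera perturbations for an accumulation buffer.
struct SampleOffsets
{
    float PixelX, PixelY; // sub-pixel jitter in pixel units (AA)
    float LensX, LensY;   // offset on the aperture disk in world units (DoF)
};

SampleOffsets MakeSampleOffsets(float ApertureRadius, std::mt19937& Rng)
{
    std::uniform_real_distribution<float> U(0.f, 1.f);

    // Uniform point on the aperture disk via polar sampling
    const float r = ApertureRadius * std::sqrt(U(Rng));
    const float theta = 6.2831853f * U(Rng);

    return { U(Rng) - 0.5f, U(Rng) - 0.5f,
             r * std::cos(theta), r * std::sin(theta) };
}

// Per sub-frame: shift the projection by (PixelX, PixelY), move the camera by
// (LensX, LensY) in its local XY plane and re-aim it at the focal point, then
// render and accumulate as usual.
```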

Sorry the wait has been so long! I’m becoming notorious for that. I just got off a couple of projects at Psyop which were consuming my life. Starting next week I will get to work on a tutorial. I will do everything I can to get it out before the end of the year!

Another update which I was unable to post earlier: weighted motion blur! The system now accumulates the frame (thanks, Pottuvoi) with a weighted value, which is sampled from an editable curve; there is a rough sketch of the weighting step at the end of this post.

Having the ability to shape the motion blur with a curve gives artists an enormous amount of control over how the motion blur looks. In the latest example the sparks at the very beginning of the video were replaced with simple circles, simulating real sparks. The weighted arc visual that you see is the effect of the motion blur doing its thing.

Traditionally this particle system uses shaped particles which change direction based on velocity. Now we are no longer emulating the effect; we are capturing it!
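For anyone curious, the weighting step boils down to something like this. A sketch only; the real tool does this in Blueprints, and the function names here are stand-ins:

```cpp
#include "Curves/CurveFloat.h"

// Weight for sub-frame i of N, sampled from an artist-editable curve whose
// time axis spans the shutter-open interval on [0, 1].
float GetSubFrameWeight(const UCurveFloat* WeightCurve, int32 SubFrame, int32 NumSubFrames)
{
    const float ShutterPos = (SubFrame + 0.5f) / NumSubFrames;
    return WeightCurve->GetFloatValue(ShutterPos);
}

// The accumulation then scales each sample by its weight and normalizes by
// the weight total instead of taking a straight running average:
//   Accum     += Weight * Sample;
//   WeightSum += Weight;
//   FinalPixel = Accum / WeightSum;
```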

That’s great :slight_smile:

Hi Stimpanzee,

I have sent you a private message; it would be great if you could reply when you get a chance. Thanks a lot.