I’ve been exploring the use of UE4 for realtime VFX. I’ve managed to get relatively acceptable results when attempting to convert my older, offline rendered scenes. This is one of the tests:
However, it’s almost a miracle I got this far given how many issues I had to overcome. The workflow (especially Sequencer) was so difficult that it ended up being less realtime and less interactive than using a traditional offline DCC like 3ds Max with an offline renderer like V-Ray, especially on the WYSIWYG front. That being said, I hope most of these difficulties are consequences of my lack of understanding of proper UE4 workflows rather than of the UE4 workflows themselves being worse. So I’d like to ask a bunch of questions that will hopefully make using UE4 for this task a lot less painful.
1, Every single time I save my scene, render a Sequencer movie, or perform one of dozens of other actions, UE ejects my viewport from the cinematic camera I am working with back to the Perspective view. I have to go to the view menu and switch back to my cinematic camera about 30-60 times per hour when working on a sequence, which makes working very tedious. What am I doing wrong?
2, I failed to find a simple loop button in Sequencer. I surely must be missing something. If I want to play a shot in a loop to take a good look at it, I have to keep manually pressing spacebar until my hand falls off. There must be a better way to do that.
3, To achieve certain cinematic qualities, I had to rely on high-quality effects like DFAO and Volumetric Fog. Unfortunately, both of these use temporal reprojection. For fast-moving objects like the car above, that means the car leaves a dark, fading trail of DFAO and bright, fading streaks of volumetric spotlight ghosting behind it. My rendertimes are so low that the file output renders pretty much as fast as it plays back, so I would not care even if the rendertime were 10x slower.
My question is: is there any way to warm up each frame? There are settings for warming up before the shot, but not before each individual frame. The DFAO and Volumetric Fog ghosting is a really big issue; if this were a commercial project and the client spotted those trails, I’d be pretty much screwed, as there would be nothing I could do about it.
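To show what I mean by the ghosting, here is a toy 1-D model of temporal blending (plain C++, not engine code; the blend weight is made up for illustration):

```cpp
#include <vector>

// Toy model of temporal reprojection: each frame, the effect blends
// the freshly computed result with last frame's history buffer.
// A bright 1-pixel "object" moving one pixel per frame leaves a
// decaying trail behind it -- the ghosting I see on the car.
std::vector<double> SimulateTrail(int width, int frames, double alpha) {
    std::vector<double> history(width, 0.0);
    for (int frame = 0; frame < frames; ++frame) {
        for (int x = 0; x < width; ++x) {
            double current = (x == frame) ? 1.0 : 0.0;  // moving object
            // Small weight on the current frame, large weight on the
            // accumulated history -- stable when static, smears when moving.
            history[x] = alpha * current + (1.0 - alpha) * history[x];
        }
    }
    return history;
}
```

With a small alpha, pixels the object passed several frames ago still hold a decaying fraction of its brightness, which is exactly the trail behind the car.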
4, In some older Sequencer videos on YouTube, I saw that when someone stopped Sequencer playback on a dynamic frame, the motion blur stayed in the frame. I failed to find a way to achieve that. In my case, I can see the motion blur while I am scrubbing or playing back the sequence, but if I stop on a particular frame, the motion blur is gone. How do I turn that on?
5, This was the biggest issue, the one which broke the realtime, interactive workflow for me. I had a car rig which adheres the car wheels to the ground, and I have absolutely no idea how to make that rig refresh with Sequencer. It is a macro, triggered by a construction script, which traces 4 lines from the wheel centers down to the ground, adheres the wheels to the ground, and then positions the car chassis based on those wheel positions.
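For clarity, here is roughly what the macro computes, written as plain self-contained C++ math (the `GroundHeight` function is a made-up stand-in for the actual downward line trace):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Stand-in for a downward line trace: ground height under (x, y).
// (In the real rig this is a line trace from the wheel center down.)
double GroundHeight(double x, double /*y*/) {
    return 0.05 * x;  // a gentle slope, just for the example
}

struct ChassisPose { Vec3 position; double pitchRad; double rollRad; };

// wheels: FL, FR, RL, RR wheel-center positions in world space.
// Snaps each wheel to the ground, then derives the chassis position
// and pitch/roll from the four contact heights.
ChassisPose SnapToGround(Vec3 wheels[4], double wheelBase, double track) {
    double h[4];
    for (int i = 0; i < 4; ++i) {
        h[i] = GroundHeight(wheels[i].x, wheels[i].y);  // "trace" down
        wheels[i].z = h[i];                             // adhere wheel
    }
    ChassisPose pose;
    pose.position = { (wheels[0].x + wheels[3].x) / 2.0,
                      (wheels[0].y + wheels[3].y) / 2.0,
                      (h[0] + h[1] + h[2] + h[3]) / 4.0 };
    // Pitch from front/rear height difference, roll from left/right.
    double front = (h[0] + h[1]) / 2.0, rear  = (h[2] + h[3]) / 2.0;
    double left  = (h[0] + h[2]) / 2.0, right = (h[1] + h[3]) / 2.0;
    pose.pitchRad = std::atan2(front - rear, wheelBase);
    pose.rollRad  = std::atan2(right - left, track);
    return pose;
}
```

The whole rig is that simple; the problem is purely *when* UE4 runs it, as described below.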
A, When I animated the car position using Sequencer, the construction script did not re-run the macro, so the car was flying above the ground.
B, When I enabled “Run construction script in sequencer”, the construction script ran every Sequencer frame, but in a way that effectively re-spawned the car each frame as a static object. That ruined the motion blur on the car: since the car appeared stationary on every frame, it received the same camera blur as the background.
C, I found out that Event Tick actually is run by Sequencer, but ONLY while you are rendering the sequence out to a movie (to disk). When I plugged my car rig macro into the tick, it worked, but that meant I couldn’t actually work on my sequence in realtime. I always had to make some blind tweaks, quickly render to AVI, and review whether it worked. That’s the opposite of an interactive workflow. On top of that, the events run by the “Render this movie to a video” button behaved a bit differently than when run by “Play”.
I ended up with 3 different behaviors: one in Sequencer playback, one when running the game via “Play”, and one when rendering the movie to a video. I had no idea which one to rely on. That is how I had to finish these shots: I made blind changes, then rendered to a file to verify what actually happened.
So my question is: what’s the correct way to have Sequencer update an actor function, so that what I preview in realtime is exactly what will be rendered? I found out I can add an event track, and that it even has some repeater feature, but I was unable to get it working, even after half a day of studying the Sequencer documentation. The events passed from the event track through the sequence director did do something, but the results were completely different between Sequencer playback in the viewport and rendering to a file.
I know it’s a wall of text, but I will be very thankful to anyone who knows the answer to at least one of these questions.