Alter the scene dynamically in the viewport, take a high-resolution screenshot, and store it to disk. This actually works, except that we’re unable to properly “wait” for the task to complete before queuing another task. We have a question out as to how to do this (if possible) from Python. Doing them in a loop causes the prior tasks for that operation to be overwritten, so we only get a single image output in the end.
The reason for this is that the high resolution screenshot tool is currently implemented as a ‘global’ (since it can be triggered via console commands). On each tick in the engine, at a particular point in the frame, it checks whether someone has requested a global screenshot; if so, it reads the settings and executes the screenshot. There is neither a queue nor multiple instances of it, which is why the last request overwrites the rest. You could try splitting the work over multiple frames (one render per frame) and just continually re-request the global screenshot, one per frame.
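One way to sketch that “one request per frame” pattern is a small queue that is ticked once per frame and issues the next request only then. The queue itself is plain Python; the editor wiring below is an assumption based on the editor Python API (`unreal.register_slate_post_tick_callback` and `unreal.AutomationLibrary.take_high_res_screenshot`), so verify those names against your engine version:

```python
class ScreenshotQueue:
    """Issues at most one (global) screenshot request per engine tick."""

    def __init__(self, take_shot):
        # take_shot(filename) performs the actual request, e.g. a wrapper
        # around unreal.AutomationLibrary.take_high_res_screenshot.
        self._take_shot = take_shot
        self._pending = []

    def enqueue(self, filename):
        self._pending.append(filename)

    def tick(self, _delta_seconds=0.0):
        # Called once per frame: the next request goes out only after the
        # previous frame has had its chance to consume the prior one.
        if self._pending:
            self._take_shot(self._pending.pop(0))

    @property
    def done(self):
        return not self._pending


def run_in_editor(filenames):
    # Editor-only wiring; all unreal.* names here are assumptions to check.
    import unreal  # only available when running under the Unreal Editor

    def take_shot(name):
        unreal.AutomationLibrary.take_high_res_screenshot(1920, 1080, name)

    queue = ScreenshotQueue(take_shot)
    for name in filenames:
        queue.enqueue(name)
    handle = unreal.register_slate_post_tick_callback(queue.tick)
    # Unregister the callback once queue.done becomes True:
    #   unreal.unregister_slate_post_tick_callback(handle)
    return handle
```

Because each request is only issued on a later tick, the global screenshot slot is never overwritten before it has been consumed.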
Alternatively, we could automate the Sequencer: place the objects onto the sequencer and, upon each revision, insert a keyframe and continue.
This isn’t a bad approach (it’s certainly much more consistent in terms of output resolution, etc.).
We’ve encountered two issues here. The first is that we can’t have each “image” be a single keyframe, as the cinematic creation process seems to want multiple frames to give us an accurate “shot”. Putting each unique scene on a single keyframe generates blur, light artifacts, and even some random elements showing up.
This is definitely a known issue. You can try making a Shot track, putting each combination in its own shot, and using per-shot warm-ups, but it’s not great. The first frame is usually problematic to render because there are no motion vectors and there is no temporal history (more on this later).
What we’ve seen is a need to have each unique scene generate 30 keyframes. Once exported, we grab the 29th frame from each one-second sequence; this seems to give us a good image. We know that decreases our overall production rate: we can render and export out of the Sequencer at around 68 fps, but each clear picture requires 30 frames, so we’re effectively at about 2 fps (more or less).
This is effectively what the per-shot warm-up settings do, but it is still flawed. The high-res screenshot tool is actually rendering 3 frames under the hood (and discarding their results) before rendering the last one and saving that one to disk. This gives anti-aliasing some time (frames) to fill its temporal history, and if nothing in the scene moves between the frames the motion vectors become zero, so you don’t end up with wild motion blur.
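That “render a few, keep the last” pattern is easy to express generically. Here is a minimal sketch, where `render_frame` and `save_frame` are hypothetical stand-ins for whatever your pipeline uses to draw and persist a frame; only the structure (N discarded warm-ups, then one kept frame) mirrors what the tool does:

```python
WARM_UP_FRAMES = 3  # the high-res screenshot tool's internal count

def capture_settled_frame(render_frame, save_frame, warm_ups=WARM_UP_FRAMES):
    """Render warm-up frames, discard them, and save only the settled frame."""
    # The scene must not change between these frames: motion vectors go to
    # zero and TAA's temporal history fills up.
    for _ in range(warm_ups):
        render_frame()  # result intentionally discarded
    # Only the final, settled frame is kept.
    save_frame(render_frame())
```

With 30-frame blocks you are doing the same thing, just with 29 warm-ups instead of 3, which is where the 68 fps → ~2 fps drop comes from.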
The bigger issue is that creating a sequencer and injecting all of the keyframes programmatically does not seem possible from the Python API; we’ve not found a way of doing this at that level in the documentation.
This should be (mostly) possible. Have you looked at the examples in /Engine/Plugins/MovieScene/SequencerScripting/Content/Python?
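As a rough starting point, a batch-keying script might look like the sketch below. The `unreal.*` calls (`add_possessable`, `MovieScene3DTransformTrack`, `add_section`, `get_channels`, `add_key`, etc.) are assumptions modelled on those shipped examples, so check them against your engine version; the frame layout is plain Python so it can be verified on its own:

```python
FRAMES_PER_VARIANT = 30  # hold each scene variant for one second at 30 fps

def variant_key_frames(num_variants, frames_per_variant=FRAMES_PER_VARIANT):
    # One key at the start of each block; holding the value for a whole
    # block gives TAA time to settle before the frame you keep.
    return [i * frames_per_variant for i in range(num_variants)]

def key_variants_in_editor(actor, values):
    # Editor-only; every unreal.* name here should be verified against the
    # SequencerScripting examples for your engine version.
    import unreal  # only available inside the Unreal Editor

    tools = unreal.AssetToolsHelpers.get_asset_tools()
    seq = tools.create_asset("BatchRender", "/Game/Sequences",
                             unreal.LevelSequence,
                             unreal.LevelSequenceFactoryNew())
    binding = seq.add_possessable(actor)
    track = binding.add_track(unreal.MovieScene3DTransformTrack)
    section = track.add_section()
    section.set_range(0, len(values) * FRAMES_PER_VARIANT)
    # First scripting channel is assumed to be Location.X per the examples.
    location_x = section.get_channels()[0]
    for frame, value in zip(variant_key_frames(len(values)), values):
        location_x.add_key(unreal.FrameNumber(frame), value)
    return seq
```

The same pattern (bind actor, add track, add section, key channels) extends to material parameters, visibility, and camera transforms.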
So my question is: can this be done effectively, and can we do it in Python or C++, or would we have to engineer an alternative approach to batch-render these custom scene iterations?
So the problem you’re trying to tackle is surprisingly complicated (and nuanced). Without knowing exactly what you’re trying to render: if you need every frame to be ‘unique’ from the last (different camera position, different materials, new objects, etc.), then you will always need to render some amount of “warm up” data to fill temporal history. The results of these warm-up frames don’t need to be saved, but they do need to be submitted to the GPU and rendered so that anti-aliasing has time to get the data it needs to anti-alias the edges.
Before embarking on creating your own, you may be interested in a new plugin coming in 4.25, an update to the existing Sequencer rendering designed to solve a lot of these problems. The new plugin (called Movie Render Pipeline) handles one-frame-long shots/camera cuts correctly (i.e. no extreme motion blur on the first frame) and has better control over warm-ups (such as rendering the warm-up frames and discarding them, which is faster than writing them to disk). It also lets you disable anti-aliasing and instead use n real samples, made by rendering the scene several times with a slightly offset camera position and adding the results together. This is slower than TAA (and is optional) but will produce real anti-aliasing without the artifacts you may get from TAA.
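The accumulation idea reduces to a very small piece of arithmetic. In this toy illustration, `sample_scene` is a hypothetical stand-in for one full render pass at a given sub-pixel camera jitter; the real plugin obviously does far more, but the averaging at the core looks like this:

```python
def accumulate(sample_scene, jitters):
    """Average several renders of the same frame taken at sub-pixel offsets."""
    # Each jitter is a small (dx, dy) camera offset; the n results are
    # summed and divided by the sample count to form the final pixel value.
    samples = [sample_scene(dx, dy) for dx, dy in jitters]
    return sum(samples) / len(samples)
```

Because each sample is a genuine render rather than a reprojection of history, the result has none of TAA’s ghosting, at the cost of n full passes per frame.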
The new plugin may not do exactly what you need, but if not, it might still be a starting point for writing your own. It is experimental and changing constantly, but it can be found in /Engine/Plugins/MovieScene/MovieRenderPipeline on the Dev-Editor branch on GitHub if you would like to look at it before its eventual experimental release.