Batch editor automation of sequencer, HRSS, python

We’re developing an automation routine for the UE editor in Python (4.24.1). Our requirement is to generate individual PNG image frames from a scene that we are manipulating in script. The number of combinations runs into the thousands for our use case, which is why we’ve been trying to leverage UE’s real-time GPU rendering.

We’ve been approaching this from two directions…

  1. Alter the scene dynamically in the viewport, take a high-resolution screenshot, and store it to disk. This actually works, except that we’re unable to properly “wait” for the task to complete before queuing another. We have an open question as to how to do this (if it’s possible) from Python. Running the captures in a loop causes each prior task to be overwritten, so we only get a single image output in the end.

  2. Alternatively, we could automate the sequencer: place the objects onto the sequencer, insert a keyframe upon each revision, and continue. Upon completion, we’d export the frames to individual images. We’ve encountered two issues here. The first is that we can’t have each “image” be a single keyframe, as the cinematic creation process seems to want multiple frames to give us an accurate “shot”. Putting each unique scene on a single keyframe generates blurs, light artifacts, and even some random elements showing up. What we’ve seen is a need to have each unique scene generate 30 keyframes; once exported, we grab the 29th frame from each one-second sequence, which seems to give us a good image. That does decrease our overall production rate: we can render and export out of the sequencer at around 68 FPS, but since each clear picture requires 30 frames, we’re effectively at about 2 FPS. The bigger issue is that creating a sequencer and injecting all of the keyframes programmatically does not seem possible from the Python API; we’ve not found a way of doing this at that level in the documentation.

So my question is: can this be done effectively in Python or C++, or would we have to engineer an alternative approach to batch-render these custom scene iterations?

You can also explore the NVIDIA Ansel Photography plugin, which has both a Blueprint API and a few configurable rendering settings that might help with the blur and artifact issues you’re facing (specifically the r.Photography.SettleFrames and r.Photography.AutoPostprocess console variables). You can find several tutorials online about using Ansel in UE4 (here’s one). Note that the latest GeForce Experience version seems to have broken the Ansel plugin. I’m not sure if this has been fixed already, but if not you can install GeForce Experience 3.15.0.164-20137, which is known to be compatible as noted here. Hope that helps!
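Those two cvars can also be set from the editor’s Python console rather than Blueprint. A minimal sketch follows; the cvar names come from the post above, but the chosen values and the helper function are purely illustrative:

```python
def photography_cvar_commands(settle_frames=10, auto_postprocess=1):
    """Console commands for the Ansel photography cvars mentioned above.
    The values here are examples, not recommendations; check the Ansel
    documentation for what each setting actually does."""
    return [
        "r.Photography.SettleFrames %d" % settle_frames,
        "r.Photography.AutoPostprocess %d" % auto_postprocess,
    ]

def apply_in_editor(commands):
    """Editor-only: run each command through the UE console."""
    import unreal  # only importable inside the UE editor's Python
    world = unreal.EditorLevelLibrary.get_editor_world()
    for cmd in commands:
        unreal.SystemLibrary.execute_console_command(world, cmd)
```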

I appreciate the hint. I wonder if it’s scriptable, i.e., via C++ or Python through the UE API…
We don’t really want to use Blueprint; it would be a bit cumbersome for our needs.
Thanks for the info. We’ll continue to explore options.

I believe it’s scriptable through C++; provided you have access to the UE4 GitHub repository, you can find the API here and some docs.

Sorry, both of those links were 404s. :frowning:

You’ll need to be logged into your GitHub account and have it linked with your Epic account by accepting the Unreal Engine EULA and following the instructions here. That will give you access to the engine source code and allow you to view those files.

Thanks, we’ll take a look. However, another GitHub repo trying to work around some limitations stated this…

“As the Ansel api does not allow to trigger ‘snaps’ programmatically,”

So that might be a key limitation.

Is there no way to simply call the existing take_high_res_screenshot function via Python (which we’ve done successfully), yet wait for completion in a way that doesn’t block the editor?

My guess is that we may have to drop down into C++ and spin the task off as a thread so we can poll for completion, sleep between polls, and, upon completion, return to our main loop with some kind of status.

Has to be a way…

> Alter the scene dynamically in the viewport, perform a high res screen shot, and store to disk. This actually works, except we’re unable to properly “wait” for the task to complete before queuing another task. We have a question out as to how to do this (if possible) from python. Doing them in a loop causes the prior tasks for that operation to be overwritten so we only get a single image output in the end.

The reason for this is that the high resolution screenshot tool is currently implemented as a ‘global’ (since it can be triggered via console commands). What this means is that on each tick in the engine, at a particular point in the frame, it checks whether anyone has requested a global screenshot. If so, it reads the settings and executes the screenshot. There is no queue and no support for multiple instances, which is why the last request overwrites the rest. You could try splitting the work out over multiple frames (one render per frame) and simply re-request the global screenshot on each frame.
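A rough sketch of that one-request-per-frame idea from editor Python might look like the following. The `filename=` argument to the HighResShot console command is an assumption (it exists in newer engine versions), and the output directory is a placeholder:

```python
def shot_command(index, width=1920, height=1080, out_dir="C:/Shots"):
    # Console command for one capture. filename= is an assumption
    # (present in newer engine versions); out_dir is a placeholder path.
    return 'HighResShot %dx%d filename="%s/scene_%04d.png"' % (
        width, height, out_dir, index)

def capture_variants(num_variants, apply_variant):
    """Editor-only driver: on each Slate pre-tick, apply the next scene
    variant and re-request the global screenshot, one capture per frame."""
    import unreal  # only importable inside the UE editor's Python
    state = {"i": 0, "handle": None}

    def on_tick(delta_seconds):
        if state["i"] >= num_variants:
            unreal.unregister_slate_pre_tick_callback(state["handle"])
            return
        apply_variant(state["i"])  # caller mutates the scene for this combination
        world = unreal.EditorLevelLibrary.get_editor_world()
        unreal.SystemLibrary.execute_console_command(world, shot_command(state["i"]))
        state["i"] += 1

    state["handle"] = unreal.register_slate_pre_tick_callback(on_tick)
```

Note this only guarantees one request per frame; it does not address the warm-up/temporal-history issue discussed below the sequencer approach.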

> Alternatively, we could automate the sequencer, place the objects onto the sequencer and upon each revision, insert a key frame, and continue.

This isn’t a bad approach (it’s certainly much more consistent in terms of output resolution, etc.).

> We’ve encountered two issues here - the first is that we can’t have each “image” be a single keyframe as the cinematic creation process seems to want multiple frames to give us an accurate “shot”. Putting each unique scene as a single keyframe generates blurs, light artifacts, and even some random elements showing up.

This is definitely a known issue. You can try making a Shot track, putting each combination in its own shot, and then using per-shot warm-ups, but it’s not great. The first frame is usually problematic to render because there are no motion vectors and no temporal history (more on this later).

> What we’ve seen is a need to have each unique scene generate 30 keyframes. Once exported, we grab the 29th frame from each 1 second sequence. This seems to give us a good image. Knowing that does decrease our overall production rate, as we can render and export out of the sequencer at around 68fps but it requires 30 frames for each clear picture, so we’re effectively at 2FPS (more or less).

This is effectively what per-shot warm-up time settings do, but it is still flawed. The actual high-res screenshot tool renders 3 frames under the hood (and discards their results) before rendering the last one and saving it to disk. This gives the renderer a few frames to fill the temporal history for anti-aliasing, and if nothing in the scene moves between frames the motion vectors become zero, so you don’t end up with wild motion blur.

> The bigger issue is that creating a sequencer and injecting all of the keyframes programmatically does not seem possible from the python API. We’ve not found in the documentation a way of doing this at that level.

This should be (mostly) possible. Have you looked at the examples in /Engine/Plugins/MovieScene/SequencerScripting/Content/Python?
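Along the lines of those examples, programmatic sequence creation and keying might be sketched as below. The `unreal.*` names follow the SequencerScripting plugin and should be double-checked against your engine version; the asset path, the selected-actor binding, and the keyed values are all placeholders. The pure helper at the top encodes the thread’s “grab the 29th frame of each one-second block” scheme:

```python
def capture_frame_for_variant(variant_index, frames_per_variant=30):
    """Which exported frame to keep for a given scene variant, when each
    variant occupies frames_per_variant frames and we keep the last one."""
    return variant_index * frames_per_variant + (frames_per_variant - 1)

def build_sequence(num_variants, frames_per_variant=30):
    """Editor-only sketch: create a LevelSequence asset and key one
    transform channel per variant. API names are from the 4.24
    SequencerScripting plugin; verify them before relying on this."""
    import unreal  # only importable inside the UE editor's Python
    tools = unreal.AssetToolsHelpers.get_asset_tools()
    seq = tools.create_asset("BatchCapture", "/Game/Sequences",
                             unreal.LevelSequence, unreal.LevelSequenceFactoryNew())
    total = num_variants * frames_per_variant
    seq.set_playback_end(total)
    # Placeholder: bind whatever actor you are mutating per variant.
    actor = unreal.EditorLevelLibrary.get_selected_level_actors()[0]
    binding = seq.add_possessable(actor)
    track = binding.add_track(unreal.MovieScene3DTransformTrack)
    section = track.add_section()
    section.set_range(0, total)
    channels = section.get_channels()  # location/rotation/scale float channels
    for i in range(num_variants):
        t = unreal.FrameNumber(i * frames_per_variant)
        channels[0].add_key(t, float(i * 100))  # example: step X by 100 units
    return seq
```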

> So my question is can this be done effectively and can we do it in python, C++, or would we have to engineer an alternative approach to batch render these custom scene iterations?

So the problem you’re trying to tackle is surprisingly complicated (and nuanced). Without knowing exactly what you’re trying to render: if you need every frame to be ‘unique’ from the last (different camera position, different materials, new objects, etc.), then you will always need to render some amount of “warm up” data to fill the temporal history. The results of these warm-up frames don’t need to be saved, but they do need to be submitted to the GPU and rendered so that anti-aliasing has the data it needs to smooth the edges.

Before embarking on creating your own, you may be interested in a new plugin coming in 4.25, which is an update to the existing Sequencer rendering designed to solve a lot of these problems. The new plugin (called Movie Render Pipeline) handles one-frame-long shots/camera cuts correctly (i.e., no extreme motion blur on the first frame) and has better control over warm-ups (such as rendering the frames and discarding them, which is faster than writing them to disk). It also allows you to disable anti-aliasing and instead use n real samples, made by rendering the scene several times with a slightly offset camera position and accumulating the results. This is slower than TAA (and is optional) but will produce real anti-aliasing without the artifacts you may get from TAA.

The new plugin may not do exactly what you need, but it could be a starting point for writing your own if it doesn’t. It is experimental and changing constantly, but it can be found in /Engine/Plugins/MovieScene/MovieRenderPipeline on the Dev-Editor branch on GitHub if you would like to look at it before its eventual experimental release.

I guess it’s a bit late now, but I think I found a solution for my own purposes to do something similar. To wait, I register a callback with unreal.register_slate_pre_tick_callback, which calls my function on each editor tick. In it I can check whether my screenshot is finished with is_task_done() and launch a new one. It’s not perfect, but it worked for me.
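For anyone finding this later, that tick-callback approach might look roughly like the sketch below. It is an assumption-laden reconstruction, not the poster’s actual code: the resolution and output directory are placeholders, and it relies on take_high_res_screenshot returning a task object exposing is_task_done():

```python
def shot_filenames(num_shots, out_dir="C:/Shots"):
    """One output path per shot; out_dir is a placeholder location."""
    return ["%s/shot_%04d.png" % (out_dir, i) for i in range(num_shots)]

def capture_all(num_shots, apply_variant):
    """Editor-only sketch: start one take_high_res_screenshot task, poll
    is_task_done() on every Slate pre-tick, and only then mutate the
    scene and start the next capture."""
    import unreal  # only importable inside the UE editor's Python
    names = shot_filenames(num_shots)
    state = {"i": 0, "task": None, "handle": None}

    def on_tick(delta_seconds):
        if state["task"] is not None and not state["task"].is_task_done():
            return  # previous screenshot still in flight; wait another tick
        if state["i"] >= num_shots:
            unreal.unregister_slate_pre_tick_callback(state["handle"])
            return
        apply_variant(state["i"])  # caller mutates the scene for this shot
        state["task"] = unreal.AutomationLibrary.take_high_res_screenshot(
            1920, 1080, names[state["i"]])
        state["i"] += 1

    state["handle"] = unreal.register_slate_pre_tick_callback(on_tick)
```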

Hi, I know this has been a long time, but I’m having this exact same issue. I need to generate a large batch of synthetic images for ML labeling, and I’m only able to get one screenshot per script run, so basically all my work was useless. Would you mind elaborating on the final solution that worked for you to get many screenshots with the callback? Or even better, could you share your rough code or a GitHub repo? Thank you!