UE4 + Nvidia Ansel + 360 Video Capturing

The thread’s been pretty quiet, but has anyone else experimented with automating things so that the next frame renders as soon as the previous one is done, instead of having to set a timer?
(The timer is the worst: overshoot and you waste a lot of time idling; undershoot and the macro triggers while the render is still running.)

The ‘Photography Multi Part Capture Start/End’ events trigger at the start/end of multi-part rendering, i.e. the 360 rendering process.
So if there’s some way to get Unreal to ping an external program that activates the AutoHotkey scripts, that would let us render more efficiently, without fear of suddenly messing things up because the script triggered a second before the render was finished.

(Though exactly how, I have no idea.
It might be possible to just have Unreal write a line to a log file every time a render finishes, and have the hotkey program parse the log whenever it updates, if any of them can do that. Rough sketch of that idea below.)
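To make that concrete, here’s a minimal sketch of the log-watching idea in Python (rather than a hotkey tool, just because it’s quicker to mock up). It assumes you print some marker line to the log from the ‘Photography Multi Part Capture End’ event (e.g. a Print String node set to print to log); the marker text, the log path, and the trigger_next_capture() helper are all placeholders of mine, not anything Unreal or Ansel provides.

```python
# Rough sketch of the log-watching idea (Python instead of a hotkey tool).
# Assumes a Blueprint prints a marker line such as "CAPTURE_DONE" to the log
# on the 'Photography Multi Part Capture End' event; the marker and log path
# below are placeholders for whatever your project actually writes.
import time

LOG_PATH = r"C:\MyProject\Saved\Logs\MyProject.log"  # placeholder log path
MARKER = "CAPTURE_DONE"                              # placeholder marker text

def trigger_next_capture():
    # Placeholder: kick off your AutoHotkey/Pulover macro for the next frame,
    # e.g. via subprocess.run(["AutoHotkey.exe", "next_frame.ahk"]).
    print("Render finished, starting next frame...")

def watch_log():
    with open(LOG_PATH, "r", encoding="utf-8", errors="ignore") as f:
        f.seek(0, 2)  # start at the end so old log entries don't trigger anything
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.25)  # nothing new yet, poll again shortly
                continue
            if MARKER in line:
                trigger_next_capture()

if __name__ == "__main__":
    watch_log()
```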

Edit: Looks like Pulover’s Macro Creator can check a certain region/pixel of the screen for a specified colour.
So it should be possible to:
- Run the macro to start up Ansel, input settings, take the picture.
- Wait for the render, checking once a second whether the ‘Start’ button has turned green.
- Render finishes, the Start button becomes green again.
- Macro closes Ansel, moves to the next frame, and starts again.
(A rough Python version of that polling loop is below.)
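If you’d rather prototype that loop outside Pulover’s, here’s roughly the same thing in Python with pyautogui. The Start button coordinates and the “green” RGB value are made up and would need to be measured on your own screen.

```python
# Rough Python equivalent of the Pulover's pixel-check loop, using pyautogui.
# The Start button position and its "idle" green colour are made-up values;
# measure them on your own screen/UI scale before relying on this.
import time
import pyautogui

START_BUTTON_POS = (1650, 980)    # placeholder: where the Start button sits
START_BUTTON_GREEN = (0, 200, 0)  # placeholder: its colour once it's clickable again

def colours_match(a, b, tolerance=20):
    # Allow some slack so slight gamma/overlay differences don't break the check.
    return all(abs(x - y) <= tolerance for x, y in zip(a, b))

def wait_for_render():
    # Poll once a second until the button looks green again, i.e. render done.
    while True:
        if colours_match(pyautogui.pixel(*START_BUTTON_POS), START_BUTTON_GREEN):
            return
        time.sleep(1.0)

# Usage: kick off the capture with the macro, call wait_for_render(),
# then move the camera to the next frame and start again.
```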

Apart from that, has anyone looked at the performance of capturing from Ansel?
I noticed that it renders far fewer parts/tiles the larger the resolution of the play window is, and I was curious whether that makes it more efficient.
In addition to that, there’s also a console command, r.Photography.SettleFrames 10, which renders a number of extra frames to let temporal effects (which I don’t always use) settle before capturing.
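Side note: if you’re launching the standalone game from a script anyway, you can probably preset both the cvar and a big window from the command line instead of typing them every run. A minimal sketch, assuming the usual -ExecCmds/-windowed/-ResX/-ResY Unreal launch arguments and a placeholder build path:

```python
# Launch a packaged build (or the editor's standalone game) with the settle
# cvar and a big window preset from the command line, so the script never has
# to type console commands by hand. The build path is a placeholder.
import subprocess

subprocess.run([
    r"C:\Builds\MyProject\MyProject.exe",        # placeholder path to your build
    "-ExecCmds=r.Photography.SettleFrames 1",    # run the console command at startup
    "-windowed", "-ResX=3840", "-ResY=2160",     # bigger window = fewer tiles
])
```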

Anyhow, on to testing: just taking a 360 stereoscopic 8k×8k picture of a scene I have, all from the same view, to see how the render time changes with various settings.
Times are with a stopwatch, so not perfectly accurate.
(And also, I imagine rendering would be a bit faster with a packaged product, instead of running in a standalone window with the editor up. But I’m testing from the editor right now.)

Render round 1: SettleFrames 10
Window resolution: 1280x720, 474 tiles, 90 sec.
Window resolution: 1920x1080, 214 tiles, 47 sec.
Window resolution: 2560x1440, 120 tiles, 35 sec.
Window resolution: 3840x2160, 56 tiles, 28 sec.

Looking at that, there’s obviously a big fixed overhead per tile: you get the exact same render in less than a third of the time going from 720p to 4k.
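Quick back-of-the-envelope fit on those round 1 numbers, purely my own estimate from the stopwatch timings (not anything Ansel reports):

```python
# Rough linear fit on the round 1 numbers: time ~ fixed + per_tile * tiles.
# Purely a back-of-the-envelope estimate from the stopwatch timings above.
round1 = [(474, 90.0), (214, 47.0), (120, 35.0), (56, 28.0)]  # (tiles, seconds)

(t_hi, s_hi), (t_lo, s_lo) = round1[0], round1[-1]
per_tile = (s_hi - s_lo) / (t_hi - t_lo)   # ~0.15 s per tile (with 10 settle frames)
fixed = s_lo - per_tile * t_lo             # ~20 s of fixed cost

print(f"~{per_tile:.2f} s per tile, ~{fixed:.0f} s fixed cost")
# So at 720p roughly 70 of the 90 seconds is per-tile overhead, which is why
# fewer, larger tiles win even though the final image is identical.
```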

Render round 2: SettleFrames 0
Window resolution: 1280x720, 474 tiles, 12.7 sec!
However, this led to a few graphical glitches in the render (chunks of buildings not being where they’re supposed to be, etc.; some parts seemed skewed).

Render round 3: SettleFrames 1
Window resolution: 1280x720, 474 tiles, 19.4 sec. No noticeable difference in the output compared to render round 1, though I’m not using any temporal effects, not even TAA.
Window resolution: 1920x1080, 214 tiles, 13.2 sec.
Window resolution: 2560x1440, 120 tiles, 11 sec.
Window resolution: 3840x2160, 56 tiles, 9.5 sec.

Render round 3 doesn’t show as big a spread between resolutions as round 1, but it’s still a pretty big saving just from running the game window at a higher resolution.
Obviously, though, if you want the larger savings you should look at how many settle frames you actually need for your render, since the settling seems to be the majority of the render time (quick arithmetic below).
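For what it’s worth, here’s that claim spelled out from the 720p numbers; again this is just my own arithmetic on the stopwatch timings:

```python
# Rough estimate of how much of the 720p render time is settle frames,
# using the stopwatch numbers from rounds 1-3 (474 tiles in every case).
times_720p = {10: 90.0, 1: 19.4, 0: 12.7}  # settle frames -> seconds

settling_share = (times_720p[10] - times_720p[0]) / times_720p[10]
print(f"settling is ~{settling_share:.0%} of the 90 sec render")        # ~86%

# Assuming the settling happens per tile (which the timings suggest):
per_settle_frame = (times_720p[10] - times_720p[0]) / (10 * 474)
print(f"~{per_settle_frame * 1000:.0f} ms per settle frame per tile")   # ~16 ms
```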

Still! That’s a huge difference in render time.
Going from a 90 sec render to an 11 sec render, with no difference in the output. (Again, no temporal effects in the scene, or any other effects that might benefit from settling.)
And even if you don’t want to touch the number of settle frames (honestly, I imagine 10 is the default as a super-high-quality setting, since you’re not expected to be rendering a video), you can still go from a 90 sec to a 35 sec render (roughly 2.5x as fast!) just by rendering at 2560x1440 instead of 1280x720.

Rendering at a higher resolution than your screen, with a standalone/windowed process, is also possible; I can do a 4k render on my 3440x1440 monitor. It does push Ansel’s UI off screen, but that’s a non-issue if you’re using an automated script.

Of course, a higher resolution might not be possible for everyone, since it eats up a lot more VRAM.
(Thankfully Ansel seems to either use system RAM or just save out the tiles as it goes; it doesn’t massively balloon in VRAM use, at least.)

Checking GPU VRAM usage (just from Task Manager). All of this is from the standalone player launched from the editor; doing it from a sequence in a separate process, or a packaged project, would be different, I imagine.
Idling in editor: 3.9-4.1 GB.
In standalone player at 720p: 5.3 GB.
In standalone player at 4k: 6.7 GB.

That’ll of course vary from scene to scene.

Still, the takeaway is that for faster rendering at the exact same quality, run the game window at 4k or so; it’s a huge jump over the default 1280x720 or 800 window you get when just rendering shots from the editor.