I’m going to use this setup with the camera manager.

The thread’s been pretty quiet, but has anyone else experimented with automating things so that the next frame starts rendering as soon as the previous one is done, instead of having to set a timer?
(The timer is the worst: overshoot and you waste a lot of time idling; undershoot and the macro triggers while the render is still running.)
The event ‘Photography Multi Part Capture Start/End’ triggers at the start/end of multi-part rendering, i.e. the 360 rendering process.
So if there’s some way to get Unreal to ping an external program to activate the AutoHotkey scripts, that would let us render more efficiently, and without fear of messing things up because the script triggered a second before the render was finished.
(Though exactly how, I have no idea. It might be possible to have Unreal output a logfile every time a render finishes, and have a hotkey program parse it whenever it updates, if any of them have that capability.)
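For what it’s worth, the log-watching idea can be sketched in a few lines. Everything here is a hypothetical stand-in (the log path, the ‘Capture End’ marker string, and the callback); Unreal’s actual log wording would need checking:

```python
import time

def watch_log(path, marker, on_hit, poll_interval=1.0, max_polls=None):
    """Tail `path`, calling `on_hit(line)` for each new line containing `marker`.

    Hypothetical sketch: poll a log file that Unreal (or anything else)
    appends to, and fire a callback whenever a marker line shows up.
    Only bytes past the last-seen offset are scanned on each poll.
    """
    offset = 0
    polls = 0
    while max_polls is None or polls < max_polls:
        with open(path, "r", encoding="utf-8") as f:
            f.seek(offset)
            for line in f:
                if marker in line:
                    on_hit(line.rstrip("\n"))
            offset = f.tell()
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(poll_interval)
```

A macro tool with file-read actions could do the same thing; the point is just that each poll only looks at new lines, so the callback fires once per finished render.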
Edit: Looks like Pulover’s Macro Creator can check a certain region/pixel of the screen for a specified colour.
So it’s possible to:
1: Run the macro to start it up, input settings, take the picture.
2: Wait for the render, checking once a second whether the ‘start’ button has turned green.
3: Render finishes, the start button becomes green again.
4: Macro closes it, moves to the next frame, and restarts.
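The wait-and-check part of that loop is just a generic poll-until-true pattern. A minimal sketch, with the actual pixel/colour check injected as a callable (in the macro it would be the tool’s image search; here it can be any function, so there’s no screen dependency):

```python
import time

def wait_until(is_done, interval=1.0, timeout=600.0):
    """Poll `is_done()` every `interval` seconds until it returns True.

    Returns True if the condition was met, False on timeout. In the
    macro, `is_done` would be the "has the start button turned green
    again?" pixel check; here it is any callable, so the pattern can
    be tested without a screen.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_done():
            return True
        time.sleep(interval)
    return False
```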
Apart from that, has anyone looked at the performance of capturing from ?
I noticed that it renders far fewer parts/tiles the larger the resolution of the play window is, and I was curious whether that would make it more efficient.
In addition, there’s also a console command, r.Photography.SettleFrames 10, which renders a few frames to settle temporal effects; I don’t always use it.
Anyhow, on to testing: just taking a 360 stereoscopic 8k×8k pic from a scene I have, all from the same view, to see how rendering time changes with various settings.
Times are with a stopwatch, so not perfectly accurate.
(And also, I imagine rendering would be a bit faster with a packaged product, instead of running in a standalone window with the editor up. But I’m testing from the editor right now.)
Render round 1: Settleframes 10,
Window resolution: 1280x720, 474 tiles, 90 sec.
Window resolution: 1920x1080, 214 tiles, 47 sec.
Window resolution: 2560x1440, 120 tiles, 35 sec.
Window resolution: 3840x2160, 56 tiles, 28 sec.
Looking at that, there’s obviously a huge overhead related to how many tiles you’re rendering. You get the exact same render in less than a third of the time going from 720p to 4k.
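As a rough sanity check on that overhead claim, here is a quick least-squares fit of time = per_tile × tiles + fixed over the four round-1 data points (pure Python; these are stopwatch numbers, so very approximate):

```python
# Fit time = per_tile * tiles + fixed to the round-1 measurements
# using ordinary least squares (stopwatch numbers, so rough).
tiles = [474, 214, 120, 56]
secs = [90.0, 47.0, 35.0, 28.0]

n = len(tiles)
mx = sum(tiles) / n
my = sum(secs) / n
per_tile = sum((x - mx) * (y - my) for x, y in zip(tiles, secs)) \
         / sum((x - mx) ** 2 for x in tiles)
fixed = my - per_tile * mx

print(f"~{per_tile:.2f} s/tile + ~{fixed:.1f} s fixed")
```

For these numbers that comes out to roughly 0.15 s per tile plus about 17 s of fixed cost, which matches the intuition that with 10 settle frames, most of the render time is per-tile overhead.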
Render round 2: Settleframes 0,
Window resolution: 1280x720, 474 tiles, 12.7 sec!
However, this led to a couple of graphical glitches in the render (chunks of buildings not being where they’re supposed to be, etc.; some parts seemed skewed).
Render round 3: Settleframes 1,
Window resolution: 1280x720, 474 tiles, 19.4 sec. No real noticeable difference in output from render round 1, though I’m not using any temporal effects, not even TAA.
Window resolution: 1920x1080, 214 tiles, 13.2 sec.
Window resolution: 2560x1440, 120 tiles, 11 sec.
Window resolution: 3840x2160, 56 tiles, 9.5 sec.
Render round 3 doesn’t show as big a difference at higher resolutions as round 1 did, but it’s still a pretty big saving just from running the game window at a higher res.
Obviously, if you want larger savings you should look at how many settle frames you actually need for your render, since they seem to account for the majority of the render time.
Still! That’s a huge difference in rendertime.
Going from a 90 sec render to an 11 sec render, with no difference in the output. (Again, no temporal effects in the scene, or any other effects that might benefit from settling.)
And even if you didn’t want to touch the number of settle frames (honestly, I imagine 10 is the default as a super-high-quality setting, since you’re not expected to render video with it), you’d still be able to go from a 90 sec to a 35 sec render (nearly 3× as fast!) just by rendering at 2560x1440 instead of 1280x720.
Rendering at a higher resolution than your screen, with a standalone/windowed process, is also possible; I can do a 4k render on my 3440x1440 monitor. It means the UI goes off screen, but that’s a non-issue if you’re using an automated script.
Of course, a higher resolution might not be possible for everyone, since it eats up a lot more VRAM.
(Thankfully it either uses RAM or just saves out the tiles; it doesn’t massively balloon in VRAM use, at least.)
Checking GPU VRAM usage (just from Task Manager; all of this is from the standalone player launched from the editor, and doing this from a sequence in a separate process or a packaged project would be different, I imagine):
Idling in editor: 3.9-4.1GB.
In standalone player, 720p: 5.3GB
In standalone player, 4k: 6.7GB
That’ll of course vary from scene to scene.
Still, the takeaway is that for faster rendering at the exact same quality, render stuff out at 4k or so; it’s a huge jump from the default window of 1280x720 or 800 when rendering shots from the editor.
Yup! (Sorry for the double post, but I figured this warranted a new one. )
Pulover’s Macro Creator can actually just check for an image every so often, meaning no more setting a generous timer to avoid problems.
Right now it works like this.
1: Start up the process.
2: Manually open and close it once. (So that the next time it opens, it starts with the ‘done’ button highlighted.)
3: Script runs, sets the settings to 360 stereo + desired resolution.
4: It starts the render.
5: Pauses for a second.
6: Starts checking once a second whether the ‘Snap’ button is green again. (It’s greyed out while rendering.)
(It does this using the image/pixel search macro function, which lets it check a region of the screen for a specific image, like the Snap button being green.)
7: Once the render is complete and the Snap button is green, it waits a few seconds.
8: Then it closes the window and presses Tab to make time progress by a frame.
9: Then it loops.
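The whole loop above can be sketched like this. Every named action (open_overlay, snap_button_is_green, etc.) is a made-up stand-in for whatever the macro/automation layer actually provides; the sketch only encodes the ordering and the waits:

```python
import time

def capture_loop(open_overlay, apply_settings, start_render,
                 snap_button_is_green, close_overlay, advance_frame,
                 frames, settle_delay=2.0, save_delay=4.0, poll=1.0):
    """Sketch of steps 3-9 above. Every argument except the numbers is
    a callable supplied by the macro/automation layer (all hypothetical
    stand-ins); this function only encodes the ordering and the waits."""
    for _ in range(frames):
        open_overlay()
        apply_settings()           # 360 stereo + resolution
        start_render()
        time.sleep(settle_delay)   # no point polling immediately
        while not snap_button_is_green():
            time.sleep(poll)       # render still running
        time.sleep(save_delay)     # let the image process + save
        close_overlay()
        advance_frame()            # e.g. press Tab
```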
The only thing I still need a fixed wait for is step 7, after the render is done, while the rendered image gets processed and saved.
Because it seems to slow down the processing a lot if you’re rendering something at the same time (understandable), and if render 2 finishes before render 1 is done processing, it’ll just throw out render 1. On my PC, reading from and saving to a normal 2.5" SSD, it seems to take approx 4-5 sec to save a 360 8k×8k jpg, and massively longer for an .exr.
Whereas if I render while it’s processing, it can easily take 10-15 seconds, i.e. long enough for the next render to interrupt it.
With 4x4k images I only need to wait around a second before progressing to the next render.
Still, that’s massively more flexible than the old way where you had to account for how long it might take to render.
Now it’s just waiting for the render to complete, then waiting a few seconds for the image to process, then on to the next one.
(It doesn’t matter if the render takes 5 seconds or 500 seconds.)
Image: how it’s set up.
At instruction 50: Space to start rendering.
51: wait a couple seconds, since there’s no point in checking if it’s done rendering at <10 seconds.
52: starts checking for the green snap button / the render being done, once every second.
53-61: After render, wait for a bit, close window. End of macro, so that it can loop once it’s done.
Can you show how it works in a video?
How it works in practice / while rendering. Or how the script is?
Because script-wise, the only difference compared to hotkeying it normally is that instead of waiting a set number of seconds, it checks for the render actually being done before playing the ‘finished rendering’ part of the script.
It’s not a super interesting video, but here you go.
The only difference between the 4k and 8k scripts is that the 8k one waits 10 seconds before checking whether it’s done, and waits longer after a successful render.
( To let the image process before the next one.)
I guess that could be made flexible as well, but I just don’t see the point when the processing+saving is pretty uniform in length, as opposed to renders, which can vary a lot.
@ax448 Thank you.
The problem I’m having now is how to use it with a premade animation in the sequencer, so that I can capture a video frame by frame.
I found other options and used panoramic captures; I just wanted to see how it goes with this, because it apparently offers better performance and quality.
Well, there’s no need to ask me, Alex_Cross; just go through the thread. That’s already been covered (there’s even a gumroad link to a video covering everything you need to know).
The only thing you need to do is set up the game so that it pauses, run the render, then progress game time by 1 frame. Then render again.
@ax448 Great insights! Indeed, having to account for the render time adds a great deal of uncertainty and unnecessary waiting.
Now I’m curious whether anyone else has had any issues with ghosting, and/or has a solution to it. (Specifically for 360 stereo.)
It seems like it’s primarily because it doesn’t handle stitching objects that are close (<1 meter? <1.5?) to the camera.
But it’s really strange because this isn’t an issue in the same way if you render out a monoscopic image.
Here’s an example. (The image shifts a bit to the side when rendering stereoscopic; that’s because the camera moves a few cm to the left, takes a pic, then a few cm to the right and takes another.)
Apart from the ghosting, the stereo and mono pictures seem more or less identical, though due to the ghosting the stereo one is significantly less clear when it comes to stuff close to the camera.
I wonder why this is? Maybe they’re just running a lower quality version of the stitching to save time while rendering out stereo ones?
In either case, it makes it really tempting to just render 2 monos instead of 1 stereo pic, and just slap them together in post instead.
I’m very curious if there is someone who -doesn’t- have ghosting of objects close to camera, while doing stereo 360 renders.
Edit: Never mind, I guess; there is some correction in the stereoscopic stitching that makes the view behave better (like no flipping when turning 180 degrees around, etc.). Unless there’s a way to fix that without too much hassle, rendering in stereo is the only way to get a usable result for stereo viewing.
That still leaves the issue of why the stitching/ghosting is so bad in stereo though. The only way I’ve seen to reduce it is to render at higher resolutions, but it’s still visible even doing 16kx16k renders, and those take so long they’re useless for video.
Edit 2: I guess the issue with stitching is because of how the stereoscopic 360s are captured.
( Mono 360 is captured by spinning a camera around itself and taking pics. While Stereo has both of them spin around a sphere.)
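To illustrate the difference: in a mono 360 capture the viewpoint stays fixed while the camera yaws, but in stereo each eye is offset sideways from the rig centre, so its position traces a circle as the yaw changes, and nearby objects get real parallax between tiles, which is what makes the stitching hard. A toy sketch of the eye positions (the IPD value and the axis convention, +x forward and +y to the camera’s left, are assumptions):

```python
import math

# Illustrative only: eye positions for a stereo 360 capture rig.
# For a mono 360 capture the camera position is fixed at the rig
# centre; for stereo, each eye sits IPD/2 to the side of the view
# direction, so its position moves around a circle as the rig yaws.
IPD = 0.064  # metres; a typical interpupillary distance (assumption)

def eye_positions(yaw_deg, ipd=IPD):
    """Return (left, right) eye (x, y) offsets from the rig centre,
    with the view direction along +x at yaw 0 and +y to the camera's
    left (assumed convention)."""
    yaw = math.radians(yaw_deg)
    # Unit vector pointing to the camera's left, perpendicular to view:
    side = (-math.sin(yaw), math.cos(yaw))
    half = ipd / 2.0
    left = (side[0] * half, side[1] * half)
    right = (-side[0] * half, -side[1] * half)
    return left, right
```

Because the eye positions change with yaw, tiles shot at different yaws see close-up geometry from genuinely different viewpoints, unlike the mono case where every tile shares one viewpoint.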
But it’s still kind of weird that I didn’t really notice the ghosting previously. That might be because I’ve mostly been working with larger, more outdoorsy scenes, with nothing within the 1-2m range it has issues with.
Would still love it if people could tell me if they have the same issues or not.
Does anyone else have this problem? When I activate it, the entire scene turns non-photorealistic, cartoon style. I just press Alt+F2 to activate it, nothing else.
That’s because F2 while in the standalone editor triggers ‘unlit’ mode.
Just rebind the key to something else and you won’t have that issue.
How do I jump to the next frame by pressing Tab in Standalone Game? I’m guessing there’s a function similar to Pause in blueprint?
There isn’t a specific blueprint function that does that: Just a blueprint that progresses time by 1 frame, a couple different examples of them have been posted in the thread so far.
(It just unpauses the game, waits for long enough for 1 frame to pass, then pauses the game again. It’s like… 4 blueprint nodes total.)
I’m running into the same problem, and have been for up to 6 hours.
Hi, just to inform you that I have released a plugin for automating 360 captures:
Nice!
There are a lot of things I’m curious about:
How does it work in practice:
When you execute the ‘Start Capture’ node, it looks like it does the open / select type / select res / render / close loop that you can do once you’ve launched it once. Correct?
Is that hardcoded, or is there a way to adjust it? After all, it’s just a matter of how many up/down/left/right inputs you send. It would be great if we could just select the type and res from a dropdown.
(Or honestly, even just select how many left/right directional inputs we want. I do both 360 and 360 pano stuff pretty interchangeably, and at different resolutions.)
And for the Start Capture node, does it just run for 1 capture, so that we can set it up to trigger again afterwards? Or does it somehow loop by itself?
(It does have an outgoing exec pin; will that trigger once the window closes again?)
And does it trigger a new frame rendering the moment the previous frame is done rendering, or once it’s done processing/saving?
Because I noticed a big issue there: if you finish a render while the previous frame is still being processed/saved, the older frame is often discarded.
So I had to add a small delay between finishing a render and starting a new one to avoid that.
It does seem super promising though! And not needing an external program like pulover’s macro makes it more practical and less error prone.
Why do you use the mouse/cursor to press the Snap button, by the way? It can normally be triggered with keyboard inputs just fine. Or is this something specific to how your branch of it functions?
The plugin doesn’t work properly. I have a sequencer scene that is 600 frames long. I tried to record the scene with this plugin, but it captures only around 290 images instead of 600…
Does anyone know if it’s possible to create a simple 180 stereo video in a way that’s quicker than current methods?
Perhaps directly from stereo cameras, as testing both this and the other 360 plugin, it takes ages to output even short five-to-ten-minute videos.
I thought Epic would have had a good solution to go along with their sequencer and video-making features by now.
Epic’s focus on the sequencer is primarily on fitting into their own pipeline, unfortunately. I’m still hoping that some day it’ll support audio export alongside everything else.
(just run through once at low resolution, purely to capture runtime/dynamic audio. )
Having stereo, 180/stereo and 360/stereo as part of sequencer itself would be amazing, but I doubt any of them will ever happen.
But yeah, you can’t really get 180 renders out any faster any other way right now, unless you want to do cubemap rendering and bend the image in post.
I’ve done 2x 360 images, sliced them down to 180, then combined them into the same video to get VR180 stereo, but that still leaves you throwing away half of the stuff you render.
(You could do the same with the stereo 360 render, slice it down to get 180, but the stereo-360 render just generates too many artifacts for that. )
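Slicing an equirect 360 down to a front-facing 180 amounts to keeping the middle half of the columns (the vertical range of an equirect image is already 180 degrees). Here is a stdlib-only sketch on a nested-list ‘image’; a real pipeline would do the same crop with ffmpeg or PIL:

```python
def crop_360_to_180(image):
    """Keep the middle half of each row of an equirectangular 360
    image, i.e. the front-facing 180-degree slice.

    `image` is a list of rows of pixels (any values); a real pipeline
    would do the same crop with an ffmpeg or PIL crop instead of
    Python lists.
    """
    if not image:
        return []
    width = len(image[0])
    left = width // 4           # drop the leftmost quarter...
    right = left + width // 2   # ...and the rightmost quarter
    return [row[left:right] for row in image]
```

The wasted work the post describes is visible here: half the rendered columns get thrown away, which is why a native 180 mode would be faster.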
It would be nice if nvidia pushed 180 & 180-stereo rendering into it, but I doubt they’re interested in doing that.
Thanks, I guess it might only be worth doing for short clips right now then. I was hoping to make 180/360 animations for mobile VR, but the time it takes and the lack of features is disappointing.
I never even thought audio would be an issue! I hope they at least add options for proper exporting to mp4 along with audio.