Inconsistent Crash at 400% sequencer max res, dual 2080Ti

Using 4.23.1 with ray-traced reflections (between 1 and 10 bounces) on two beefy computers, one with a single 2080 Ti and the other with two 2080 Tis.
We’re rendering a Sequencer sequence at 7680x982 pixels with a 400% screen percentage set in a post process volume.

It crashes most of the time in inconsistent ways: “Update your driver” (they are up to date), “Out of memory” (monitoring confirms the engine goes above the available video memory), and the crash reporter with “Assertion Failed: SizeX <= GetMax2DTextureDimension() […/D3D12Texture.cpp] [Line: 701]”

We ran hundreds of tests on our different scenes and computers, all with very random results.
The attached “empty” scene (created on the single-2080 Ti computer) crashes when rendered at 400% on either machine, but if we re-create a similar scene on the dual-2080 Ti computer, it renders at 400%.

It is very annoying, as rendering below 400% produces aliasing that is hard to accept for the quality we’re aiming for.

On the single 2080 Ti, the maximum screen percentage we can get is 212, which is perhaps no coincidence: 7680 × 2.12 ≈ 16,282 < 2^14 = 16,384, which I believe is the maximum texture dimension. On that same computer we have managed to render very complex scenes at 400%, yet when we try again with minor changes it crashes. (All rendering is done with all other apps closed.)

So how does the engine manage to render 400% of 7680 in the cases where it works?

How to get consistent 400% renders and no crashes?

What is actually going on?

We tried dozens of driver versions, different computers, different engine versions (even 4.24 Preview 1), disabling ray tracing, separate processes… not an ounce of logic in our results, apart from the fact that it crashes most of the time, usually before outputting a single frame at 400%.

This might be related to UE-48716 in the Unreal Engine Issues and Bug Tracker, as we use a wide camera and FOV.


Help please!

Asking Sequencer for a 7680x982 image combined with a 400% screen percentage causes the engine to try to allocate a 30,720x3,928 texture internally. Most graphics cards have a maximum texture dimension of 16k on the longest side, so I am not sure how this could have worked in any case.
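A quick back-of-the-envelope check, assuming the common 16,384-pixel (2^14) Direct3D 12 maximum 2D texture dimension:

```python
MAX_TEX_DIM = 2 ** 14  # assumed D3D12 max 2D texture dimension: 16384 px

def upscaled_size(width, height, screen_pct):
    """Internal render-target size for a given output size and screen percentage."""
    return width * screen_pct // 100, height * screen_pct // 100

w, h = upscaled_size(7680, 982, 400)
print(w, h)                        # 30720 3928 -- width far exceeds 16384
print(MAX_TEX_DIM * 100 // 7680)   # 213 -- matches the ~212% ceiling observed
```

This also explains why roughly 212–213% is the highest screen percentage that works at a 7680-pixel output width.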

For the time being you will need another method of producing such a high-resolution render: build your own output pipeline that renders many samples with a small offset and accumulates them, use a system of off-center camera projection matrices to render sub-regions of the screen (which will cause issues with screen-space post-processing effects), or wait until a better movie rendering solution is released.
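The off-center projection approach can be sketched as follows. The tile layout and frustum math here are a generic graphics illustration, not an Unreal Engine API; each tile gets an asymmetric near-plane rectangle so the tiles render seamlessly side by side, and each tile stays under the texture-size limit:

```python
# Sketch: split one wide frustum into tiles, each rendered with its own
# off-center (asymmetric) projection. Names and parameters are illustrative.
import math

def tile_frustums(h_fov_deg, width, height, tiles_x, tiles_y, near=0.1):
    """Yield (left, right, bottom, top, near) frustum bounds for each tile."""
    aspect = width / height
    # Half-extents of the full near-plane rectangle, from the horizontal FOV.
    half_w = near * math.tan(math.radians(h_fov_deg) / 2)
    half_h = half_w / aspect
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            left   = -half_w + 2 * half_w * tx / tiles_x
            right  = -half_w + 2 * half_w * (tx + 1) / tiles_x
            bottom = -half_h + 2 * half_h * ty / tiles_y
            top    = -half_h + 2 * half_h * (ty + 1) / tiles_y
            yield (left, right, bottom, top, near)

# Render the 30720x3928 target as two 15360x3928 tiles, each under 16384 px.
tiles = list(tile_frustums(90.0, 30720, 3928, tiles_x=2, tiles_y=1))
```

Because adjacent tiles share an exact frustum edge, stitching the tile images together reproduces the full-resolution frame, but screen-space effects (bloom, vignette, screen-space reflections) will show seams since each tile only sees its own sub-region.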