Maximum Resolution to Render One High-Res Still Image?

What’s actually the limit / maximum pixel resolution to render one high-resolution image?

thanks for any info,
appreciate it! :rolleyes:

From an engine standpoint? As in how much you can supersample?
Performance starts breaking down at 8K.

Since a still shot has no real-time performance requirement, I don’t see why that couldn’t be exceeded for this use case.
The question is more about how this gets output.

here is an example

The built-in system multiplies your screen resolution x4 and makes a composite by rendering a few pieces at a time…

So a high-res screenshot at 4K would easily exceed 8K by a wide margin…
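As a rough sanity check (a sketch with assumed numbers, not engine code; an uncompressed 8-bit RGBA buffer is assumed), the composite resolution and raw memory cost of a multiplier-based screenshot can be estimated:

```python
def highres_estimate(screen_w, screen_h, multiplier, bytes_per_pixel=4):
    """Composite resolution and raw buffer size (MB) for a
    multiplier-based high-res screenshot."""
    w, h = screen_w * multiplier, screen_h * multiplier
    mem_mb = w * h * bytes_per_pixel / (1024 ** 2)
    return w, h, mem_mb

# A 4K screen with the default x4 multiplier:
w, h, mem = highres_estimate(3840, 2160, 4)
print(w, h, round(mem))  # 15360 8640 506
```

So a single uncompressed x4 composite from a 4K screen already needs roughly half a gigabyte just for the final color buffer, before any intermediate passes.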

Depends on how much GPU memory you have; it needs to be enough to hold everything in the image along with the rendered image itself.

Even a 1060 can handle a maxed-out level at 1 fps, though. Assuming best practices (appropriate pool size, scalability, etc.) were followed when building things, shouldn’t it just render as expected regardless and simply take a lot longer?
Even if you were to do this manually yourself - set up a SceneCapture2D, pan it around the scene, and capture as you go, writing from a temporary render target to a buffer - it would take a lot before the memory of the buffer exceeds what the GPU has.
If, instead of writing to a temporary buffer, you write the image tiles out directly, you can also circumvent that, making the hard disk the limit on size - possibly. What for? No idea :stuck_out_tongue:

Basically what I’m suggesting is that if you capture 1,000 tiles of 256x256 which each cost 4 GB of VRAM to render one at a time, you can still produce a 256,000-pixel-wide file on pretty much any system (with 4 GB of VRAM).
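The arithmetic behind that suggestion can be sketched quickly (Python, with the hypothetical numbers from above): peak VRAM is only what one tile needs, while the final file just grows with the tile count.

```python
import math

def tiled_width(tile_px, n_tiles):
    """Final image width when n_tiles tiles are stitched along one axis."""
    return tile_px * n_tiles

def tiles_needed(target_px, tile_px):
    """Tiles required along one axis to cover a target width."""
    return math.ceil(target_px / tile_px)

print(tiled_width(256, 1000))      # 256000
print(tiles_needed(30_000, 3840))  # 8 tiles of 4K width cover 30,000 px
```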

I could name you a reason: creating high-res shots at any resolution with sample counts (for ray tracing and anything else that needs samples) so absurdly high they would melt any graphics card the moment you typed the numbers in ;). It would allow a kind of offline render, where the full picture is not rendered every tick, but only one small fraction per frame, with the next fraction rendered in the following frame and the pieces stitched together. Like they did with the clouds in Horizon Zero Dawn. They said most of the scenery was rendered in real time, but the clouds were such heavy hitters that they would have broken the frame rate. So they built it so that only one part of the clouds was rendered per frame, then another part in the next frame. I think they used 16 steps -> 16 frames for one complete cloud update instead of a complete cloud every frame, which would have broken the fps.
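The amortization scheme described above can be sketched in a few lines (a toy sketch, not engine code; slice counts and image height are made-up numbers): each frame updates one horizontal slice, and after `n_slices` frames the whole picture has been refreshed once.

```python
def slice_for_frame(frame, n_slices, height):
    """Row range (y0, y1) updated on a given frame when the full image
    is split into n_slices per-frame chunks (Horizon-style amortization)."""
    rows = height // n_slices
    y0 = (frame % n_slices) * rows
    return y0, y0 + rows

# 16 slices of a 1600-row image: one slice per frame, full refresh every 16 frames.
print(slice_for_frame(0, 16, 1600))   # (0, 100)
print(slice_for_frame(15, 16, 1600))  # (1500, 1600)
```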

I mean, the smaller the rendered picture, the higher you can push all the quality settings and still run the program without crashing it or roasting your computer in the process. So as for your suggestion of 1,000 small pictures that combine into one big high-quality picture - I’m pretty sure there are plenty of people who would love this ability.
Especially if you build it in a way that creates the high-res picture automatically. I thought about this possibility too, and I would be very interested in whether something like this is possible here ^.^

It’s not about time, it’s about memory. If you’re rendering a large image, it takes up more memory on the GPU; it has to be stored there temporarily while it renders before anything can be done with it to save it to the hard drive. The render image is going to be pretty large due to the extra passes done internally to produce it, and because it’s uncompressed.

HighResShot used to render the image in pieces, but I don’t think it does that anymore, because you get problems when image processing is done on individual pieces rather than the whole thing: you can get edges that don’t match.

Hmmm, yeah… all good points. Just wondering what you do for stills, I mean high-res stills, if a client is requesting 30,000 by 30,000 pixels.
Most likely… not doable, right? :rolleyes:

Try it. The link I shared has commands for custom sizing. See what the engine can do itself on your setup.

If it doesn’t work, you can try the scene-capture approach. You just have to remove vignetting from the post-process first, and oversample each shot by 30% so that you can stitch the tiles together without problems. (Obviously wind nodes or anything moving needs to be stopped.)
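The 30% oversample works out to tile start positions like this (a sketch with assumed tile and target sizes): each tile steps forward by 70% of its width, so adjacent tiles share a band you can blend across when stitching.

```python
def tile_origins(total_px, tile_px, overlap=0.30):
    """Tile start positions along one axis with ~30% overlap between
    neighbors, so seams can be blended away during stitching."""
    step = int(tile_px * (1 - overlap))
    origins = list(range(0, total_px - tile_px + 1, step))
    if origins[-1] != total_px - tile_px:
        origins.append(total_px - tile_px)  # keep the last tile flush with the edge
    return origins

# Cover a 10,000 px axis with 4,000 px tiles:
print(tile_origins(10_000, 4000))  # [0, 2800, 5600, 6000]
```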

Again, so long as each individual shot can render with your gfx and you save the file out you won’t have issues.

I would create a Blutility script to pan the camera by the preset amounts and capture the still. Then save the render texture manually, and click the Blutility to have it move and capture the next tile.

Afterwards, assemble in Photoshop. The panoramic stitching tools will probably help with that.

Not sure if panning the camera is helpful in such a case, because of focal points and distortions caused by the camera lens. Programs like Cinema 4D have a camera/film offset function for this scenario, in which the camera itself is never moved and the perspective never changes.

Here’s the part of the manual about this exact scenario: …JECTPROPERTIES

Makes sense; panning is what we normally do with an orthographic view. On a perspective shot it would be a bit different.
Perhaps tilting would be best. Would be something to try out.
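The film-offset idea amounts to an asymmetric (off-axis) frustum: the camera stays fixed and only the projection window shifts per tile, so perspective never changes between tiles. A rough sketch of the per-tile near-plane bounds (just the math, not any engine API; the FOV, aspect, and grid size below are illustrative):

```python
import math

def tile_frustum(fov_deg, aspect, near, col, row, n_cols, n_rows):
    """Near-plane bounds (left, right, bottom, top) for tile (col, row)
    of an n_cols x n_rows grid covering the full view frustum.
    The camera position and orientation never change."""
    top = near * math.tan(math.radians(fov_deg) / 2)  # vertical FOV
    right = top * aspect
    w, h = 2 * right / n_cols, 2 * top / n_rows
    left = -right + col * w
    bottom = top - (row + 1) * h  # row 0 is the top strip
    return left, left + w, bottom, bottom + h

# 90-degree vertical FOV, square aspect, 2x2 grid: the top-left tile
print(tile_frustum(90, 1.0, 1.0, 0, 0, 2, 2))
```

Feeding each tile’s bounds into an off-center projection matrix and rendering the tiles one by one gives a seamless mosaic, which is exactly what the Cinema 4D film-offset feature exploits.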

The client probably doesn’t know what they’re talking about

NVIDIA Ansel Plugin Overview | Unreal Engine Documentation perhaps?

We do a lot of these large prints. They’re 8-meter wall prints you can walk right up to.

As far as photography goes, 30,000 pixels is peanuts, especially at 72 dpi.
At 300 dpi it becomes something a little more noteworthy.
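The dpi comparison is easy to make concrete (a quick calculation sketch): the same 30,000 pixels give wildly different physical print sizes depending on the dpi.

```python
def print_size(px, dpi):
    """Physical print dimension for px pixels at a given dpi,
    returned as (inches, meters)."""
    inches = px / dpi
    return inches, inches * 0.0254

print(print_size(30_000, 72))   # ~416.7 in, ~10.6 m wide
print(print_size(30_000, 300))  # 100.0 in, 2.54 m wide
```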

Yeah, but you don’t need to render the image at that actual resolution; you render it smaller and scale it up.

If you stretch a regular 72 dpi 2560x1440 image to 30K pixels, you are printing a blur that will only look decent from the other side of the room.
Even 24-megapixel digital shots come out somewhat blurry at wall size; in fact, I’d use a 36 MP full-frame camera and composite at least 3 shots just to have the “extra” definition.

Obviously photography is a whole other world than rendering a scene in Unreal.
You are limited by the system as far as what you can or cannot do; no way around that.

Yeah, don’t render it at 2K, but you probably shouldn’t render it at the full 30K either.

72 dpi or 300 dpi doesn’t matter – 30K pixels is 30K pixels. The dpi is just a measure of how many of them fit in an inch. We do 30K-pixel prints all the time. BUT we do the renders at 15K and rez them up in PS. Much faster render times, and the loss of detail is negligible since you’re starting out with a lot of resolution.

This is not a solution. Why have a High Resolution Capture option in the first place? With most high-end graphics cards still having only 8 GB of video RAM, the current implementation is useless for ray tracing: I can’t take shots higher than 3x screen resolution. And why need it that large? Billboards. A typical billboard, even at 30 dpi, requires 14,400 pixels of width, which UE4 currently can’t deliver with ray tracing on an RTX 2080. I can only get half that resolution.
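The billboard figure checks out (assuming a 40-foot-wide face, which is what the 14,400 px number implies; the width is an assumption for illustration):

```python
def pixels_needed(width_feet, dpi):
    """Pixel width required to print a face of the given width at a given dpi."""
    return width_feet * 12 * dpi  # feet -> inches -> pixels

print(pixels_needed(40, 30))  # 14400
```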

There needs to be a way to offload the buffer to system RAM, even if that takes longer.

Meanwhile, as a workaround, I render a smaller image with ray tracing and blow it up, then render a larger full-size image without ray tracing, then retouch them to sharpen some edges back. But I’d really prefer not to do this.

UE4 is a real-time renderer; if you want to benefit from its features, you’re going to have to deal with the limitations that come along with it.