Why can game engines not use variable framerate? (Noob question)

I was wondering: why can a game engine not render things in the distance at a lower framerate, and things in the foreground at a higher framerate? For example, forest-covered hills in the distance only need to change their appearance / re-render every few seconds if the player is moving on foot.

They can, sort of. Animation and variable updates can happen at fixed intervals (see Tick Rate and Timers) instead of on Tick, the game’s running frame rate. Certain parts of the screen can also be updated at a lower rate than something you want to be in focus or have a different resolution than the focus.

The former can be done in UE4 out of the box with Timers and changing an actor’s Tick properties, but the latter I would think requires some engine modification on your part, unless I missed a release note somewhere.
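The fixed-interval idea behind Timers and Tick settings can be shown without any engine code. This is a minimal, engine-agnostic sketch of what a tick interval does internally; the struct and field names are made up for illustration, not UE4 API:

```cpp
#include <cassert>

// Sketch of fixed-interval updating: accumulate frame delta time and
// fire the expensive work only once enough time has passed, no matter
// how fast the game is actually rendering frames.
struct ThrottledUpdater {
    float Interval;           // seconds between real updates
    float Accumulated = 0.0f; // time since the last real update
    int   UpdateCount = 0;    // stand-in for the expensive work

    explicit ThrottledUpdater(float InInterval) : Interval(InInterval) {}

    // Called once per rendered frame with that frame's delta time.
    void Tick(float DeltaSeconds) {
        Accumulated += DeltaSeconds;
        if (Accumulated >= Interval) {
            Accumulated -= Interval;
            ++UpdateCount;
        }
    }
};
```

With an interval of 0.5 s, eight frames of 0.125 s each (one second of game time) fire the expensive update only twice, even though Tick ran eight times.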


Epic pushes LODs / billboards / the Proxy Geometry Tool as best practice for distant actors. That implies far-off actors with a low level of detail are cheap enough to render continuously versus temporarily disabling them. Makes sense, otherwise dynamic lighting etc. would break. Plus, presumably there would be noise and other aberrations from not updating things for too long (PBR materials?). You could test-drive Was Recently Rendered. Maybe it can offer some practical insights…

@Jared Therriault

I’d always assumed using Timers and changing Tick priority really just saved resources devoted to physics / collision / movement etc., not rendering… How much of a benefit do these bring to the renderer, any idea?

This question/topic sparked when I was thinking about the main limitation of 3D billboards: their texture size requirements. I thought: “Why can’t the GPU make billboards of distant hills at runtime, and just update the image every 30 seconds?”

[QUOTE]Certain parts of the screen can also be updated at a lower rate than something you want to be in focus or have a different resolution than the focus.[/QUOTE]

I expect this only helps the CPU side of things. I was thinking of the GPU aspects.

Unless there is no movement and nothing is happening on screen, you pretty much have to redraw everything on every frame.

Imagine you are in first-person perspective in a forest, with a mountain in the distance.
Imagine leaves falling from the trees, occasionally blocking part of the mountain. The leaf changes position on screen every frame, so each frame a different part of the mountain gets blocked from view, while the part that the leaf blocked in the previous frame becomes visible again.
If you only redrew the mountain every, say, 5 seconds, then for 5 seconds the falling-leaf images from the previous frames wouldn’t get overdrawn and you’d have ghost images of the previous leaf positions.

Imagine the background/skybox, like mountains in the distance, only getting drawn every 5 seconds.
When you move your camera by moving the mouse, the background/skybox would still appear the same, motionless, not reacting to your camera, while the rest of the level turned correctly. That’d look very disorienting. You have to redraw even far-away objects on every frame when moving the camera around.

Redrawing has to be done no matter what, but you can save resources by replacing 3D objects, if they are far enough away, with models with fewer polygons and fewer materials, or just use 2D billboards.

Thanks, stefanHohnwald, for explaining that. But what about storing the image in memory so it is a sort of billboard that is generated on the fly?
Sorry, I do not understand how GPUs or CPUs really work, please bear with my silly questions.

What you’re wanting is basically LODs: you manually create a low-detail mesh for something further off. UE4 can do that for large sections of the level as well, rather than doing it for each object. When you do that, things are simplified to the point that they’re not very impactful on performance.

Other things can also be set to update less often when further away. For example, in some games you might see animated characters off in the distance that look like they’re moving at a low framerate, but once you get close enough they switch to the full-quality animations.
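The distance-based switching described above can be sketched as a simple threshold lookup. The function name and the example thresholds below are made up for illustration; real engines expose this per mesh as screen-size or distance settings:

```cpp
#include <cassert>
#include <vector>

// Pick an LOD index from a list of switch distances: Thresholds[i] is
// the distance at which LOD i+1 kicks in. LOD 0 is the full-detail
// mesh; the last LOD could be a billboard. Returns the coarsest LOD
// whose switch distance the object has passed.
int SelectLod(const std::vector<float>& Thresholds, float Distance) {
    int Lod = 0;
    for (std::size_t i = 0; i < Thresholds.size(); ++i) {
        if (Distance >= Thresholds[i]) Lod = static_cast<int>(i) + 1;
    }
    return Lod;
}
```

With switch distances of 500, 2000, and 8000 units, an object at 100 units renders at LOD 0, one at 1200 units at LOD 1, and one at 10000 units at LOD 3. The same lookup can drive animation update rates instead of mesh detail.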

3D billboards essentially render many different billboards at multiple angles and load them in depending on the viewing angle of the billboard. Would it not be more efficient to generate the really large-scale billboards on the fly, and update them depending on how much the player’s viewing angle has changed? That way only one billboard angle has to be stored in memory, and the billboard will always be accurate regardless of viewing angle.
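The “update it when the viewing angle has changed enough” idea from the question can be sketched like this. The struct and the 15-degree threshold are illustrative assumptions, not any engine’s API:

```cpp
#include <cassert>
#include <cmath>

// A runtime-generated billboard (impostor) captured at LastAngleDeg.
// It counts as stale once the camera has swung more than ThresholdDeg
// away from the capture angle, taking the shortest way around 360.
struct Impostor {
    float LastAngleDeg;   // view angle at the last capture
    float ThresholdDeg;   // how far the view may drift before refresh

    bool NeedsRefresh(float CurrentAngleDeg) const {
        float Diff = std::fabs(CurrentAngleDeg - LastAngleDeg);
        if (Diff > 180.0f) Diff = 360.0f - Diff;  // shortest arc
        return Diff >= ThresholdDeg;
    }
};
```

An impostor captured at 0 degrees with a 15-degree threshold stays valid at 10 degrees, needs a refresh at 20 degrees, and also stays valid at 355 degrees (only 5 degrees away the short way round).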

You could render everything further than x meters to a cubemap, then render the scene normally but only with the stuff that is closer than x meters, and use that cubemap as the skybox.
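The near/far split this suggests can be sketched as a simple partition by distance. Everything in `Far` would go into the cached cubemap pass, everything in `Near` into the normal per-frame pass; the names and the use of bare distances instead of real scene objects are simplifications for illustration:

```cpp
#include <cassert>
#include <vector>

// Partition objects (represented here only by their camera distance)
// into the per-frame pass (Near) and the cached cubemap pass (Far).
struct SceneSplit {
    std::vector<float> Near;  // rendered normally every frame
    std::vector<float> Far;   // rendered into the cubemap, rarely
};

SceneSplit SplitByDistance(const std::vector<float>& Distances,
                           float FarThreshold) {
    SceneSplit Out;
    for (float d : Distances) {
        (d < FarThreshold ? Out.Near : Out.Far).push_back(d);
    }
    return Out;
}
```

With a 1000-unit threshold, objects at 10 and 300 units land in the per-frame pass while objects at 5000 and 20000 units land in the cubemap pass.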

No, it would be very difficult for the computer to figure out how to do that. By setting up LODs beforehand and assigning them to specific meshes/locations it is much, much faster (and would look better).

You just described what a 3D game engine is:
Look at where the player is looking, determine what’s visible, and render everything visible on the fly onto a flat 2D image/large 2D billboard, which then gets displayed on screen. :slight_smile:

Yes, technically we could store everything that has NOT changed since the last frame draw, but you can’t see into the future to know what will change on the next frame, so you don’t know what part of the frame to store. You could store the entire frame/game state of the last frame and then compare every single object of the previous frame with everything that happened on the new frame to determine what has changed, which is essentially more work than just redrawing everything every frame.

I know it’s quite unintuitive for us humans, but for the computer it’s usually cheaper and faster to just redraw everything, even if nothing changed, than to compare what changed since the last frame and still have to do its render work.

About using such a billboard in-game for the background:
Determining how much the player’s viewing angle has changed and drawing the background billboard on the fly is the same work (if not more) as just rendering it as real objects. To real-time render a scene onto a 2D billboard, the game would have to determine what to actually draw: look at a scene from an angle, determine what’s visible and then draw it… sounds familiar? It’s exactly what would be done if these were real objects instead of being projected onto a 2D billboard. Only with real-time billboard rendering you’d have the additional work of storing that 2D billboard and merging it with the rest of the scene, while without it the engine would just render the scene normally, without having to temporarily store an additional billboard.

As an analogy: imagine wanting to draw a forest with a mountain in the background on a piece of paper.
What is more efficient:
a) Just drawing the entire picture in one go (the screen render), with time saved since you don’t have to draw parts of the mountain because you can determine that part of the mountain will be blocked by trees, or
b) drawing the entire mountain (regardless of whether parts of it later get blocked by trees, you draw the entire thing) on one paper (the billboard), gluing that paper onto another paper (the screen render) and then drawing the trees over it, potentially overdrawing parts of the mountain that you have already drawn, thus having wasted time and paper?

Real-time billboards/real-time rendering to texture are only really viable for camera screens, mirrors, minimaps, or for other screen/texture effects, not for rendering backgrounds in real time.

Wouldn’t real-time rendering everything to a cubemap and then rendering that on screen take at least as much work as just rendering it as real objects?
Or do you mean pre-render it as a prerendered background and shove corresponding mesh objects in front of it as soon as you get close enough?

I understand most of what you’re saying there, thanks for such a detailed answer. It makes me think…

The render-target camera creating the billboard texture only has to update the texture once every 20 seconds or so, not 60 times every second. During that 20-second interval, all the 3D objects rendered to the billboard can be culled from the main camera, and do not need to be rendered by the render-target camera. Would this not save performance?

There’s no way for the camera to know when something won’t need to be re-rendered, and it would be very difficult for the computer to figure out which things should be part of the billboard.
And again, the things that are far enough away for what you’re thinking already have a very low performance impact due to LODs.

Yes, I guess so. And yes, UE4 already has good LOD tools for sure.


Sounds quite interesting. Sort of a skybox that updates on the fly, for the really extremely distant landscape. Could be useful for really massive open worlds.

The cubemap could also use a lower resolution and pre-calculate far DOF to hide low-res artefacts. Updating it only when the camera has moved enough would be key for performance.
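The “update only when the camera has moved enough” rule above can be sketched with a position threshold. The struct, field names, and the 100-unit default are illustrative assumptions; a real implementation would re-capture the six cubemap faces where the counter is incremented:

```cpp
#include <cassert>
#include <cmath>

// Re-capture the far-scene cubemap only once the camera has moved more
// than RefreshDistance away from where the last capture was taken.
struct FarCubemap {
    float CaptureX = 0.0f, CaptureY = 0.0f, CaptureZ = 0.0f;
    float RefreshDistance = 100.0f;  // assumed tuning value
    int   Captures = 0;              // stand-in for the capture work

    void MaybeRecapture(float X, float Y, float Z) {
        const float dx = X - CaptureX;
        const float dy = Y - CaptureY;
        const float dz = Z - CaptureZ;
        if (std::sqrt(dx * dx + dy * dy + dz * dz) >= RefreshDistance) {
            CaptureX = X; CaptureY = Y; CaptureZ = Z;
            ++Captures;  // render the six cubemap faces here
        }
    }
};
```

Moving 50 units does nothing, moving 120 units triggers one capture, and a further 30 units after that does nothing again, so the expensive capture runs rarely while the cheap distance check runs every frame.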