How do games like The Legend of Zelda: Breath of the Wild achieve such a seamless, beautiful world without using too much memory? More specifically, how do they make the terrain look so beautiful from far away? I already know about LOD and how to set things to only render when you’re close enough to them, but I don’t know if there are similar techniques to use on the terrain rather than on meshes. The new Zelda game lets you climb to the top of really high mountains or buildings, and from up there you can see mountains that look miles and miles away. Obviously the mountains aren’t rendering at full detail; in fact they’re rendering hardly any, and just look like a cartoony watercolor painting. But how are they rendering at all? Does the game render the whole terrain at once and then only add trees, enemies, and such as you go, adding fog to make it look prettier? To me that sounds ridiculous for a map as huge as that one, so I’m hoping there’s a better way that looks just as visually pleasing.
To be fair, this is handled more or less the same way across most open world games. The world is split into tiles. Only tiles near the player are loaded into memory, and they are unloaded when the player moves far away from them. Terrain tiles outside this range are replaced by low-complexity meshes that represent the terrain along with its prominent features and trees.
In UE4, such a system is called World Composition.
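A minimal sketch of that tile logic, in engine-agnostic Python rather than actual UE4 code (tile size, radius, and function names are all hypothetical):

```python
# Tiles within LOAD_RADIUS of the player's tile are fully loaded;
# everything else falls back to a cheap baked low-detail proxy mesh.

TILE_SIZE = 512.0   # world units per tile (assumed)
LOAD_RADIUS = 2     # tiles in every direction around the player

def player_tile(pos):
    """Map a world position (x, y) to integer tile coordinates."""
    x, y = pos
    return int(x // TILE_SIZE), int(y // TILE_SIZE)

def update_tiles(player_pos, all_tiles):
    """Split tiles into full-detail and low-detail-proxy sets."""
    px, py = player_tile(player_pos)
    full, proxy = set(), set()
    for tx, ty in all_tiles:
        # Chebyshev distance: a square window of tiles around the player
        if max(abs(tx - px), abs(ty - py)) <= LOAD_RADIUS:
            full.add((tx, ty))    # stream in full geometry, trees, physics
        else:
            proxy.add((tx, ty))   # swap to the baked low-poly stand-in
    return full, proxy
```

Each frame (or on a timer), the streamer diffs the new sets against the old ones and only loads/unloads the tiles that changed.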
The funny thing is that distance and object scale don’t affect processing that much. Say you have a little hill that looks like it’s 3 feet tall when your character walks up to it. Now take that hill as an asset, move it 2 miles away, and make it 1000x larger. It appears as a mountain, but the actual processing the mesh requires is exactly the same. Same material calls, same geometry for the hill; very large assets are not what is consuming your frame budget.
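A toy illustration of that point (not engine code): scaling and moving a mesh only changes the transform fed to the shader, not the per-frame numbers the GPU actually chews on, like vertex and triangle counts.

```python
def transform(vertices, scale, offset):
    """Apply a uniform scale then a translation to (x, y, z) points."""
    ox, oy, oz = offset
    return [(x * scale + ox, y * scale + oy, z * scale + oz)
            for (x, y, z) in vertices]

# A tiny one-triangle "hill"...
hill = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0, 0.3)]

# ...turned into a distant "mountain": 1000x larger, ~2 miles away.
mountain = transform(hill, scale=1000.0, offset=(3200.0, 0.0, 0.0))

# Same vertex count, same triangle count -- only positions differ.
assert len(mountain) == len(hill)
```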
Large amounts of unique texture/material calls, multiple physics objects, AI pathfinding and behavior, and lighting are much more taxing than the simple geometric calculations of a mesh like a mountain LODed down to around 500-2000 triangles. If you stripped the Zelda scenes down to an empty scene, you could place thousands of low-detail mountains everywhere if you set it up right.
What you don’t want is many highly detailed objects on screen with complex physics, complex lighting, and complex AI. That will slow the game down to a crawl even if the background is completely empty and the game is on a single plane.
Skyrim’s engine, for example, uses tiles which are actually NIF models with a different extension name. They are divided into several LOD levels and use a background mesh. Everything in the distant LOD is a billboard, so it is reduced to only a few vertices. It is based on a nine-block system where the nine blocks are centered on the player and the surrounding blocks use reduced LOD. The LODs are generated from the actual meshes and world at build time, so all those buildings, trees, and rocks in the distance are actually billboards. When the player moves into a new center block, the distant LOD is replaced with the actual models, and the area behind is reduced to LOD if it was at full detail. In other words, the engine swaps different LOD models in and out in relation to the player’s location.
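The nine-block swap can be sketched like this (hypothetical names and block size, not the actual engine code): the block under the player plus its eight neighbors hold full models, and crossing a block boundary tells you exactly which blocks to promote and which to demote.

```python
BLOCK_SIZE = 4096.0  # world units per block (assumed)

def block_of(pos):
    """World position (x, y) -> integer block coordinates."""
    x, y = pos
    return int(x // BLOCK_SIZE), int(y // BLOCK_SIZE)

def nine_blocks(center):
    """The center block plus its eight neighbors."""
    cx, cy = center
    return {(cx + dx, cy + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}

def on_player_moved(old_pos, new_pos):
    """Return (blocks to promote to full models, blocks to demote to LOD)."""
    old_set = nine_blocks(block_of(old_pos))
    new_set = nine_blocks(block_of(new_pos))
    return new_set - old_set, old_set - new_set
```

Stepping one block east, for instance, promotes the three blocks on the new leading edge and demotes the three that fell off the trailing edge; the other six stay loaded.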
Thank you all so much for your help, I did a lot of Googling before I came here but wasn’t getting anywhere. This helps so much.
Keep in mind Zelda: Breath of the Wild has a heavy fog effect, so you can barely see the distant landscape anyway.
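That kind of distance fog is usually some variant of the classic exponential fog equation (this is the generic textbook formula, not Nintendo’s actual shader): the farther away a fragment is, the more its color is blended toward the fog color, so low-detail distant terrain reads as a soft painterly silhouette.

```python
import math

def fog_factor(distance, density=0.002):
    """Exponential fog: 1.0 = no fog (near), -> 0.0 = fully fogged (far)."""
    return math.exp(-density * distance)

def apply_fog(color, fog_color, distance, density=0.002):
    """Blend a fragment color toward the fog color by distance."""
    f = fog_factor(distance, density)
    return tuple(c * f + fc * (1.0 - f) for c, fc in zip(color, fog_color))
```

With the (assumed) density of 0.002, geometry a few thousand units out is almost entirely fog color, which also conveniently hides the LOD transitions.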
I read this with great interest. The Sam Jones video above puts into perspective the variables driving performance, the key drivers in my case being geometry, textures, and lighting (code, physics, sprites, spawned characters, and animations all being immaterial). I’m using UE4 not for video games but for VR applications; my twist is the use of a novel 3D capture system that delivers virtual environments of real-world places that fully support relighting, i.e. data-based diffuse and isolated specular. I can control the number of meshes a set is broken into, the polycount per mesh, and the number and size of textures per group of meshes (a mesh in the photogrammetry engine is exported in parts, or sub-meshes). I welcome any thoughts on what constitutes optimization at the front end during export from the photogrammetry software, and on how to then leverage World Composition and LOD in UE4 to support 8K textures. A user will explore a cave environment on the macro scale, just like a real caver: heading down the middle of a cave passage, but pushed close to the walls when the passage gets tight, not to mention the value of letting the user explore the tiny stuff in a cave close up. The source imagery is 52 MP, so the datasets provide far higher detail than any game engine can hope to keep pace with; here’s where optimization promises to glean the most from the source data. Big thanks for guiding a newb!