Ah, thanks so much “Chocker” for explaining that. I read the entire thread (that was my proof, I did read it), and it was derailed like 75 times per page.
It is actually the most condensed and precise description of the thread so far.
So let me get this straight: people make empty terrains that look pretty, with a bunch of layers, get decent fps, and consider that an accomplishment? Throw in even a few hundred static meshes, trees, grass, 10 or more active enemies with decent AI, particle effects, animations, sounds, widgets, etc. etc. and get back to me on your performance… Whether you have fancy 4k landscape textures and subtle overkill blends using even more 4k textures is going to be the least of your worries, because there will be bigger performance-vs-quality battles to fight. Yes, my project is going to be capped at 30fps, but it will be 1080p (I’m currently developing on a crappy laptop with a 540M). I’ve done a lot of recent testing on a 940M laptop and can easily do the same stuff I was talking about, at 1080p, with way higher frame rates. My game is geared around reasonable system specs. Too often, people develop on some ultra-high-powered PC and then wonder why they spend the next 4 months after launch fixing performance issues. Check out the Steam hardware survey and see what the average computer is using…
As for the age of my account here: yeah, I signed up for UE4 two years ago, but I only started actually using this engine and developing with it three or four months ago, and I’ve made quite a bit of progress already. I don’t feel the need to share with people, and I keep my project under pretty tight wraps, because I’m not an angsty 14-year-old who needs validation and approval from others in order to boost my morale. It’s the kind of thing that supposedly “professional” companies do.
Again, when you’re comparing the texturing system in UE4 to other engines, you’re comparing apples and oranges. I don’t know a metric ton about some of the other engines out there, but when you say engine X can have Y layers, the question becomes “at what cost?” Do they have to get baked in (like when you build Lightmass maps in UE)? Can the individual layers be tweaked in real time, like when using material instances? Are there limitations to how many of the Y layers can actually be blended at a time (like what was discussed earlier in the thread) per component? As in, you have 10 layers on the component, but only three can be blended with each other per pixel. How aggressive are their LOD systems? During LOD, does the terrain drop extra layers or material channels (normals, specular, etc.)? Are only the nearest components rendered with more detail than the others? What about the internal rendering and things like filter quality; are they doing some kind of up/down scaling? Do they use some kind of weighted system that determines what resolution of a texture to use? For example, if it were a 4k texture, but the layer only had 10% average opacity per component, do they scale that 4k down to 512 because it isn’t very visible?
The list goes on and on. There are so many different ways you can cut corners in order to gain performance, and 99 times out of 100 it’s going to be at the expense of fidelity. That’s why almost every single AAA company custom-tailors an engine to THEIR needs. A game engine can’t cater to all possible desires. It would be like dropping a boxer engine into a big pickup truck and then complaining that the truck doesn’t tow the weight you want it to. You’d either have to modify the engine, as in scale it up in size and tweak it to your needs, OR you’d have to design the truck AROUND the engine, based on what the engine can output. When you’re designing a game using the stock UE4 engine, you have to design around its limitations, unless you want to write your own rendering engine.
As for the virtual texturing topic, take a look at how many games actually use virtual texturing… I’d be willing to bet that it’s less than 1% of ALL games that come out per year (indie and AAA combined). Even looking at only AAA titles, I don’t think that many use them either.
Getting back on track for the thread: landscape works great when you don’t abuse it with overly complex materials per individual layer. Let’s say each layer uses only three 4k textures; that’s 192MB of texture information, and multiplying by 10 layers puts you at almost 2GB of VRAM taken up by landscape materials. With 10 layers, you’re looking at 640MB of VRAM per texture channel needed (diffuse, specular, normal, etc.). When you start pulling a whole bunch of 4k texture samples per layer, combined with a bunch of shader math, it’s going to spell disaster in ANY engine. There is absolutely no arguing this lol…
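For what it’s worth, the arithmetic above can be sketched out. This assumes uncompressed 8-bit RGBA (4 bytes per pixel) and ignores mip chains and block compression, which is the only way the 192MB-per-layer figure works out; real streamed, block-compressed textures would be considerably smaller:

```python
# Back-of-the-envelope VRAM math for landscape layer textures.
# Assumes uncompressed RGBA8 (4 bytes/pixel), no mip chain, no
# block compression -- the same assumptions behind the numbers above.

MB = 1024 * 1024

def texture_mb(resolution: int, bytes_per_pixel: int = 4) -> float:
    """VRAM for one square texture of the given resolution, in MB."""
    return resolution * resolution * bytes_per_pixel / MB

textures_per_layer = 3   # e.g. diffuse, normal, specular
layers = 10

per_texture = texture_mb(4096)                 # 64 MB per 4k texture
per_layer = textures_per_layer * per_texture   # 192 MB per layer
total = layers * per_layer                     # 1920 MB, almost 2 GB
per_channel = layers * per_texture             # 640 MB for one channel across all layers

print(f"per 4k texture: {per_texture:.0f} MB")
print(f"per layer:      {per_layer:.0f} MB")
print(f"total:          {total:.0f} MB")
print(f"per channel:    {per_channel:.0f} MB")
```

Dropping a channel from 4k to 2k quarters its footprint in this model, which is why the later suggestion of using 2k for the less visible channels makes such a large difference.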
The first issue to present itself to me is that your test is flawed. Here’s how: the issue Maximum-Dev is having is fps drops. So how do we prove or disprove something? We set up a test, which you did. Then we test for the issue in question: fps drops. How did you test this? You capped your fps. So what did we accomplish? Did you test for the problem at hand? Did you disprove anything?
How much VRAM does your 540M have? Doesn’t running this test at 720p@32 on a low-VRAM mobile GPU mean the engine is going to stream in much lower mip levels? Maybe there is enough VRAM; I don’t know all the texture resolution requirements people were holding up as a benchmark.
I’m a different person, I’ve got a 1070 with 8GB of VRAM and tested it in 1080p and got 100+ FPS.
God bless you.
@, I read your first line and don’t know what you’re talking about or how it’s in any way related to the purpose here. Didn’t read the rest of your post.
@cyaoeu, it’s good for you that you’re getting 100+ FPS with that shader on a 1070. The only things wrong are that a 1070 isn’t mid-range hardware, that shader isn’t production quality, and even 100 FPS for that shader on that hardware is bad performance.
The point is that optimization is a must, and throwing 3GB worth of 4k landscape textures (diffuse, normal, variation, etc. over “10 layers”) into the mix isn’t great for performance as soon as you add actual content to a level.
That shader was never meant to be production quality… It was a quick mashup to prove a point. Speaking of which, I spent 20 minutes messing around in WM, created a 4km map, threw 6 layers on it, all with tessellation, and am still getting pretty decent fps. But hey, I figured I’d pretty it up a little for those who wanted to talk trash. It’s using all starter-content materials that I plugged into that function I posted earlier. It’s definitely starting to bog down on Epic settings with all the tessellation, unnecessary overkill on blending, and fancy camera stuff, but that’s just the limitation of this dated 540M. I have an actual optimized material, but I’m using this one for artificially stress-testing.
So that’s with six layers on. Any more and my computer will crash because I run out of ram for shader compiling.
For two pages you have only been blaming “texture size” for the lack of performance caused by the absence of proper open-world support, without reading other people’s posts. The whole point is to have a minimum of 12 layers on 8k or bigger landscapes on mid-range hardware, and you come up with a test case with a small landscape and 6 layers, which happens to crash on your own laptop. That’s not constructive.
Yes, I definitely am blaming texture size for it. You can sparingly use a 4k here and there, but for the other channels, drop them down to 2k. You don’t need a 4k normal map for some gravel on a path, just like there would be no need for a 4k specular map or noise/variation textures. They aren’t high-frequency enough to justify it. Even if it were a micro-variation texture, it’s not necessary. If the tiling on the texture is going to be smaller than a certain screen-size percentage, it’s wasted.
And my landscape was 4033x4033 resolution, so I could easily scale it up to 8k and still maintain “decent” resolution. The reason it will crash is that I’m applying whole-layer blending with mask files (think splats). When it goes to blend that in with all the other layers, on 1024 components, it uses a ton of RAM. After it’s done compiling, it’s done and runs fine. I only have 6GB of RAM… If I manually paint on 16 layers, it works just fine… It just can’t handle doing the whole map at once after six or so layers.
One key thing that I forgot to bring up: when people are using WM for making textures or masks, are you making sure to override the resolutions back to a power of two? I’m not talking about height maps that will be used for splat-mapping whole layers at a time, because those have to stay the same size as the landscape. But if you export a texture at something like 4033x4033 to use as, say, a normal map, it won’t mip because it’s not a power of two (as far as I know, unless something has changed). That right there could mess with performance and would be pretty easy to overlook.
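The check itself is trivial; here is a minimal sketch (not UE4 API, just the general power-of-two rule) for anyone sanity-checking their WM export resolutions:

```python
def is_power_of_two(n: int) -> bool:
    """True if n is a positive power of two (512, 1024, 2048, ...)."""
    return n > 0 and (n & (n - 1)) == 0

def next_power_of_two(n: int) -> int:
    """Smallest power of two >= n, e.g. 4033 -> 4096."""
    p = 1
    while p < n:
        p *= 2
    return p

print(is_power_of_two(4096))    # True: full mip chain possible
print(is_power_of_two(4033))    # False: the WM default size won't mip
print(next_power_of_two(4033))  # 4096: the resolution to override to on export
```

So a 4033x4033 WM export should be bumped to 4096x4096 (or resampled down to 2048) before being used as a tiling texture.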
And roughly 19 of those minutes were spent on building erosion, with the remaining minute equally distributed between reference gathering, planning, node layout, splatmap setup, and intermediate builds. I suggest you re-distribute your efforts the other way around.
You should definitely add more blur. I can still see some of the foreground. And don’t forget to bump up distance fog. It helps.
At first I thought the screenshot was from WM preview window.
Also worth mentioning that the presentation is lacking. Commonly, you wouldn’t want to include the editor interface in your screenshot.
Overall it’s a pretty decent start, but there is a long road of improvement ahead of you. Keep going!
That is cute indeed, and what is more important, it directly supports the OP. I agree.
It played out more like this: 3 minutes tweaking the setup, 1 minute foldering/naming outputs, and 12(?) minutes simulating. During those 12 minutes, I twiddled my thumbs and eventually went into UE4 and swapped a few layer slots around. I also made a quick light function and added a little bit of atmosphere, because what good is a landscape for a game if there isn’t any atmosphere to it? Eventually it finished simulating, and I spent 15 seconds in Photoshop crushing the levels on the flow map, because they had too much neutral gray in them, and quickly hit save. For the next 60 seconds, I imported the height map, added in the material, added in the blend layer info, right-clicked, hit load from file; rinse and repeat for the remaining layers. It was at this point that I realized I hadn’t saved the normal map for the terrain and had already closed out of WM (didn’t save the layout either), so I threw my hands up and said: “Oh well, guess there’s no normal map today.” Then I raced around looking for a good scenic spot and dropped a camera in place so fast that even a fly wouldn’t have seen it coming. For the next 2 minutes, I tweaked a few camera settings, played with the fog a little, and added in some DoF (which is a royal pain to tweak when you’re dealing with distances in the 100ks). By the end of it all, I was sweating so hard that I had to go outside, and my neighbor saw me and told me I should change my last name to Sweat.
Jokes aside, I deleted the map 5 minutes after I posted that image, because it’s not going in my game. It’s just a joke map (it even says so in the screenshot) for the sake of demonstration, showing that I was blending 6 artificially over-complex layers, and to prove an easy point: simulating out a landscape doesn’t make you a landscape artist. People insisted on criticizing my lumpy tech-demo levels with rainbow-tinted textures, so I figured why not spiff it up a little…
And no, it doesn’t directly support the OP, because I’m pretty sure the OP isn’t working with 6GB of RAM… Well, technically only around 3GB to work with, because W10 + browser + PS are already using over 2GB, plus my video card shares RAM as well. I have to open/close UE every hour or so to keep it from hard-drive-thrashing my page file. If I had even 12GB, it wouldn’t be a problem.
7 lines of joke? :eek:
EPIC, OP is updated.
I thought I had already left a message in this thread somewhere, but I guess not.
Anyway, just wanted to say I’ve been reading this thread for a few days as you’ve all been talking. I’ve gotten some really great feedback out of this. Thanks to everyone who is adding to the conversation! I didn’t want to interrupt, but I wanted to let you know we’re paying attention.
Please let us know about any more details we can help polish out with these large-world rendering performance issues, and I’ll keep them in my reports and in front of the developers.
I don’t know if the engine already does something like this, but I would suggest the landscape tool analyze the layer weights and then automatically mip down textures that have lower opacity in the blend. If you have two 4k textures, one at 30% and one at 70%, you can pretty much take the 30% one all the way down to 1024, because it will barely make a noticeable visual difference.
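A hypothetical version of that heuristic might look like the sketch below. To be clear, this is not anything UE4 does; the function name, the thresholds, and the mapping from average blend weight to a resolution cap are all my own illustrative assumptions:

```python
def capped_resolution(base_res: int, avg_weight: float) -> int:
    """Pick a resolution cap for a layer texture based on its average
    blend weight over a component. Each halving of resolution is one
    mip level; low-weight layers can afford to drop several.
    Thresholds are illustrative assumptions only."""
    res = base_res
    if avg_weight < 0.5:
        res //= 2   # e.g. 4096 -> 2048
    if avg_weight < 0.35:
        res //= 2   # -> 1024
    if avg_weight < 0.15:
        res //= 2   # -> 512
    return max(res, 512)  # floor so faint layers don't turn to mush

print(capped_resolution(4096, 0.70))  # 4096: dominant layer keeps full res
print(capped_resolution(4096, 0.30))  # 1024: matches the 30% example above
```

With uncompressed RGBA8, dropping the 30% layer from 4096 to 1024 cuts that texture’s footprint by a factor of 16, so even a crude weight-based cap like this would claw back a lot of VRAM.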
Here is a quick example of what I mean. I tried to pair up a couple of REALLY contrasting 4k textures, where I have a dark base texture and a light, high-frequency texture blending on top at 50% opacity. Oh, and excuse the slight offset; I must have nudged it by accident:
Here are what the base textures look like (only uploading the 1k versions to show what I mean)
I’ll put this in a way so that everybody understands it.
It would be epic from Epic to start giving us some epic performance improvements instead of **all** the new fancy-pantsy stuff.
When I see what others can do, even on consoles, I’m getting itchy all over.
This game outputs 4k at a stable 30FPS in an open world on the PS4 Pro … https://www.youtube.com/watch?v=qTH8OnQKWeY
While I agree with you, what’s so good about that gif? Frustum culling is already performed properly by UE4.
Is it possible to observe that from within the engine, e.g. with a second camera actor? I would like to get behind the mysteries of UE4’s rendering to find bottlenecks in that regard (while I am sure that everything has been pointed out in this thread already). It’s just that something tells me this *problem* has not been properly acknowledged yet.
UDK had an Occlusion Preview viewmode, but it doesn’t seem to be there in UE4.
Still, you can use the console command FreezeRendering and move the camera to observe the behavior of both occlusion and frustum culling.
All occlusion methods will basically affect the rendered actors/polygons, though, while the bottleneck of landscape performance observed in this thread is the pixel shader (if we don’t consider tessellation, which is considered prohibitive).
As said, that is done automatically in UE4, but there is currently not a way to separate out the camera. I’m looking into the possibility of a feature request for it though. It was a feature in UE3, so I can see why you would expect it to also be in UE4.
I’d recommend checking out Tim Hobson’s blog posts about culling too: http://timhobsonue4.snappages.com/culling-visibilityculling.htm
@ Thanks for the suggestion!