Any update about Texture Array?
Simply adding texture arrays isn’t going to make performance any better on its own. What kind of changes to the landscape material do you suggest that would improve performance when combined with texture arrays?
Indeed, simply replacing fetches of individual textures with fetches from arrays would not accomplish anything.
Right now, if you have 16 landscape layers, you need to do 16 fetches.
You need to modify the material and/or weightmap code in such a way that you fetch a reasonable number of times: not 16, but 2-3.
As an example, you can have 3 weightmaps. Weightmap 1 controls which array slice is picked during the first fetch. Weightmap 2 controls which array slice is picked during the second fetch. Weightmap 3 controls the blend between the results of the first and second fetch. On the output, you have a landscape with NumArraySlices layers, while not being able to blend more than 2 layers at any given vertex, at a cost only slightly above that of a material with just 2 layers.
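To make that concrete, here is a minimal CPU-side sketch of the blend math (illustrative names and a one-float-per-slice stand-in for the texture array; in the material itself this would be two texture-array fetches plus a lerp):

```cpp
#include <array>
#include <cassert>

// CPU sketch of the 3-weightmap scheme. Each array "slice" is reduced to a
// single float for illustration. Weightmaps 1 and 2 hold slice indices,
// weightmap 3 holds the blend factor between the two fetches.
float BlendTwoSlices(const std::array<float, 16>& slices, // the texture array
                     int sliceA,   // from weightmap 1
                     int sliceB,   // from weightmap 2
                     float blend)  // from weightmap 3, in 0..1
{
    const float a = slices[sliceA]; // first fetch
    const float b = slices[sliceB]; // second fetch
    return a + (b - a) * blend;     // lerp(a, b, blend)
}
```

At any given point at most 2 of the 16 slices contribute, but across the landscape all 16 are available at a fixed cost of two array fetches plus the weightmap reads.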
Alternatively, just store 3 slice IDs and their weights for each vertex.
It didn’t make it into 4.19. We have to wait a bit longer.
I implemented a similar system for the last game I worked on (Hardland). There is one big downside: you can’t interpolate the texture ID map. My solution was to fetch the texture ID map with Gather and then do all material sampling four times. In the typical case all 4 texture IDs were the same, so all texture samples were the same and this could be optimized, but the worst case is 4 times more expensive.
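A rough CPU sketch of that Gather path (hypothetical names; a real implementation would use `Gather` in HLSL and the actual bilinear weights of the sample position):

```cpp
#include <array>
#include <cassert>

// The 4 texel IDs returned by a Gather of the texture-ID map, plus the
// bilinear weights of the sample point within that texel quad.
struct GatherQuad {
    std::array<int, 4> Ids;       // layer IDs of the 4 neighboring texels
    std::array<float, 4> Weights; // bilinear weights, sum to 1
};

// One float per layer stands in for a full material evaluation.
float SampleLayer(const std::array<float, 8>& layers, int id) {
    return layers[id];
}

// Fast path: if all 4 IDs match, one material evaluation suffices.
// Worst case: 4 evaluations blended with the bilinear weights.
float BlendGathered(const std::array<float, 8>& layers, const GatherQuad& q) {
    if (q.Ids[0] == q.Ids[1] && q.Ids[0] == q.Ids[2] && q.Ids[0] == q.Ids[3])
        return SampleLayer(layers, q.Ids[0]);
    float result = 0.0f;
    for (int i = 0; i < 4; ++i)
        result += q.Weights[i] * SampleLayer(layers, q.Ids[i]);
    return result;
}
```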
I have read that some current games skip this manual blending by using stochastic sampling and evaluate only a single layer per frame. This could work with UE4’s temporal AA.
Yeah, but you do not need to interpolate it.
If you sample array 4 times to get 4 layers, that is useless.
You are either restricting the blending span to a single triangle or separating ID maps from blend maps. In any case, the array slice ID should only change in areas where the layer weight is 0.
As for distributing sampling over time, I am unaware of any shipped or soon-to-be-shipped titles that use it. If you have any doc to link, I’d be grateful for a good read. But so far my thinking is that we already have enough of a blurry mess all around, introduced by temporal supersampling of other features. I don’t think that introducing one more source of it is justified.
The landscape material system can’t work with vertex data because of LODing. A distant landscape component may have only 2 triangles, so the data has to live in textures. The restriction that an ID can only change where the associated blend weight goes to zero might work, but it can also cause other headaches: the painting system then needs to take care of this restriction. In our system there were multiple biomes, and that was the only sane way to blend between them. In other use cases there might be better ways.
I can’t find any links to that stochastic material layer blending system. I remember reading about it in Twitter comments, which are really hard to find again; I will keep searching. It might sound crazy, but it can be a really big performance win: instead of paying for multiple layers, the cost would be just one, plus some spatial + temporal dithering logic.
Technically there is no difference between vertex data and a texture in this context. Geomorphing regions require special handling; the rest is the same. When blending 3 layers per triangle you of course do not have fine-grained control of the blend, and LOD morphs are visible. But generations of games were made using this technique, even before texture arrays were introduced, and it was fine. What is more important is that it actually runs, not crawls. But it requires changes to the landscape and painting tools which are not backwards compatible. I bet that Epic would not even consider going for that, and that is absolutely justified, especially considering that virtual texturing has appeared on the roadmap.
However, you can still use the existing system to encode data your own way in landscape weightmaps and get exactly the same system. Have 3 weightmaps that each define the ID of a slice (you can probably pack it tighter and use 1 weightmap for the IDs of two layers) and 3 weightmaps that each define the weight of a layer. If your texture array has 16 textures, you have 16 layers at your disposal at a fixed cost of 3 layers. Nothing stops you from fetching different arrays for each layer. At this point, neither POM, nor tessellation, nor 3-way texturing, nor distance blending is that dreadful any more. Additionally, good luck trying to compile a more or less advanced landscape material with 24 layers; it is simply impossible. With arrays, it is just 1 shader permutation for everything. Lastly, and most importantly, the terrain cost is consistent: you do not get abrupt changes in frame time just because a landscape component with 6 layers silently drifted into view.
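On the packing remark: with a 16-slice array, two 4-bit slice IDs fit into one 8-bit weightmap channel. A hedged sketch of the encode/decode (illustrative names; the shader side would decode from the 0..1 channel value):

```cpp
#include <cassert>
#include <cstdint>

// Pack two 4-bit slice IDs (0..15) into one 8-bit weightmap channel.
uint8_t PackSliceIds(int idA, int idB) {
    return static_cast<uint8_t>(((idA & 0xF) << 4) | (idB & 0xF));
}

// Decode as a shader would: the channel arrives as value/255 in [0,1].
void UnpackSliceIds(float channel01, int& idA, int& idB) {
    const int v = static_cast<int>(channel01 * 255.0f + 0.5f); // round back to 0..255
    idA = v >> 4;
    idB = v & 0xF;
}
```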
As for painting being a pain with this: indeed it is, but exporting weightmaps from content creation software for this method is not hard, and getting used to painting like that takes a few days. On top of that, changing the painting tool per project is trivial, unlike the whole landscape code. Physical surfaces are also a problem, but the ability to bake them from the landscape material has also been requested since the first versions of UE4.
I will try blending layers across frames when I have some spare time, just for the sake of experiment, but I think the results will be disappointing.
When I think of “texture array support” I expect full editor and material graph integration. Side-loading hand-crafted DDS files does not count as a proper PR IMO. Editor support would involve a new texture asset type, some sort of editor to manage the slices and the material nodes to access them.
Also, the engine already uses arrays internally (for example, reflection captures are stored in cubemap arrays). If the idea is to use texture arrays for high-performance terrain, simply throwing them on top of the existing permutation-based system doesn’t cut it and you’ll get little or no performance advantage: while you cut down the number of bound textures, if the number of draw calls and texture fetches remains the same, it’s no use.
The key feature of a texture array is being able to have the shader dynamically select which texture (slice) to sample from. For the UE4 terrain to take advantage of that, it would need to go through a paradigm shift like getting rid of the per-sector shader permutations and dynamically selecting the relevant layers directly in the shader, most likely using dynamic branching and looping.
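The kind of shift meant here, sketched on the CPU (illustrative only; in HLSL this would be a dynamic loop over the layers present on a component, with a branch skipping zero-weight layers):

```cpp
#include <array>
#include <cassert>

// Instead of a shader permutation per layer combination, one shader loops
// over all layers and dynamically skips those with zero weight. The slice
// index used for each fetch is data, not a compile-time permutation.
float EvaluateLayers(const std::array<float, 16>& slices,   // texture array stand-in
                     const std::array<float, 16>& weights)  // per-pixel layer weights
{
    float result = 0.0f;
    for (int i = 0; i < 16; ++i) {
        if (weights[i] <= 0.0f)
            continue; // dynamic branch: absent layers cost (almost) nothing
        result += weights[i] * slices[i]; // slice selected at runtime
    }
    return result;
}
```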
I’d totally agree here, but if choosing between nothing and DDS import, I’d pick DDS import.
The current system tags along with arrays just fine, apart from actual painting and physical surface layers.
In the end, arrays are not only for terrain. There are many other uses.
This is only true with a really big caveat: all layers need to be identical, with only the texture changing, or every layer has to pay the worst-case cost.
Yep, identical in terms of texture size and the features that are used. Nothing changes but the texture. If you are doing 3-planar mapping, you are doing it for the whole graph (or at least for one atlas fetch). If you want wetness or puddle painting, you are also doing it for the whole material, not one layer.
Just to clear a couple of things up: that PR does have material support, and it does have a new asset type. The only thing it lacks is an editor to edit the asset directly instead of loading pre-created DDS files. I didn’t write the original PR; I just sort of maintain it, as I have time, at this point. I’d be willing to tackle giving it actual editor support now if there were any reasonable guarantee that Epic would actually use it. That’s not a small amount of work to do well, and this PR has been open now for almost 2.5 years.
First, I wanted to thank Koderz for the work on editor support for texture arrays. Support for them is one thing I had in Unity that Unreal was lacking, so this helps a lot.
I was wondering, has anyone ever set up a texture array in C++? I’m playing with a lot of procedurally generated textures as well as hand-made ones, and I need to combine textures into arrays as things are loaded (so as to get efficient groupings). Which means generating the whole array of textures in C++.
GDC2018_TerrainRenderingFarCry5.pdf - Google Drive
Far Cry 5 uses stochastic sampling for its terrain material. Really nice slides.
Nice slides indeed. Thanks for the link.
Worth mentioning that all layers are evaluated in a single frame there; no temporal sorcery is involved.
I gave a try to this in UE4 and here are my thoughts/observations:
Hashed alpha has remarkable temporal stability.
Without an optimization pass, my test implementation, as referenced in the slides/paper, adds 62 instructions (most of them slow ones) if you replace the world_aligned_texture_complex material function with a stochastic version, making a material that uses it heavily ALU-bound rather than fetch-bound.
In connection with the above: roughly, the whole method starts to become viable once the number of texture lookups collapsed into a single one reaches something like 6-8.
Noise is visible in areas where one or more material attributes are contrasty enough, and almost unnoticeable in low-contrast areas.
Noise is further reduced once you blend in an additional layer with conventional blending.
Speaking of problems, the main one is that you cannot height-blend the layers/projections.
Secondly, tessellated displacement and POM are out of the question with this.
Lastly, for all the speed benefit it brings, it still introduces noise that won’t be acceptable everywhere.
These three kinda make the method perfect for distance-blended 3-way cliffs, and it is probably worth using on landscape cliff layers in UE4 right away.
But the main point is that you still need texture arrays to make the best use of this method. Without them, you are limited to blending projections / distance blends.
The whole thing starts to shine when you add temporal jitter: noise gets reduced to almost nothing by TAA.
In fact, a regular screen-space pattern with temporal jitter does not look much worse than an object-space hash, while costing less.
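For reference, the core of the stochastic selection is tiny. A hedged CPU sketch (hypothetical names; the hash would come from an object-space hash or a screen-space pattern, plus the temporal jitter mentioned above):

```cpp
#include <array>
#include <cassert>

// Pick exactly one layer per pixel by treating the layer weights as a CDF
// and sampling it with a per-pixel hash in [0,1). Averaged over neighboring
// pixels (and over frames, under TAA jitter), this converges toward the full
// weighted blend at the cost of a single layer evaluation per pixel.
int SelectLayerStochastic(const std::array<float, 4>& weights, // sum to 1
                          float hash01)                        // per-pixel hash
{
    float cdf = 0.0f;
    for (int i = 0; i < 4; ++i) {
        cdf += weights[i];
        if (hash01 < cdf)
            return i;
    }
    return 3; // guard against float round-off
}
```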
Here is a side-by-side comparison of two 3-planar texturing material functions:
Really nice. Sadly, these kinds of optimizations are really hard to make work with the UE4 material editor. Without dynamic branching and texture arrays you need to do everything in a custom code block. It would be nice to have an option to do stochastic sampling for the new material layer blending.
Can anyone tell me what happened to the pull request? It throws a 404. It would also be great to know if it will be implemented (full material node support), because I would not bother implementing it myself if it’s only months away.
In order to view the repository you need to be signed into GitHub with the same username you entered into your settings on this website.
If you haven’t already done this: move your mouse over your name in the top right corner of this page, select Personal, then on the left select Connected Accounts, enter your username into the GitHub field (it is case sensitive), and click Save at the bottom of the page. You should receive an email confirming the change. Make sure you are signed into GitHub, then try clicking the link again; it should work.