In theory, since Nanite can render geometry at or near screen-pixel density, tessellation and displacement should no longer be necessary, as you can build the detail right into the mesh.
Obviously, if your workflow depends on using displacement maps on, say, hard-surface geometry for high detail, you'd have to move to a workflow where you bake that topology into the mesh beforehand. That does mean larger asset sizes, but the theory, anyway, is that you shouldn't need to tessellate in materials.
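For what it's worth, the offline baking step itself is conceptually simple. Here's a minimal sketch (plain C++, all types and helpers hypothetical, assuming a UV-mapped mesh with UVs in [0,1] and a single-channel heightmap) of what "building the detail into the mesh" boils down to:

```cpp
#include <cstddef>
#include <vector>

// Minimal stand-ins for a mesh vertex and a single-channel heightmap.
struct Vec3 { float x, y, z; };
struct Vertex { Vec3 position; Vec3 normal; float u, v; };

// Nearest-neighbour heightmap lookup (hypothetical helper).
float SampleHeight(const std::vector<float>& heights,
                   std::size_t width, std::size_t height,
                   float u, float v)
{
    std::size_t x = static_cast<std::size_t>(u * (width - 1));
    std::size_t y = static_cast<std::size_t>(v * (height - 1));
    return heights[y * width + x];
}

// "Bake" a displacement map into the mesh: push each vertex along its
// normal by the sampled height. This is what tessellation+displacement
// did on the GPU, done once offline before import instead.
void BakeDisplacement(std::vector<Vertex>& vertices,
                      const std::vector<float>& heights,
                      std::size_t width, std::size_t height,
                      float amplitude)
{
    for (Vertex& vtx : vertices) {
        float h = SampleHeight(heights, width, height, vtx.u, vtx.v);
        vtx.position.x += vtx.normal.x * h * amplitude;
        vtx.position.y += vtx.normal.y * h * amplitude;
        vtx.position.z += vtx.normal.z * h * amplitude;
    }
}
```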
Currently, Nanite doesn't work with vertex shader World Position Offset at all, which I imagine comes down to instancing requirements (the same reason per-instance vertex painting isn't supported). I suspect that will change eventually, if it's technically feasible, since WPO is usually needed to support foliage animation.
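For context, the kind of World Position Offset that's blocked includes even trivial foliage sway. A rough sketch of the idea, written as plain C++ rather than material HLSL (names and scaling are my own, purely illustrative):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Sketch of a typical foliage-style World Position Offset: sway each
// vertex sideways with a sine wave, scaled by height so the base of
// the plant (low z) stays put. It's per-vertex, per-frame work that
// Nanite's instancing/cluster pipeline currently can't evaluate.
Vec3 WindOffset(const Vec3& worldPos, float time,
                float strength, float frequency)
{
    float sway = std::sin(time * frequency + worldPos.x * 0.1f);
    float heightMask = worldPos.z; // more motion toward the tip
    return Vec3{ sway * strength * heightMask, 0.0f, 0.0f };
}
```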
The removal of tessellation is a major oversight, no disrespect intended. While the intent is to build all data directly into the mesh, the blend/detail phase now has to be done externally, assembled into one mesh set, and then imported into Unreal.
Yes, in practice that looks better, but from a production standpoint it's ridiculous and time consuming. It puts a lot of weight and risk on the front end. For instance, if I want to blend a brick wall over a large area with brick variation and damage, that's now impossible with Nanite inside Unreal; I'll need to blend that data in something like Blender, Houdini, or ZBrush and bring it into Unreal.
Yeah, that's… "cool" …but really bad for production times. Ideally we need to be able to blend/tessellate and then convert that data to Nanite. I'm surprised Epic didn't anticipate this, unless there's a workaround we don't know about yet?
If not, that's a bad call, but I'm sure they have something up their sleeve in the coming weeks/months.
Fingers crossed
I’m not sure about tessellation (again, there’s no need to tessellate on the GPU if you have that density of geometry already), but I’m sure displacement will be back at some point. I know the goal is to get skinned meshes working with Nanite as well as vertex animation for foliage, so my assumption is world space displacement in general will be back. It would be weird if it weren’t, right?
What I haven't heard anything about yet is per-instance vertex painting, which would typically be pretty critical for your example of blending damage over a brick wall at run time. I can only assume that's going to be implemented, since it's so central to texture blending workflows, but honestly that would be a good question to put to Epic for their upcoming live streams.
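To make the brick-wall case concrete: per-instance vertex painting usually just feeds a painted weight into a lerp between material layers. A hypothetical per-pixel sketch (plain C++, all names mine):

```cpp
struct Color { float r, g, b; };

// Blend a damaged brick layer over a clean one using a painted
// per-vertex weight (interpolated down to the pixel). With
// per-instance vertex colors, every wall instance can carry its own
// damage pattern without a unique mesh; without them, each variation
// needs its own baked asset.
Color BlendLayers(const Color& cleanBrick, const Color& damagedBrick,
                  float paintedWeight)
{
    float w = paintedWeight < 0.0f ? 0.0f
            : paintedWeight > 1.0f ? 1.0f
            : paintedWeight;
    return Color{
        cleanBrick.r + (damagedBrick.r - cleanBrick.r) * w,
        cleanBrick.g + (damagedBrick.g - cleanBrick.g) * w,
        cleanBrick.b + (damagedBrick.b - cleanBrick.b) * w,
    };
}
```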
Agreed. Getting things done fast is very important in a production setting.
Nanite fails when it comes to very large, uneven surfaces with no rocks or foliage to hide flaws. Sure, these can be put together from smaller pieces, but placing and planning the appropriate parts to cover large, uneven areas will be cumbersome and time consuming compared to a landscape filled with rocks like the demo. That can be problematic in projects where you can't improvise or rearrange things and have to stick to the specifics, e.g. a large uneven sand surface without any rocks or foliage.
I agree that surfaces blended using displacement should be able to be split into smaller pieces and converted to Nanite. Those pieces would all be unique for such a large surface, though.
I hope it does come back. There is no faster way to create terrain than throwing a nice displacement on a plane and adding Quixel meshes as needed. And some terrain types, like desert dunes or rolling hills, can't be done just by patching meshes together. Placing tens of thousands of meshes and moving, rotating, and scaling them individually into a naturalistic terrain, while powerful, seems like madness to me, and wouldn't be appropriate for every project. I'm from a film background, so not having displacement as an option in ANY 3D package seems unthinkable.
Ultimately, I'm hoping for a workflow/toolset that allows painting with Megascans the way you would with alphas in ZBrush. If the tools are there, and the engine can handle it on moderate hardware, I'm all for this approach.
Yeah, I agree. I'm trying to move some architecture stuff from UE4 to UE5 to take advantage of Lumen. My building used a lot of displacement textures to add detail, and now my stone facades look flat and awful. The Quixel displacement textures were awesome for adding detail to simple models, and that's much more efficient for my workflow than trying to sculpt every brick or tile crack into my model. I hope they solve this soon.
I do find the lack of tessellation support somewhat disappointing. I am aware that Nanite can replace the need for tessellation in most cases. However tessellation helped to reduce the asset sizes, especially if the original asset is relatively planar. I used this a lot for brick walls and tiled floors, and I could easily change the final output without changing the static mesh asset. But with Nanite you would need to keep a high-poly original asset, and from what I have seen so far, a separate mesh needs to be kept for each surface pattern (I mean you have to have one asset for a brick wall and another for a brick wall with a different type of brick).
Also, dynamic runtime polygon generation seems to be impossible with Nanite, at least for now.
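To put rough numbers on the size argument (back-of-the-envelope assumptions of my own, not measured data):

```cpp
#include <cstdio>

int main()
{
    // A 2K x 2K, 8-bit, single-channel displacement map, uncompressed.
    // One map can drive any number of tessellated surfaces.
    double mapBytes = 2048.0 * 2048.0 * 1.0;

    // A baked wall dense enough to match: say 2 million triangles at
    // roughly 16 bytes of vertex/index data per triangle (a ballpark
    // figure, before Nanite's own on-disk compression).
    double meshBytes = 2'000'000.0 * 16.0;

    std::printf("displacement map: %.1f MB\n", mapBytes / (1024 * 1024));
    std::printf("baked mesh:       %.1f MB\n", meshBytes / (1024 * 1024));

    // And every brick variation needs its own baked mesh, while the
    // tessellation workflow only needed a different map.
    return 0;
}
```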
I personally love Nanite, but I have to +1 the necessity of texture-based tessellation, as it does bring real advantages for dynamic/animated displacement. (Unless I'm missing something?)
However tessellation helped to reduce the asset sizes, especially if the original asset is relatively planar.
Yeah, I think this is one of the two really big costs, the other being letting level designers build up environments using in-engine modelling and trim sheets. At the end of the day, if Nanite can handle millions of tris, you don't need to tessellate arbitrary assets on the card; you can just tessellate in your DCC. But what you lose is the opportunity for better quality at lower storage sizes, more authoring flexibility, and less data going down the GPU bus. And since UE4's in-engine modelling was just reaching a really good spot, it's a shame to see the engine team throw it away; visible seams with high-poly assets will now make it unusable for all but maybe archvis.