UVs and Nanite

UE5 looks fantastic so far and it’s amazing to have the opportunity to test it out; can’t thank the team enough for that.

I just have one question:

With Nanite allowing for high-poly meshes to be imported into UE5, does that mean, presumably, that UVs should now be unwrapped on high-poly models?

Normally, a low poly model would get an edit (deleting unnecessary faces, cleaning up edges, etc.) and then the UV unwrap.

But I assume that with Nanite the workflow should really change to editing and unwrapping a high-poly model, importing it, and letting Nanite handle LODs and such?

Or should we still be unwrapping (and editing) low poly models?

This is what I don’t understand. Even for film, you’d never unwrap a 4m point sculpt. At best you’d retopo, project, and unwrap the low poly, much like in games; or, as I understand it, more typically you’d work from an unwrapped base mesh and sculpt details on top of that. A lot of those details would still be baked into maps, and for film quality that would involve a substantial number of maps, including a lot of displacement maps and shader work that still isn’t available in a real-time pipeline with a standard deferred shader. The pipeline for film assets is not a convenience improvement over games—it’s significantly more costly and difficult.

The copy Epic has been using for the past 18 months or so has been “Directly import film-quality source art comprised of millions of polygons—anything from ZBrush sculpts to photogrammetry scans…” but I have no idea what this means. How would I PBR-texture a ZBrush sculpt? At that density you could basically skip UVs entirely and vertex paint all your materials, but as far as I know no tools exist that can do this (ZBrush itself won’t let you paint more than RGB maps). Aside from that, you’re basically using a traditional approach to unwrap and paint models. Maybe you can skip baking and bring in the higher-resolution geometry (by re-projecting UV data from a lower subdivision), but since baking in Marmoset takes maybe an hour, it’s not really a net workflow savings, and it’s hardly “directly import[ing] a ZBrush sculpt.”

Since to my knowledge no one from Epic has ever answered workflow questions relating to what producing Nanite assets in house actually looks like, my suspicion is the answer is less “you can directly import a sculpt” and more “outsource film quality assets at great expense or just don’t use Nanite at all,” which means it’s not really a viable approach for anything outside of other industries, like virtual production, or demos using Quixel assets. I dunno. I’d really like to be proven wrong on this if anyone from Epic can weigh in more positively.


Surely this should have been addressed by now: just what is the workflow, using existing tools, to UV and texture a 1–2 million polygon mesh?

Given the lack of information, I can only guess that we are to use Megascans assets?


Fwiw while it took a few hours to figure out how to get a ZBrush sculpt painted and into UE5, it did work, and quite honestly I’m really impressed.

I took a sphere, dynameshed, and sculpted some nonsense until I had some gross topology and around 4m points. At that point I duplicated, ZRemeshed down to around 50k points, exported that, did a quick unwrap, reimported with UVs, subdivided back to around 2m points to get interpolated UVs, projected details from the original sculpt, then exported.
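A quick way to see why the unwrap-at-low-subdivision trick above works: subdivision interpolates UVs, so the dense mesh inherits the low-poly unwrap for free. A minimal plain-Python sketch of 1-to-4 midpoint subdivision on a UV triangle (illustrative only, not ZBrush scripting):

```python
# Illustrative sketch (plain Python, not ZBrush code): midpoint subdivision of
# a UV-mapped triangle. Each level quadruples the face count while every new
# vertex just averages the UVs of its edge endpoints, which is why an unwrap
# done at ~50k points survives subdividing back up to ~2m points.

def midpoint(a, b):
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

def subdivide(faces):
    """One level of 1-to-4 subdivision on a list of UV triangles."""
    out = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

faces = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]  # one unwrapped triangle
for _ in range(3):
    faces = subdivide(faces)
print(len(faces))  # 4**3 = 64 faces, all with UVs inside the original island
```

The same principle is what makes projecting the sculpt details back on afterwards safe: the topology densifies, but the UV layout never changes.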

Surprisingly, Mixer did not choke on a 2m point mesh, and actually ran just fine. I painted some nonsense using Megascans surfaces, and imported my roughly 6m tri .obj as a Nanite mesh along with its exported maps from Mixer, at which point my, uh, let me check my materials here—very high poly “leather and moss lump” worked perfectly in-engine with absolutely no need to bake.

If you’re not using ZBrush you might have less luck here—I know Quad Remesher in Blender does a really good job in general producing unwrappable topology from a higher-density mesh, but I’m not sure if Blender has anything like ZBrush’s subtool projection to get your details back at the end.

However, I can confirm you can get a ZBrush sculpt into UE5 Nanite with a minimum of messing around, and it honestly does work great. The level of detail on the imported mesh was sufficient I didn’t really need to worry about manual retopology as long as the ZRemesh was good enough to unwrap, and I bet you could actually do it just by painting some polygroups and using UV Master—probably 30 minutes of work tops.


From reading the documentation, you still want to stack UVs, but it can now support multiple UV channels. It’s not carte blanche to start using poor modeling behaviors. You just no longer have to bake details into normal maps, your raw poly count isn’t as important, and draw calls are not nearly as much of a factor. I read the whole UE5 documentation site today; there is a lot of info there.


Forgot to link it, but this section in particular will answer most of your questions.


I mean, with the caveat that I have less than zero interest in starting an argument on here, I can give you my perspective as a former AAA senior rendering engineer turned artist (to answer that one: I like doing it more)—artists should have carte blanche to use poor modeling behaviours, because artists should be making artistic decisions rather than spending time and money (so much money!) on work a computer could do.

The holy grail of pipelines for static geo, in my opinion, is you sculpt (or, sure, hard surface model in Maya or whatever), you paint, and it goes in engine. The more technical requirements you add on top of that—it’s just waste. Topology is unimportant on static geometry because it doesn’t need to deform; it exists only to provide good UVs. But UVs are technically meaningless on a sculpt with a poly count in the millions because the vertex density is roughly equal to the texel density, which means neither UV nor topology (to the extent that it’s at least relatively uniform) is such a big deal. The fact that we care at all comes down to limitations in how renderers and GPUs are designed.
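To put rough numbers on that vertex-density point (a back-of-the-envelope sketch with assumed counts, not figures from any engine docs): a multi-million-point sculpt carries about as many surface samples as the texture maps it would otherwise be painted with.

```python
# Back-of-the-envelope check: a dense sculpt has roughly one vertex per texel
# of the map you'd bake for it, so per-vertex data could in principle stand
# in for UV-mapped textures. The 4M figure is just an example sculpt density.

def texels(resolution):
    """Texel count of a square texture at the given edge resolution."""
    return resolution * resolution

sculpt_points = 4_000_000              # e.g. a dense dynamesh sculpt
print(texels(2048))                    # 4,194,304 texels in one 2K map
print(texels(4096))                    # 16,777,216 texels in one 4K map
print(sculpt_points / texels(2048))    # ~0.95: about one vertex per 2K texel
```

In other words, at that density a vertex-painted sculpt and a 2K-textured low poly encode a comparable amount of surface detail, which is why UVs start to feel like a renderer-imposed formality rather than an artistic necessity.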

With so little info announced about how Nanite was going to work—I mean I’d be lying if I said it wasn’t my hope that we were just there already. We’re not, although if it’s technically feasible at any point it’d be just absolutely massive for pipelines. That said, the industry absolutely should be moving towards automated pipelines where possible (skinned geometry being, unfortunately, a completely different can of worms), and honestly I’m impressed by how Nanite is shaping up already in terms of fitting well into an automated approach and reducing labour times. I honestly think this could enable some very efficient workflows and just save a lot of time.

Again, just my opinion!


You use the unwrap of the model you import.
So if you import a high-poly mesh, you use that unwrap. It’s pretty straightforward…


Thanks, didn’t realize the Docs were up.

I think the confusing thing though is, as stated in the Doc: “Nanite should generally be enabled wherever possible. Any Static Mesh that has it enabled will typically render faster, and take up less memory and disk space.”

While sure, I should still be making appropriate high poly meshes that aren’t going to have clipping faces and such, is the whole idea, at this point, to just strictly use meshes with millions of polys?

A large part of Nanite is that it doesn’t need baked Normal maps, which means that the whole High Poly → Low Poly → Bake workflow is cut down to just making a High Poly and letting Nanite do the rest of the work (within reason; of course I still expect to have to make a clean mesh, delete unnecessary faces, etc.)?

Low poly, at this point, is just going to be for things that wouldn’t have a high poly version anyway? Like if I had some skyscraper in the distance that the player would never get to. That could be 5k polys and I’d never create a detailed mesh for that to begin with because no one will ever get close. So that can be kept as a low poly mesh, but everything in front of the player can now directly be a high poly one.

I think it’s just the shock of what Nanite is proposing that’s making it a bit hard to grasp. For the past several years it’s been necessary to use low poly with baked detail, so to be told that the engine can handle high poly models directly is odd. It feels like there should be some catch.

Perhaps I’m just being really paranoid though.


I tested out some meshes as well. I had a 1.5 mil cliff that I sculpted in Mudbox, exported it to 3ds Max, unwrapped UVs, then sent it to Mixer. After the materials were done, I put it into UE5, enabled Nanite on it, and placed it into a scene. Then I placed around 2000 of them around and there wasn’t even a minor slowdown as I was walking around using a Third Person Character.

So with that rough 5 minute test that didn’t have any manual optimization or even that clean of a mesh, there’s definitely a pretty good result.

I think the process that took the longest was the unwrapping and the actual Import.

I get what you mean though, when you were talking about ZBrush. That was something I was really trying to grasp too. What exactly should my Nanite workflow be?

But I guess, for the most part, I can just sculpt a mesh, clean it, and then push it to Mixer or another such program and export to UE5.


Yeah I just wanted to verify since this isn’t a typical workflow.

The idea of unwrapping and creating UVs for 3-million-poly meshes just seemed like such an uncomfortable thing to do since it’s not the norm.

I mean I’m amazed that we’re at a point where we can directly use high poly meshes. I just mean that I wouldn’t have expected it would actually be that straightforward.


From what I’ve gathered, it really is just unwrapping the high poly and using that as your base mesh (while still maintaining proper geometry).

I was in the same position as you though; I wanted to know what to do with already created high poly meshes.

I had many that I wouldn’t deem “game ready” so to speak.

But with some minor editing, just like I would a low poly, they could be used.

The most difficult part will probably be actually working with these models in terms of what polygons to keep, which to delete, etc.

@ I’m surprised Max had no issues unwrapping a 1.5m point sculpt! Impressed though; that’s a good sign. I found retopologizing to a lower poly count, subdividing, and then projecting to work really well, because I don’t even want to think about marking seams on such a dense mesh, and ZBrush will interpolate UVs on subdivide, so I got the same mesh at the same density but was able to unwrap at a lower subdivision instead, which made the job faster.

Every artist I’ve talked to about this has had the same reaction, though—“what am I supposed to do with this?”—but it seems like at the end of the day it’s not that difficult, which is good. I’m sure Epic will also do more videos on new pipelines and such in the coming weeks, as that seems to be their MO.


@SeveralBees Honestly had no idea Max would Unwrap that type of sculpt either. I’ve been trying it with a lot of different high poly meshes and each one has been fine. I had one that was 5 or 6 mil and it did that too.

Your method sounds more practical and worthwhile though. Even if Max can handle it, I really don’t think pushing it to unwrap these types of meshes is a good pursuit, haha.

Yeah I had the same question; how to actually use Nanite. I think it was just an issue of the whole situation being so far from what the norm has been for the past few years that this seemed crazy to deal with!

Well the Hollywood CG guys had to use some sort of modelling tool all this time :yum:

So, yeah, the idea is just to throw millions of polys, at least at static models. Who knows how or if characters and foliage will change. But, even for that “distant skyscraper” scenario. Go ahead, throw 5k polys or a million, it should work fine.

It’s what I’ve been at the last few days, working out a feasible way to unwrap 1 million+ meshes. Using Houdini and a tool developed by (UVs Last Hope) I’ve managed to get it down to around 6 minutes for an unwrapped mesh, for now using UDIMs as well, and so far it seems to be coming into UE5 fine. The next test I’ll be doing is vertex baking, as a post above already mentioned; we now have the vertex density to exploit.
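For anyone else going the UDIM route on these dense meshes, the tile-numbering convention itself is simple. A small Python sketch of the standard mapping from UV space to tile numbers (the function name here is mine, not from Houdini or the tool mentioned above):

```python
# Standard UDIM convention: tile 1001 covers UV square (0-1, 0-1); the tile
# number increments by 1 per unit in U (up to 10 tiles across) and by 10 per
# unit in V. Spreading a million-point mesh's islands across tiles this way
# keeps per-tile texture resolution manageable.

def udim_tile(u, v):
    """Return the UDIM tile number containing UV coordinate (u, v)."""
    return 1001 + int(u) + 10 * int(v)

print(udim_tile(0.5, 0.5))  # 1001: the first tile
print(udim_tile(1.5, 0.5))  # 1002: one tile to the right
print(udim_tile(0.5, 1.5))  # 1011: first tile of the second row
```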