How do texturing and animating work if Unreal 5 supports ZBrush sculpts?

This is something I don’t understand. So Unreal 5 has basically no polygon limit, and you can just take your uber-sculpts or models straight into Unreal 5? But then how do texturing and rigging work? Don’t other applications still need a lower-poly model and UV maps for things like texturing and rigging/animating? If the idea is to just do all the texturing inside of ZBrush and the rigging/animating inside of the engine itself, then that doesn’t seem very ideal tbh.

That’s probably why they started investing in Blender xD. We’re going to have to use an Epic version of Blender to work with the Unreal 5 engine.

!!!Quixel Mixer 2021!!!

I’m just speculating, but I think they would need to rewrite their animation module to support deformations of models that high-res, and there was no mention of that in the demo video. I suspect character models may still need to be the same resolution as they are now, but static meshes will support unlimited resolution. It would be awesome if animated character models could be unlimited too, though; maybe they can figure that out by next year.

As for texturing, I think you will still need to UV and texture your ZBrush models rather than exporting polypainted assets, as they mentioned something about assets with up to 8K textures; maybe you can just have an unlimited number of textures? I think the point about ZBrush models was more that instead of creating low-res models and baking normals, you could just use the original high-res versions.
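For anyone fuzzy on what “baking normals” actually produces: here’s a minimal sketch, in plain C++ with nothing Unreal- or ZBrush-specific, of the encode/decode step a normal bake boils down to. The hard part of a real baker is finding, for each low-poly texel, the matching high-res surface normal; Nanite’s pitch is that you skip the whole step and keep the high-res geometry itself.

```cpp
#include <cstdint>
#include <cmath>

// A normal map just stores a unit normal per texel, remapped from
// [-1, 1] into the [0, 255] range of an 8-bit RGB texture.
struct Vec3 { float x, y, z; };
struct Texel { uint8_t r, g, b; };

// Encode a (normalized) high-res surface normal into a texel.
Texel EncodeNormal(const Vec3& n) {
    auto toByte = [](float c) {
        return static_cast<uint8_t>(std::round((c * 0.5f + 0.5f) * 255.0f));
    };
    return { toByte(n.x), toByte(n.y), toByte(n.z) };
}

// Decode at shading time: the low-poly mesh is then lit as if it
// still had the high-res surface detail.
Vec3 DecodeNormal(const Texel& t) {
    auto toFloat = [](uint8_t c) { return (c / 255.0f) * 2.0f - 1.0f; };
    return { toFloat(t.r), toFloat(t.g), toFloat(t.b) };
}
```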

You’re probably still going to want to carefully retopologize and unwrap/rig your character models the old way. It’s still done in the movie industry, even though polycounts don’t really matter in pre-rendered stuff.

The high-poly sculpts/non-optimized photogrammetry assets shown are mainly static environmental or non-animated objects, although you can probably get away with pretty high-poly character models now, with a bunch of hair planes etc.

I’m still curious how the high-poly ZBrush asset they showed was textured. They probably unwrapped it somehow rather than just vertex painting it, as it seems to have various maps applied, and vertex paint, as far as I know, only has a single layer.
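To make that single-layer point concrete, here’s a hypothetical vertex layout sketch (the struct names are mine, not Unreal’s). Polypaint/vertex paint stores exactly one RGBA value per vertex, while a UV unwrap lets the same vertex sample as many texture maps as the material wants:

```cpp
#include <cstdint>

struct Float2 { float u, v; };
struct Float3 { float x, y, z; };

// Vertex painting: one RGBA value baked into each vertex.
// Whatever you store here (e.g. polypaint color) is the single
// "layer" you get.
struct PaintedVertex {
    Float3  position;
    Float3  normal;
    uint8_t color[4];   // the one and only paint layer
};

// UV workflow: the vertex only stores coordinates into texture space;
// albedo, roughness, metallic, normal maps etc. are separate textures
// all sampled with the same UVs, i.e. arbitrarily many layers.
struct UnwrappedVertex {
    Float3 position;
    Float3 normal;
    Float2 uv;          // indexes into any number of maps
};
```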

Tbh, Nanite sounds like a gimmick if there is no way for third-party tools to support it. Perhaps in the future, once/if third-party software starts supporting Nanite meshes, the technology could be useful. In the meantime I can’t see myself using it for anything outside of niche situations.

I think you are looking at it from a very narrow perspective. Yes, it may not be feasible to do 5-million-polygon texturing in Substance Painter, but who says it has to be 5 mil? Nanite is just a tool in the toolbox to make your life a lot easier (from the look of it). There are still a lot of unknowns about the limitations, but this will enable not just games but other media too to tap into the Unreal Engine workflow in ways that weren’t possible before. I’ve worked in offline rendering professionally for 9 years and used UE4 for 3 of those years on private projects, and I’ve been pushing for a shift over to UE4 since RTX became a thing; Nanite and Lumen are going to help make that transition a lot easier.

Virtual production is also going to benefit from this. You can pretty easily run out of VRAM in UE4 if you’re not careful. Even at a locked frame rate at a filmic 24 FPS and with a lot of overhead, you have to plan your scene as it currently is. It isn’t just a question of how many polys you can push on a single model.
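To put rough numbers on that: a back-of-the-envelope VRAM estimate, assuming uncompressed RGBA8 and a ~33% mip-chain overhead (my assumptions; block compression would cut these figures by 4–8x):

```cpp
#include <cstdio>

// Approximate VRAM for an uncompressed RGBA8 texture with a full
// mip chain. The mips add roughly one third on top of the base level.
double TextureMegabytes(int width, int height) {
    const double bytesPerTexel = 4.0;        // RGBA8
    const double mipOverhead   = 4.0 / 3.0;  // full mip chain ~ +33%
    return width * height * bytesPerTexel * mipOverhead / (1024.0 * 1024.0);
}

int main() {
    std::printf("4K map: %.0f MB\n", TextureMegabytes(4096, 4096));
    std::printf("8K map: %.0f MB\n", TextureMegabytes(8192, 8192));
}
```

At ~341 MB for a single uncompressed 8K map, a four-map PBR set is already over a gigabyte, which is why you have to plan the scene.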

Just use render-to-vertex-buffer, no big deal. I think they already do that on appropriate platforms, and the “restrict number of bones per vertex” checkbox is what moves it off the GPU and onto the CPU. I could be wrong, though; I haven’t read that code in detail.
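For context, the deformation in question is standard linear blend skinning. Here’s a minimal CPU-side sketch (simplified types of my own, not UE’s actual API) with a hard bones-per-vertex cap, which is exactly the kind of limit such a checkbox would control:

```cpp
struct Vec3 { float x, y, z; };

// A 3x4 bone matrix: rotation/scale plus translation.
struct BoneTransform {
    float m[3][4];
    Vec3 Apply(const Vec3& p) const {
        return {
            m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
            m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
            m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3],
        };
    }
};

constexpr int kMaxBonesPerVertex = 4;  // the "restricted" case

struct SkinnedVertex {
    Vec3  restPosition;
    int   boneIndex[kMaxBonesPerVertex];
    float boneWeight[kMaxBonesPerVertex];  // weights sum to 1
};

// Linear blend skinning: the deformed position is the weighted sum
// of the vertex transformed by each influencing bone. Doing this for
// millions of vertices per frame is why it normally lives on the GPU.
Vec3 SkinVertex(const SkinnedVertex& v, const BoneTransform* bones) {
    Vec3 out{0, 0, 0};
    for (int i = 0; i < kMaxBonesPerVertex; ++i) {
        const Vec3 p = bones[v.boneIndex[i]].Apply(v.restPosition);
        const float w = v.boneWeight[i];
        out.x += w * p.x; out.y += w * p.y; out.z += w * p.z;
    }
    return out;
}
```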

That being said, perhaps they just re-bake to normal/displacement maps on simplified geometry in real time. As long as there are 5 or fewer pixels per triangle (and ideally, not more than 2 triangles per pixel) it’ll look good enough! It’s actually a problem if you have TOO many triangles for the view size of the object.
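Putting rough numbers on that density window (assuming, simplistically, that triangles spread evenly over the object’s screen footprint; the 5-pixels-per-triangle and 2-triangles-per-pixel bounds are the ones quoted above):

```cpp
#include <cstdio>

// Given an object's on-screen footprint in pixels, estimate the
// triangle counts bracketing the "good enough" range:
//   lower bound: ~1 triangle per 5 pixels (coarser shows faceting)
//   upper bound: ~2 triangles per pixel  (finer is wasted / aliases)
void TriangleBudget(double screenPixels) {
    std::printf("%.0f px footprint: %.0f - %.0f triangles\n",
                screenPixels, screenPixels / 5.0, screenPixels * 2.0);
}

int main() {
    TriangleBudget(200.0 * 200.0);    // small prop in the distance
    TriangleBudget(1000.0 * 1000.0);  // hero object filling the frame
}
```

By this estimate, a 5 mil tri sculpt only earns its keep when the object fills most of the frame.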

Regarding running out of VRAM, that’s what Quadros are for. But, of course, you can still run out. Then there are the DGX boxes, but sadly, they don’t even have a display output; they’re just for running gradient backpropagation for neural-net training :-/

This is all future tech, though. UE5 might support all this, but so does RTX: you don’t need to use it until the hardware has caught up.

Quadros are a thing, but very expensive. Luckily they added tile-rendering support, yay! Hopefully the NVIDIA 3000 series will bring us a lot more VRAM for less money.

I would be super interested in knowing how teams already working towards a UE5 project are handling 3D modelling workflows right now.

We do it this way:

1. Split objects into small parts. For example, a medieval table consists of many wooden planks.
2. Model each plank with ~2 mil tris in ZBrush. Give it a basic unwrap with UV Master.
3. Decimate it in ZBrush (keeping UVs) to around 100k tris. We have a slightly stylized art style, so 100k tris is enough to keep all the details.
4. Build the table in a separate ZBrush scene out of the planks: 2–5 mil tris, depending on model complexity.
5. Do the UV layout in Maya (very slow) or RizomUV (faster and better).
6. Texture the model in Substance Painter; on a good PC it can handle up to ~5 mil tris well. (A quick sanity check of these numbers is sketched below.)
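The promised sanity check, just trivial arithmetic on the figures above (my own calculation, nothing tool-specific):

```cpp
#include <cstdio>

int main() {
    const double sculptTris    = 2'000'000;  // per-plank ZBrush sculpt
    const double decimatedTris =   100'000;  // after decimation, UVs kept

    // Decimation keeps only 5% of the sculpt's triangles.
    std::printf("kept: %.1f%%\n", decimatedTris / sculptTris * 100.0);

    // A 2-5 mil tri table therefore assembles from 20-50 decimated
    // parts, which stays inside Substance Painter's ~5 mil comfort zone.
    std::printf("parts for 2 mil: %.0f\n", 2'000'000 / decimatedTris);
    std::printf("parts for 5 mil: %.0f\n", 5'000'000 / decimatedTris);
}
```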

How would you further improve this workflow?
I mean, we can’t be the only ones already working towards a UE5 project in UE4, right? :cool:

Obviously this only applies to static meshes; skeletal meshes still have the usual polycount limits in UE5, as Nanite doesn’t support them.

Yeah, well, there are still people who want to be creative…