Improved UI?

Is there any plan to improve the UI and workflow for these new tools? It’s mostly just a mass of buttons all stacked up. It looks like a programmer made it, rather than someone with UI/UX experience. I’m glad Unreal is getting these features, but I’d like to see the interface and workflow compete with other professional modeling software.

Improving the UX and workflow is something we are absolutely working on. Are there any specific workflows that are of particular concern?

Good to hear! The first workflow thing that seemed uncomfortable was how vertex/edge/face selection is tool-specific and slightly different in different tools. I would expect there to be some top-level selection UI (with hotkeys, etc. that I can learn), and then any model operation I choose would be applied to the selection. Right now it looks like a bunch of the tools can only be applied to the whole model?

Thanks Dalai,
The selection workflow is definitely on the list :) If you haven’t had a chance to read the blog post on the modeling tools, please check it out. I talk about selection a bit more in that post.

Yes, most tools can currently only be applied to the entire model, and we do not have an editor-wide vertex/edge/face selection. The reason UE can do in realtime what standard DCC tools can only do offline, e.g. render millions of objects comprising hundreds of millions (sometimes billions!) of effective triangles in the viewport at 30-60fps, is that the objects are stored in extremely optimized/compressed forms that cannot be edited. So we have to ‘unpack’ a mesh to get access to its vertices/edges/triangles, and it is not plausible to do that unpacking for every mesh in the scene (in a large modern game world this could take hours and require tens-to-hundreds of GB of memory).

The result is that we currently only unpack a static mesh when you explicitly start a Tool on it, which is why selection only exists “inside” a Tool. As posted above, we are looking into ways to improve the UX here, but we do have this fundamental huge-data problem to contend with.
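To make the ‘unpack’ step concrete, here is a rough editor-only sketch against the public UStaticMesh API (untested, function name mine) - loading the FMeshDescription is the deferred, potentially expensive operation:

```cpp
// Rough editor-only sketch (UE5, untested) of the "unpack" step described
// above. Loading the editable FMeshDescription is deferred until needed.
#include "CoreMinimal.h"
#include "Engine/StaticMesh.h"
#include "MeshDescription.h"
#include "StaticMeshAttributes.h"

void UnpackForEditing(UStaticMesh* StaticMesh)
{
	// Loads/derives the editable source mesh for LOD 0 on demand; doing this
	// for every mesh in a large scene is the cost described above.
	FMeshDescription* SourceMesh = StaticMesh->GetMeshDescription(0);
	if (SourceMesh == nullptr)
	{
		return; // no editable source data available (e.g. cooked content)
	}

	// Only now are vertices individually addressable, e.g.:
	FStaticMeshAttributes Attributes(*SourceMesh);
	TVertexAttributesRef<FVector3f> Positions = Attributes.GetVertexPositions();
	FBox3f Bounds(ForceInit);
	for (const FVertexID VertexID : SourceMesh->Vertices().GetElementIDs())
	{
		Bounds += Positions[VertexID]; // an edit tool would read/write here
	}
	UE_LOG(LogTemp, Log, TEXT("Unpacked %d vertices, bounds %s"),
		SourceMesh->Vertices().Num(), *Bounds.GetSize().ToString());
}
```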

@rmsEG Is this a Nanite restriction or what? Very often you’re not working with millions of triangles.
In UE4 I use the third-party MeshTool plugin on the regular and it manages these operations just fine!

Here’s a quick video of me using MeshTool to make a simple set of walls - currently UE5’s modeling tools would make this a lot more arduous (the actual hotspotting UV stuff I’m demonstrating is also incredibly valuable functionality). I routinely jump into a complex scene and shift some verts around on a particular mesh, or whatever. CubeGrid is an amazing start, but I think you really do need this basic-level modeling functionality as well, even if it’s gated to meshes below a certain complexity.

Hotspot Texturing in UE4: MeshTool preview! - YouTube

The approach this UI takes at the moment - one tool at a time, with switching time between them - adds a lot of task switching and cognitive overhead, and avoiding that overhead is, imo, most of the reason to have in-editor modeling tools in the first place. It also precludes a lot of really nice tooling - e.g. marquee-selecting a bunch of verts, converting that selection to edges with one click, bridging those edges with a keypress…


What I described above about ‘unpacking meshes’ is not related to Nanite specifically; it is how UE4 has always worked. When you import a mesh, we create a “source mesh” (i.e. the one you would want to edit), then convert it to the “built data” format (i.e. mainly just what is needed for rendering & physics) and store the built data version in the DDC. The next time you open your project, the built data is loaded from the DDC, and the source mesh stays on disk until it is needed (which is basically never, unless you want to open the Static Mesh Editor, use a modeling tool, or use something like MeshTool).

Yes, loading the source mesh is fast for a single low-poly object, but it takes increasingly long as things get bigger. In addition, after any edit the built data has to be updated, because UE cannot render from the source mesh; it only renders from the built data. If the mesh is very low-poly, updating the built data is pretty fast. But even on a 20k-triangle mesh (still tiny in my book), updating the built data for every edit is too slow. This is what MeshTool does; you can tell because if you do a vertex edit on a 20k-tri mesh there is a hitch every time you let go of the mouse (and on a million-tri mesh that hitch becomes 5-10s), which will also happen on every undo/redo (so, e.g., undoing a sequence of 20-30 edits takes…a while). This is also likely why it only shows a wireframe, rather than a realtime preview of the mesh changing, during an interactive edit.
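To make the hitch concrete, here is a hedged editor-only sketch (untested, function name and edit illustrative) of roughly what committing a single vertex edit involves - the edit itself is cheap, but the rebuild afterwards is not:

```cpp
// Hedged sketch (UE5 editor-only, untested): committing one edit means
// writing it back to the source mesh, then regenerating the built data.
#include "CoreMinimal.h"
#include "Engine/StaticMesh.h"
#include "MeshDescription.h"
#include "StaticMeshAttributes.h"

void CommitVertexNudge(UStaticMesh* StaticMesh, FVertexID VertexID, FVector3f Offset)
{
	FMeshDescription* SourceMesh = StaticMesh->GetMeshDescription(0);
	if (SourceMesh == nullptr || !SourceMesh->IsVertexValid(VertexID))
	{
		return;
	}

	StaticMesh->Modify(); // record in the undo/redo transaction

	TVertexAttributesRef<FVector3f> Positions =
		FStaticMeshAttributes(*SourceMesh).GetVertexPositions();
	Positions[VertexID] += Offset;        // the actual (cheap) edit
	StaticMesh->CommitMeshDescription(0); // write back to the source mesh

	// The expensive part: PostEditChange() triggers a rebuild of the built
	// data (re-cached in the DDC). Cost grows with triangle count - hence
	// the per-release hitch on a ~20k-tri mesh, the 5-10s on a million-tri
	// mesh, and the same cost repeated on every undo/redo.
	StaticMesh->PostEditChange();
}
```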

So, essentially, there is no way to scale that approach up to larger (static) meshes, and approaches that don’t scale up to even still-tiny 20k-tri meshes are not an option for us. Even without Nanite, in virtual production or enterprise usage of UE, meshes with tens to hundreds of thousands of triangles are the norm (and even in modern AAA games, 10k triangles for a mesh is not considered large anymore). And when someone is making a 20k-triangle AAA game mesh, they actually need to model a millions-of-tris mesh and then simplify & bake the details. Modeling Mode has to support all these use cases (and many more).

The approach MeshTool takes is not actually that different from our PolyEdit or TriEdit - it’s a single “Tool” that has all the selection and editing operations combined inside it (obviously MeshTool has more operations and a custom UI panel, which clearly has benefits). The fundamental UX difference (IMO) is that it automatically runs this ‘Tool’ on whatever you select. We /could/ do this in Modeling Mode, i.e. make PolyEdit a sort of “default” Uber-Tool that is always active. However, Modeling Mode also has ~80 other Tools (in UE5) that do other things, and most of them do not involve tri/vtx/edge selection at all, so it doesn’t really make sense in that context to make poly-editing the fundamental behavior of the entire mode.

(IMO, making low-level vert/edge/face selection the foundational basis for all modeling is a mistake that most DCCs made 20+ years ago and are now stuck with. 3D sculpting tools escaped from this paradigm, and we saw a huge leap in digital art as a result.)

All that said, everyone working on Modeling Mode 100% agrees that the current UX needs major improvement for basic tri/poly-modeling, and we are actively working on it. In UE5.0 we are introducing DynamicMeshComponents/Actors, which give us much more flexibility for modeling because they do not rely on the built-data/DDC system I described above. This means we can move a vertex in a DynamicMeshComponent (whether 100 or 100k tris), or undo/redo that edit, without worrying about how long the user will have to wait for the built data to regenerate. DynamicMeshComponent and the underlying UDynamicMesh also provide many other things that will be great for modeling (too much to explain in this already-very-long post). And those things will, in the future, let us build more fluid and efficient interfaces for poly-editing (but sadly not in time for 5.0).
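For the curious, here is a rough sketch (untested, function name mine) of why this path is so much cheaper: with a DynamicMeshComponent, the edit goes straight to the in-memory mesh and there is no built-data rebuild in the loop:

```cpp
// Rough sketch (UE5, untested) of the DynamicMesh path: the edit is applied
// directly to the in-memory FDynamicMesh3, with no built-data/DDC rebuild.
#include "CoreMinimal.h"
#include "Components/DynamicMeshComponent.h"
#include "UDynamicMesh.h"
#include "DynamicMesh/DynamicMesh3.h"

void NudgeVertex(UDynamicMeshComponent* Component, int32 VertexID, const FVector3d& Offset)
{
	Component->GetDynamicMesh()->EditMesh([&](UE::Geometry::FDynamicMesh3& Mesh)
	{
		if (Mesh.IsVertex(VertexID))
		{
			Mesh.SetVertex(VertexID, Mesh.GetVertex(VertexID) + Offset);
		}
	});
	// EditMesh broadcasts a change event and the component updates its render
	// proxy directly - no built-data regeneration or DDC round-trip, which is
	// what keeps interactive edits and undo/redo responsive.
}
```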