UProceduralMeshComponent for skinned/morphed meshes?


    What's the best option for creating a procedurally assembled mesh that includes skinning and morphing information?
    I'm going to download data and generate the actual vertices/normals/tangents/UVs/blend weights in code, but the UProceduralMeshComponent doesn't let me specify blend weights or morph channels.

    #2
    Does anyone have any pointers on how to best set up a procedurally generated skinned-and-morphed mesh?



      #3
      Generally, we do morph target stuff for static meshes by packing the morph positions into extra UV channels. Unfortunately, UProceduralMeshComponent only exposes UV0 for now, but if you are working in code it may not be hard to extend it to more UVs.

      There are quite a few 3ds Max scripts that can export meshes with morph data that you could look at to see how they work. The idea is pretty simple, though: one morph target requires two UV channels, since each UV holds only two components, so you could store three morphs with six UVs.
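      A hedged sketch of that packing step, with plain structs standing in for Unreal's FVector/FVector2D (names are illustrative, not engine API): one morph target's per-vertex position deltas are split across two UV channels, since each UV carries two floats.

```cpp
#include <cstddef>
#include <vector>

// Plain stand-ins for FVector2D / FVector.
struct Float2 { float X, Y; };
struct Float3 { float X, Y, Z; };

// Pack one morph target's per-vertex deltas into two UV channels:
// two UVs carry the three delta components, with one float spare.
void PackMorphDeltaIntoUVs(const std::vector<Float3>& Deltas,
                           std::vector<Float2>& UV1,  // delta.X, delta.Y
                           std::vector<Float2>& UV2)  // delta.Z, spare
{
    UV1.resize(Deltas.size());
    UV2.resize(Deltas.size());
    for (std::size_t i = 0; i < Deltas.size(); ++i) {
        UV1[i] = { Deltas[i].X, Deltas[i].Y };
        UV2[i] = { Deltas[i].Z, 0.0f };
    }
}
```

      In the material, the morphed offset would then be reconstructed as something like MorphWeight * float3(UV1.x, UV1.y, UV2.x) in World Position Offset.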


      Back on Gears 3, I was using vertex paint as morph targets for a very simple case of crates getting bitten and crushed by the Leviathan. In that case, what I did was use 0.5 to represent the neutral pose, and then I could mesh-paint the destruction that I wanted. You could also try something like that, but note that vertex colors have far less precision, which is why it will only really work to store offsets from the base rather than full positions.

      To make 0.5 neutral you just do a bias/scale so the range of vertex colors is -1 to 1: use a ConstantBiasScale node with Bias = -0.5 and Scale = 2.

      Then I would just fill the crates with 0.5 and paint from there. I could sort of animate it by successively increasing the morph parameter with an animation curve, but it could only do one morph.
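      Numerically, the ConstantBiasScale remap above is just this arithmetic (a sketch of the math, not Unreal API):

```cpp
// ConstantBiasScale remap: vertex-color channels in [0, 1] become
// signed offsets in [-1, 1], with 0.5 decoding to the neutral pose.
inline float BiasScaleChannel(float ColorChannel,
                              float Bias = -0.5f, float Scale = 2.0f)
{
    return (ColorChannel + Bias) * Scale;
}
```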
      Last edited by RyanB; 01-12-2017, 09:31 PM.
      Ryan Brucks
      Principal Technical Artist, Epic Games



        #4
        Thanks! That makes a lot of sense. I also like the idea of mesh painting for collapsed static geometry :-)

        The question I have is about how best to integrate existing data I already have into the Unreal renderer.
        For example, do I derive from UMeshComponent and create my own UProceduralSkinAndMorphComponent?
        I have many dozens of morph channels (this is character and facial animation), so I'll probably want an option other than mesh paint or UV channels.
        I can morph on the CPU; does the base UMeshComponent give me enough flexibility to update the mesh data?
        How do I deal with multiple, differently-morphed, instances of the same base mesh data?
        It looks like UMeshComponent isn't set up for that, so is there some other base class for streaming geometry?

        To put more meat on the bones of that question:

        What I have:
        - vertex buffers with position/normal/tangent/bitangent/UV/vertexcolor values
        - vertex buffers with bone-index and bone-weight values per vertex (matrix palette style, not bone-space-vertices style)
        - vertex arrays with affected-vertex-index-and-morph-delta values, one per morph channel
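        For concreteness, the buffers listed above might look like the following plain structs (names and packing are illustrative only, not Unreal's actual vertex buffer layout):

```cpp
#include <cstdint>

// Illustrative CPU-side vertex layout for the data described above.
struct SkinnedVertex {
    float Position[3];
    float Normal[3];
    float Tangent[3];
    float Bitangent[3];
    float UV[2];
    std::uint8_t Color[4];
    std::uint8_t BoneIndex[4];   // matrix-palette indices
    std::uint8_t BoneWeight[4];  // quantized, should sum to 255
};

// One entry of a morph channel: only affected vertices are stored.
struct MorphEntry {
    std::uint32_t VertexIndex;
    float Delta[3];
};

// Sanity check: quantized bone weights should sum to a full weight.
inline bool WeightsAreNormalized(const SkinnedVertex& V)
{
    int Sum = 0;
    for (int i = 0; i < 4; ++i) Sum += V.BoneWeight[i];
    return Sum == 255;
}
```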

        What I'd expect to do:

        1. Some kind of shared container:
        - Create some kind of component that keeps track of the above data on a per-mesh basis -- call it a "base mesh" (This seems similar to the concept of USkeletalMesh perhaps? Or FSkeletalMeshResource?)
        - There's only one of these per base mesh geometry, shared by all instances of this mesh geometry.
        - Does it matter what the base class is for this data?

        2. Some kind of instance container:
        - Keep a reference to the base data
        - Pre-allocate vertex buffer space to use for streaming generated geometry
        - Keeps references to materials
        - Each rendered frame, apply the delta data to the base mesh data, to generate morphed bind-pose mesh data
        - Provide the morphed bind-pose mesh data plus the bone index/weight data to the renderer for this component/instance
        - Provide an animation pose to the renderer for this component/instance
        - The renderer does skinning of the bind-pose data and forwards to the shading bits
        - I assume this should be a subclass of ... USkeletalMeshComponent? UMeshComponent? UProceduralMeshComponent?
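        A minimal sketch of step 2's per-frame CPU morph pass, using plain types in place of Unreal's (all names here are hypothetical):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vec3 { float X, Y, Z; };

// Sparse morph channel entry: only affected vertices carry a delta.
struct MorphDelta {
    std::uint32_t VertexIndex;
    Vec3 Delta;
};

// Apply a weighted sum of morph channels to the bind-pose positions,
// producing the morphed bind pose that would then be skinned.
std::vector<Vec3> ApplyMorphs(
    const std::vector<Vec3>& BasePositions,
    const std::vector<std::vector<MorphDelta>>& Channels,
    const std::vector<float>& Weights)
{
    std::vector<Vec3> Out = BasePositions;
    for (std::size_t c = 0; c < Channels.size(); ++c) {
        const float W = Weights[c];
        if (W == 0.0f) continue;  // skip inactive channels
        for (const MorphDelta& D : Channels[c]) {
            Out[D.VertexIndex].X += W * D.Delta.X;
            Out[D.VertexIndex].Y += W * D.Delta.Y;
            Out[D.VertexIndex].Z += W * D.Delta.Z;
        }
    }
    return Out;
}
```

        The result of this pass is what would get streamed into the pre-allocated vertex buffer each frame, alongside the unchanged bone index/weight data.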

        So, collecting the questions into one place:
        A. Is there some base class I "must" base my CPU-side mesh data component off? (Assuming I don't need the memory accounting info kept by the Unreal asset system -- this data is procedurally generated.) Should I think of this as USkeletalMesh or FSkeletalMeshResource?
        B. Which is the most appropriate base component for the component/instance class?
        C. How do I pre-allocate the necessary vertex buffer space for streaming the geometry each render frame?
        D. What is the best hook to detect that it's time to actually generate and stream the geometry for an instance?
        E. What do I need to keep in mind when re-using whatever shader the existing skinned-mesh components use?
        F. What's the appropriate way to generate the skeleton? Can I make things like a FStaticLODModel out of whole cloth in my own code at runtime?
        Last edited by jwatte; 01-12-2017, 11:28 PM.



          #5
          Honestly you are digging much deeper into this than I have at this point. I have only ever done this via scripts and vertex shaders that were driven entirely via content. I have not messed with the procedural mesh component at all in code yet.

          Another option is to store the morphs using textures. You could in theory fit all your morphs onto one texture. There are also some scripts that can do this, but sounds like you already have a very specific setup in mind.
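          One hypothetical layout for that texture approach: one texel per vertex with RGB holding the position delta, and morph targets stacked vertically. The vertex shader would derive its sample coordinates from the vertex index like this (a sketch under those assumptions, not a documented Unreal convention):

```cpp
#include <cstdint>
#include <utility>

// Map (vertex index, morph index) to a texel in a morph texture where
// morphs are stacked vertically, one texel per vertex, RGB = delta.
std::pair<std::uint32_t, std::uint32_t>
MorphTexelCoord(std::uint32_t VertexIndex, std::uint32_t MorphIndex,
                std::uint32_t NumVertices, std::uint32_t TextureWidth)
{
    // Each morph occupies ceil(NumVertices / TextureWidth) rows, so
    // meshes wider than the texture wrap onto extra rows.
    const std::uint32_t RowsPerMorph =
        (NumVertices + TextureWidth - 1) / TextureWidth;
    const std::uint32_t X = VertexIndex % TextureWidth;
    const std::uint32_t Y =
        MorphIndex * RowsPerMorph + VertexIndex / TextureWidth;
    return { X, Y };
}
```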

          I am assuming you want to use the bone weight setup so that you don't have to actually store a complete copy of every vertex for each morph, to cut down on memory waste? In that case I guess you'd be choosing to trade a bit of CPU performance for memory (since there'd be some cost of applying the bone transformations and updating the current vertex buffer, unless I misunderstand your approach).

          I can send this to James Golding, who wrote a lot of the Procedural Mesh Component code.
          Ryan Brucks
          Principal Technical Artist, Epic Games



            #6
            Yes, in fact I'd like to morph only position and normal (not tangent bases) to save space!

            Storing morphs in vertex textures is a cute idea. Older vertex-shader versions limited how much texture fetching you could do in the vertex stage (I remember trying hard at the time and then giving up!), but modern shader models (SM4/SM5) should be plenty for that. That might let me do everything in shaders and share vertex buffers (without streaming) across all instances.

            Any additional help and pointers would be great! My main challenge is that I know what I want to do at the DX/GL level, but I'm a fair newb at how Unreal C++ wants me to express that.
