About physics assets

How close do the physics asset capsules have to be to a character’s mesh in order to create realistic physics? For example I’d like my character to be able to pick up a soda can (or any object) in real time and have it look accurate. How would I go about doing this?

Also my character is a modular character (meaning he has separate hands, legs, head, hair, clothing, etc) so which part of my modular character should the physics asset be linked to? Do I just need to make 1 physics asset for my character’s main skeletal mesh or do I need to make a separate physics asset for each individual module? There’s not much documentation on this.

I’m trying to make my character interact with objects that are 1:1 replicas of physical objects in the real world such as a box like in Matt Workman’s video:
Next Level MOCAP Tech
I’m just not sure how to set up the physics assets so that the collisions and static meshes react to my character’s body in real time.

How are you attaching the soda can to the hand - via physics constraints? If so, then your physics assets would have to be very close to the mesh dimensions.

You could always disable collision on the can at a certain point and lerp it to an attachment socket.
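A minimal sketch of that approach, assuming a hypothetical UGrabComponent where GrabbedCan (AActor*) and OwnerMesh (USkeletalMeshComponent*) are assumed members, and "hand_r_can" is an assumed hand socket, none of which are engine API:

```cpp
#include "Components/ActorComponent.h"
#include "Components/SkeletalMeshComponent.h"
#include "Components/StaticMeshComponent.h"

void UGrabComponent::BeginGrab(AActor* Can)
{
	GrabbedCan = Can;
	if (UStaticMeshComponent* CanMesh = Can->FindComponentByClass<UStaticMeshComponent>())
	{
		CanMesh->SetSimulatePhysics(false);                           // stop physics driving it
		CanMesh->SetCollisionEnabled(ECollisionEnabled::NoCollision); // so it can't shove the hand around
	}
}

void UGrabComponent::TickComponent(float DeltaTime, ELevelTick TickType,
                                   FActorComponentTickFunction* ThisTickFunction)
{
	Super::TickComponent(DeltaTime, TickType, ThisTickFunction);
	if (!GrabbedCan) { return; }

	// Ease the can toward the hand socket, then hard-attach once it's basically there.
	const FVector Target = OwnerMesh->GetSocketLocation(TEXT("hand_r_can"));
	const FVector NewLoc = FMath::VInterpTo(GrabbedCan->GetActorLocation(), Target, DeltaTime, 15.f);
	GrabbedCan->SetActorLocation(NewLoc);

	if (FVector::DistSquared(NewLoc, Target) < 1.f) // within 1 cm
	{
		GrabbedCan->AttachToComponent(OwnerMesh,
			FAttachmentTransformRules::SnapToTargetNotIncludingScale, TEXT("hand_r_can"));
		GrabbedCan = nullptr; // done lerping
	}
}
```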

I would have a single physics asset on the main body mesh for simplicity, but if accurate hand interaction is super important and you possibly have different sized hand meshes, you’d want to have separate physics assets for each hand mesh.

Well no, the soda can would be a separate static mesh on a table or something. So ideally the character would be able to walk up to the soda can and grab it, throw it, drop it, kick it, etc., but in real time. It wouldn’t actually be attached to the character or a socket.

I just used the soda can as an example but I’d like it to be possible to do this with a variety of objects. Just need to figure out how to get the physics right with the character.

As for the modular parts they all share the same skeletal mesh and were created from it. I just split it into modules so that I can change things later on. The hands are the same size originally as they were before being split into separate parts.

I’m just not sure how exactly to set up the physics asset to do all this. Should the physics asset capsules be as close to the character’s mesh as possible?

If you’re not doing any constraints, then yes, I would imagine the physics asset should be very close to the mesh. You can generate bodies tight to it by making them Multi Convex Hulls (the Advanced settings give you more hull count / max hull verts). You’d also want to ensure you have polished physical materials set up on each interactable - good friction against the character’s phys mat, etc. - so the object stays put. Though you could still run into issues: if you squeezed an object too much, it would probably still fly off.
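For the physical-material side, a rough sketch (UPhysicalMaterial and SetPhysMaterialOverride are real engine API, but the MakeGrippy helper and the friction value are just assumptions - normally you’d author the material as an asset in the editor instead):

```cpp
#include "PhysicalMaterials/PhysicalMaterial.h"
#include "Components/PrimitiveComponent.h"

// Hypothetical helper: override a prop's physical material with a
// high-friction one so it resists sliding out of the hand.
void MakeGrippy(UPrimitiveComponent* PropMesh)
{
	UPhysicalMaterial* GrippyMat = NewObject<UPhysicalMaterial>(PropMesh);
	GrippyMat->Friction = 0.9f; // assumed value; the engine default is 0.7
	PropMesh->SetPhysMaterialOverride(GrippyMat);
}
```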

But for the stuff like in the video you posted, he says he’s tracking those as well in the mocap, but I assume you’re trying to find a completely in-engine way to do that without tracking the real-world objects?

I haven’t really thought about using physical materials for static meshes. I was just using “simulate physics” for the static meshes like the soda can. Should I be using physical materials instead? I don’t plan to squeeze or stretch the objects since they’ll mostly all be hard physical objects.

And yes, I do plan to use mocap similar to what he’s doing in the video. So the objects will be props that are being tracked with mocap as well. That’s why I want the objects to be interactable in real time and also look realistic when coming into contact with the character.

Physical materials are just a part of the physics system. They define how aspects of the physics simulation behave - friction, restitution (bounciness), density, and so on.

By squeezing, I meant if you’re trying to pick it up in your hands and squeeze your fingers around it, not the can model itself deforming. If you make too tight a fist and are relying solely on physics interactions, your fingers will penetrate the can model and cause the physics to freak out.

But if you’re going to track the props, none of this physics stuff seems all that necessary.

Ah ok, I see. I definitely want to be able to pick up the props and have the fingers squeeze around them as closely as possible without clipping through.

Should I be using physical materials for the objects?

Don’t use physics.
There’s absolutely nothing it provides that’s any good in terms of object interaction.
After all, it’s not like you need complex formulas. It’s literally just collision checking.

You don’t do that off a PHAT; you do it off visibility collision so as to get accurate results.

If you need full finger detail, then each finger needs to cast a ray for tracing, and upon grab the finger chain needs to squeeze by the appropriate distances defined by the line trace.
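Something along these lines, as a sketch (the function, the socket naming, and MaxCurlDistance are assumptions; the key part is tracing against complex/visibility collision rather than the physics asset):

```cpp
// Hypothetical per-finger trace: returns how far this finger can travel
// before it would touch the prop's render-accurate collision.
float AMyCharacter::GetFingerStopDistance(FName FingerTipSocket, float MaxCurlDistance) const
{
	const FVector Start = GetMesh()->GetSocketLocation(FingerTipSocket);
	// Approximate the curl direction with the tip socket's forward vector.
	const FVector Dir = GetMesh()->GetSocketRotation(FingerTipSocket).Vector();

	FHitResult Hit;
	FCollisionQueryParams Params(TEXT("FingerTrace"), /*bTraceComplex=*/true);
	Params.AddIgnoredActor(this);

	// ECC_Visibility + complex trace: hits the actual triangles, not the
	// simplified PHAT bodies, which is what makes the result accurate.
	if (GetWorld()->LineTraceSingleByChannel(Hit, Start, Start + Dir * MaxCurlDistance,
	                                         ECC_Visibility, Params))
	{
		return Hit.Distance; // stop the finger chain's curl here
	}
	return MaxCurlDistance; // nothing in the way: curl fully
}
```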

There are all sorts of caveats to this, including your tick groups, when the distance calculations occur in the pipeline, what happens after they’re applied over time or over several frames, how they get adjusted, etc.

As a general rule: whenever you can, avoid simulations like your life depends on it.

If it helps, think of it this way: any time you use physics inappropriately, there’s a 90% chance something will manifest to try and chop off yar head for heresy against computer systems :stuck_out_tongue_winking_eye:

Nothing wrong with stuff like “kicking” the can and having it simulate after it’s kicked.
But it is wrong to expect any accuracy at all, both from the mesh collision and, more so, from the actual physics simulation.

Potentially, swap back over to PhysX if you can - it’s better than Chaos all around. But you still can’t really expect something like the grab to “just work”…

More importantly - don’t bother adding extra capsules to PHAT.
It supports around 15 or so tops (on cloth, anyway, but having a lot of them is bad practice in general).
Even with lower numbers, the accuracy they provide is quite limited since they’re approximations from the start - literally nothing about the system is meant to be particularly accurate…

Hmm, now I’m even more confused. It sounds like what you’re describing would best be used in games. I’m planning, however, to interact with objects in real time using mocap, and the objects in engine, such as the soda can, would have a real physical prop that is also being tracked.

So ideally picking up the soda can prop in real life with mocap would look identical in engine. Or as close as possible

That’s the key point.

With a Rokoko suit, assuming you also have the cube thing, you get an end result that’s no better than mediocre.

A result you still need to clean up manually, which ends up requiring a more than significant time investment (exactly what one would normally assume the suit money was spent to prevent).

The coding can come in to help lessen that cleanup. Instead of your fingers sitting 10 cm away from the can like Rokoko would pass into the engine, the engine itself can be made to intuitively adjust so as to require less work than you’d normally need.

However - none of that is actually done with PHAT at all.
The bone values the mocap suit sends are read in (usually via the Live Link plugin).
The values themselves are defined elsewhere.
The “physical” object in real life doesn’t necessarily match or have anything to do with the object in game.

Your example of a can can probably be done just the same by picking up a Vive tracker directly - since placing the tracker inside a can becomes problematic, to say the least.

Also, don’t go thinking the finger tracking of the gloves is at all accurate (for Rokoko as for any other similar product that doesn’t directly use markers and 3D cameras).
Even with the “magic” EMF-emitting box Rokoko offers, you get very sub-par results (likely not even worth the expense).

In other words, you get close. Closer than you would having to do it all manually - but you still have to do most of it manually.
So, cleaning up the animations by coding custom stuff in engine that you can leverage to produce a final “take” is really worth the time.

But again, it’s not done via physics; 99% of the time it’s just done with straight-up math.
Or data transfer, if you will.

The video uses cameras and trackers - which is how it manages to be more accurate than the Rokoko/sensor-based BS suits (worst expense I ever made, hands down - I do wonder if Vicon is around 4K all-in or less). Generally speaking, you’re better off paying a mocap studio to record what you need, since they give you the end product complete with all the adjustments for a fraction of what a do-it-yourself system costs.

You can clearly see bits of the video where the gun grips are way off - that’s the nature of passing data through without any sort of post-process adjustment.
He could improve the grip system by just adding a bit of code to do the proper post-processing for him…
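Purely as an illustration (AMocapProp, the hand_r_grip socket, and the threshold values are all assumptions): once the tracked hand gets within a few centimeters of the tracked prop, blend the prop onto a known grip socket so small tracking errors don’t read as a floating grip.

```cpp
// Hypothetical post-process correction, called every tick while "held".
void AMocapProp::CorrectGrip(USkeletalMeshComponent* CharacterMesh, float DeltaTime)
{
	const float SnapThresholdCm = 10.f; // assumed: within 10 cm counts as "gripped"
	const float InterpSpeed = 12.f;     // assumed blend speed

	const FVector GripTarget = CharacterMesh->GetSocketLocation(TEXT("hand_r_grip"));
	if (FVector::Dist(GetActorLocation(), GripTarget) < SnapThresholdCm)
	{
		// Ease toward the grip socket instead of teleporting, so the
		// correction reads as a clean grab rather than a visible pop.
		SetActorLocation(FMath::VInterpTo(GetActorLocation(), GripTarget, DeltaTime, InterpSpeed));
	}
}
```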

Your physics would only come into play whenever you “throw” an object and want the engine to be in charge of the end result/simulation.
Otherwise, everything is just linked to the data you import from the mocap system (whatever it may be).
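A sketch of that hand-off, assuming the prop is an AStaticMeshActor subclass and that you can estimate the hand’s velocity from the mocap data (both assumptions):

```cpp
// Hypothetical release: the prop is mocap-driven until this point; on
// "throw" the engine's simulation takes over with the hand's last velocity.
void AMocapProp::Release(const FVector& HandVelocity)
{
	UStaticMeshComponent* Mesh = GetStaticMeshComponent();
	Mesh->SetSimulatePhysics(true); // physics owns the prop from here on
	// bVelChange = true applies the impulse as a direct velocity change,
	// ignoring mass, so the prop leaves the hand at the tracked speed.
	Mesh->AddImpulse(HandVelocity, NAME_None, /*bVelChange=*/true);
}
```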

I’m not using a Rokoko suit; I’m using optical mocap, just like in the video I linked. That’s why I wanted to know how he did it. The props he uses are 1:1 replicas of the objects in Unreal, which is what I’m aiming for.

Optical is waaay more accurate than inertial suits like Rokoko and Xsens, which is why I switched to it. And a big advantage is being able to use props in real time. This is how AAA game studios and many Hollywood movies do it. Trying to use props with Xsens was just painful. A Vicon system like the one he’s using in the video costs way more than 4K (more like 10x that), but the accuracy is so worth it. Very little cleanup needs to be done in post, unlike with Rokoko.

Anyway, I just want to be able to pick up and move objects around in real time like he’s doing. I can make props and create their digital doubles; however, the picking-up aspect in Unreal is where I’m stuck. At least being able to do it as accurately and realistically as he does in real time. I also know for a fact he’s not doing any complex code to get it done.

He’s not. But again, all it is is data being transferred to position bones or objects.
That’s all there is to it. The object is tracked and moves around accurately.
The finger is tracked and moves around accurately.
If both are true, you get the same kind of results he gets (which is also true for inertial suits, btw…)