Reconstruction Stability across Frames

Hi - hope someone can suggest a way for us to take the jitter out of models reconstructed from sequential frames. We have a sequence of frames taken in a rig with 10 cameras zoomed onto our actor's face. The images are coupled with XMP camera definitions whose prior is set to "locked", so we believe the cameras to be in the same world location for each frame. However, the reconstructed mesh moves slightly between any two adjacent frames, so the result jitters very noticeably on playback. In this sequence there is nothing visible to the cameras which is itself fixed in space, but in a different sequence part of the rig itself is in shot and is of course static in world space, and the meshes we get from that sequence seem to jitter less. So: 1) does it fit with other people's experience that having something fixed in the world helps to stabilise a mesh in space? And 2) where we can't have something fixed in shot for practical reasons, is there something else we can do to anchor the reconstructions to a fixed frame of reference?

Hi HoveMike,

what is your setup for this? Are you using control points? Is it possible that the actor's face is moving slightly between the sequences?

Hi Ondrej - no, we’re not using control points as the cameras are fixed via the XMP files - each frame is using exactly the same cameras. The actor’s face is moving - his expression is changing - but it’s a gentle movement over half a second, not the jittery motion we see in the reconstructed mesh. 

In fact when we have fixed elements in the scene, we still see significant movement - this is the ground plane in two shots 30 ms apart, one frame shown in wireframe:

[screenshot: ground plane from the two frames overlaid, one in wireframe]

…and also the torso:

[screenshot: torso from the two frames overlaid]

and below is one of the XMPs, showing the "locked" prior:

<x:xmpmeta xmlns:x="adobe:ns:meta/">
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<rdf:Description xcr:Version="3" xcr:PosePrior="locked" xcr:Coordinates="absolute"
xcr:DistortionModel="brown3" xcr:FocalLength35mm="32.0229643689044"
xcr:Skew="0" xcr:AspectRatio="1" xcr:PrincipalPointU="-0.00345934196868512"
xcr:PrincipalPointV="-0.0130290164937476" xcr:CalibrationPrior="exact"
xcr:CalibrationGroup="-1" xcr:DistortionGroup="-1" xcr:InTexturing="1"
xcr:InMeshing="1" xmlns:xcr="http://www.capturingreality.com/ns/xcr/1.1#">
<xcr:Rotation>0.176154308893342 0.104264025628388 0.978825149052716 -0.626899301859215 0.778526656656792 0.0298916410259767 -0.75892484791759 -0.618890343950009 0.202503870033889</xcr:Rotation>
<xcr:Position>18.2173430089828 14.2321884193251 5.98963055636653</xcr:Position>
<xcr:DistortionCoeficients>-0.187622182292721 1.15721791645963 -2.38235809060244 0 0 0</xcr:DistortionCoeficients>
</rdf:Description>
</rdf:RDF>
</x:xmpmeta>
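As an aside, since the workflow hinges on these XMP values being reused verbatim, a quick sanity check on the xcr:Rotation matrix can rule out a corrupted or mis-copied sidecar: a valid camera rotation should be orthonormal (R·Rᵀ = I) with determinant +1. A minimal pure-Python sketch, using the values from the XMP above:

```python
# Sanity-check an xcr:Rotation from an XMP sidecar: a valid rotation
# matrix is orthonormal (R * R^T = I) and has determinant +1.
# Values copied from the XMP above.
R = [
    [0.176154308893342, 0.104264025628388, 0.978825149052716],
    [-0.626899301859215, 0.778526656656792, 0.0298916410259767],
    [-0.75892484791759, -0.618890343950009, 0.202503870033889],
]

def matmul(A, B):
    # 3x3 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(row) for row in zip(*A)]

# Largest deviation of R * R^T from the identity matrix.
RRt = matmul(R, transpose(R))
identity_error = max(abs(RRt[i][j] - (1.0 if i == j else 0.0))
                     for i in range(3) for j in range(3))

# Determinant via cofactor expansion along the first row.
det = (R[0][0] * (R[1][1] * R[2][2] - R[1][2] * R[2][1])
     - R[0][1] * (R[1][0] * R[2][2] - R[1][2] * R[2][0])
     + R[0][2] * (R[1][0] * R[2][1] - R[1][1] * R[2][0]))

print(f"max |R R^T - I| = {identity_error:.2e}, det(R) = {det:.6f}")
```

For this XMP both checks pass to within the printed precision, so the sidecar itself looks healthy.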

Hi,

do you use the same XMP data for all datasets? And did you follow this tutorial for using XMPs: https://www.youtube.com/watch?v=5VyYLaNxHz0&t=61s ?

Hi - yes, we're using the same XMPs for all the datasets. We exported the XMPs for our first frame, then copied and renamed them for all the subsequent frames. Hence we believe the cameras are identical in every frame, and so we'd expect the world space to be identical in every frame too?
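(For anyone following this route: the copy-and-rename step described above is easy to script. The sketch below is hypothetical - the folder layout `frames/frame_000/...`, the function name, and the assumption that every frame uses the same per-camera XMP filenames are all ours, so adjust to your own naming scheme.)

```python
# Hypothetical sketch: propagate the XMP sidecars exported from the first
# frame to every other frame folder. Assumes a layout like
#   frames/frame_000/cam_00.jpg + cam_00.xmp, frames/frame_001/..., etc.
# with identical camera names in every frame.
import shutil
from pathlib import Path

def propagate_xmps(root: Path, first_frame: str = "frame_000") -> int:
    """Copy every *.xmp in root/first_frame into each sibling frame folder.

    Returns the number of sidecars copied.
    """
    src_dir = root / first_frame
    copied = 0
    for frame_dir in sorted(root.iterdir()):
        if not frame_dir.is_dir() or frame_dir.name == first_frame:
            continue
        for xmp in src_dir.glob("*.xmp"):
            # Same camera name in every frame, so the sidecar name is reused
            # unchanged; RealityCapture matches XMPs to images by basename.
            shutil.copyfile(xmp, frame_dir / xmp.name)
            copied += 1
    return copied
```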

What are your prior pose settings for the inputs when you are re-using the XMPs? It should be set to "Position and orientation" when you import the second set of images.

You can set it here:

Hi and thanks for coming back so quickly, but I'm a little confused. Are you saying that for the first frame in the sequence the XMPs should have xcr:PosePrior="locked" and the "Absolute pose" field set to "Locked", but that the second and subsequent frames should use the same XMPs, still "locked" (renamed of course), with the "Absolute pose" field instead set to "Position and orientation" for each image?

If so, will that not give us a whole new alignment, so our cameras will shift, which is what we’re trying to avoid?

It seems that reconstruction isn’t deterministic - maybe it starts building from a pair of cameras at random and builds on the point cloud from those in a random order - but if I run against the exact same set of images in the exact same bounding box twice, I get meshes which are sufficiently different to be causing the jitter! 

Hi Mike,

no, in the first dataset it is important to export the XMPs as locked. Then, in the second dataset you should set the absolute pose to "Position and orientation". It will then take the positions from the XMPs and the images will be in the same spot.

In the second case, are those two separate projects, or two alignments? The important thing here is the alignment: the reconstruction is built on top of the alignment. I recommend using some control points in this case to preserve the same coordinate system across all of your datasets, as shown in the attached video.

[video attachment]

Hi Ondrej - thanks for your patience with this. I understand what you're saying about setting the absolute pose, and you're right: on exporting I see that the cameras have moved in the second frame, so it is a different alignment despite my hoping otherwise! A separate conversation on this theme has started between Capturing Reality and the US side of our company, so rather than take more of your time I'll park this here, and when we get our workflow resolved I'll post the details back for anyone who comes this route via a search. Thanks once more for your help!

OK, good luck with your project!