Hi!
I want to use AprilTags with RealityCapture, but I have a few questions I hope you'd be able to answer:
-Is there a way to detect AprilTags in the laser scans? That would help merge photos and laser scans more easily. Right now, if I hit the Detect button, it only gets the tags from the photos.
-It looks like the AprilTags are only used as control points, whereas I know you can also get the relative pose of the tag. Are you planning on using this feature of AprilTags?
This would make it easier to determine the camera’s position and reduce the need to overlap photos to obtain an accurate 3D representation.
Hello MaggBunny,
It is not possible to detect the AprilTags in the laser scans (LSPs), as in most cases the resolution is too low.
AprilTags can also be used as ground control points (with coordinates), but I am not sure if this is what you asked. What do you mean by the relative pose of the tags?
“Each image is searched for AprilTags using the algorithm described on this page. Using assumptions about how the camera’s lens distorts the 3D world onto the 2D array of pixels in the camera, an estimate of the camera’s position relative to the tag is calculated. A good camera calibration is required for the assumptions about its lens behavior to be accurate.”
My idea is that by using this method, you can estimate the camera position in every picture and then optimize those positions by combining them. That way, we’d need fewer overlapping shots of the tags, gaining speed and efficiency.
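To illustrate what I mean, here is a rough Python sketch of that per-image pose estimate, using the pupil-apriltags bindings. The intrinsics, tag size, and image path are placeholder values I made up, not anything RC provides:

```python
# Minimal sketch: camera pose from a single AprilTag in one image.
# Requires: pip install pupil-apriltags opencv-python
import cv2
import numpy as np
from pupil_apriltags import Detector

fx, fy, cx, cy = 3000.0, 3000.0, 2000.0, 1500.0  # camera intrinsics (pixels), placeholder values
TAG_SIZE = 0.16  # printed tag edge length in metres, placeholder value

detector = Detector(families="tag36h11")
gray = cv2.imread("shot_001.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

detections = detector.detect(
    gray,
    estimate_tag_pose=True,
    camera_params=(fx, fy, cx, cy),
    tag_size=TAG_SIZE,
)

for det in detections:
    # det.pose_R / det.pose_t give the tag's pose in the camera frame.
    # Inverting that transform gives the camera's position in the tag's frame:
    cam_pos_in_tag = -det.pose_R.T @ det.pose_t
    print(f"tag {det.tag_id}: camera at {cam_pos_in_tag.ravel()} (tag frame)")
```

So from a single, well-calibrated shot you already get a (rough) camera position relative to each visible tag.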
Yes, but that is the basis of photogrammetric processing. Or maybe I am missing something…
As I wrote, you can also use the tags with known coordinates, and then you will get the positions of your aligned cameras. But to get precise values, you still need proper overlap and capture.
Of course it uses the same principle, except that it needs only one shot to get an estimate of the scene (camera location/object location).
Knowing the coordinates just gives a more precise representation; with one shot and no coordinates, you already get a 3D representation.
Also, the main difference here is that it automates the process: since the control point is identified (RC already does that), it can be found in other shots. But what it can also do is estimate the camera location per shot and refine it when combining all the shots. So instead of 3 to 5 shots to locate one control point, you’d need 1 (probably a bit off) or maybe 2.
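To sketch what I mean by refining across shots: two shots that see the same tag each give a camera pose in that tag’s frame, and chaining the transforms gives a camera-to-camera relative pose that could then be averaged or fed into the usual adjustment. These are hypothetical helpers, just to illustrate the geometry:

```python
import numpy as np

def to_homogeneous(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pack a rotation R (3x3) and translation t (3x1) into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t.ravel()
    return T

def relative_pose(T_tag_in_cam_a: np.ndarray, T_tag_in_cam_b: np.ndarray) -> np.ndarray:
    """Pose of camera B in camera A's frame, via a tag seen in both shots."""
    return T_tag_in_cam_a @ np.linalg.inv(T_tag_in_cam_b)

def average_translation(relative_poses: list[np.ndarray]) -> np.ndarray:
    """Crude refinement when several shared tags are visible: average the
    translation estimates (a real pipeline would do a proper least-squares
    or bundle adjustment instead)."""
    return np.mean([T[:3, 3] for T in relative_poses], axis=0)
```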
OK, so I suppose this won’t be supported in RealityCapture (at least in the near future), as it uses slightly different principles to define the camera positions.
Thank you for your idea.