Accept 180-degree and 360-degree spherical images as input

I found some discussions elsewhere on the forum about using 360-degree images as input, but no feature request yet, so here it is. Please provide support for 180-degree spherical, 360-degree spherical, and equirectangular images. I have a Samsung Gear 360 camera and taking images with it is really easy. I want to just walk through a street with the camera in the air in interval capture mode and feed the results straight into RC. Combine these with aerial images from a drone and voila. Here is an example image from the Samsung.

Doesn’t it spread the pixels over a huge area - can RC pick features at such low res?

I found a tool to convert the equirectangular images into cubemaps. Below is an example, almost 2k x 2k, which should be enough for reconstruction. But… since these cube faces don’t have any overlap, RC doesn’t handle them well. From each 360 image, more than 6 images should be generated, each with roughly 60% overlap with its neighbours. That’s why this feature should be built into RC.
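For anyone wanting to experiment before this lands in RC, the explode-with-overlap step can be scripted externally. Here is a minimal sketch (NumPy only, nearest-neighbour sampling; the function names and the 60% overlap step are my own choices, not anything RC provides) that samples rectilinear views out of an equirectangular panorama at yaw/pitch steps chosen so neighbouring views overlap:

```python
import numpy as np

def equirect_to_perspective(equi, fov_deg, yaw_deg, pitch_deg, out_size):
    """Sample one rectilinear (pinhole) view out of an equirectangular
    panorama, using nearest-neighbour lookup to keep the sketch short."""
    h_e, w_e = equi.shape[:2]
    f = (out_size / 2) / np.tan(np.radians(fov_deg) / 2)   # focal length, px
    xs = np.arange(out_size) - out_size / 2 + 0.5          # output pixel grid
    x, y = np.meshgrid(xs, xs)
    z = np.full_like(x, f)
    p, t = np.radians(pitch_deg), np.radians(yaw_deg)
    # rotate each pixel's viewing ray: pitch about x, then yaw about y
    y2, z2 = y * np.cos(p) - z * np.sin(p), y * np.sin(p) + z * np.cos(p)
    x3, z3 = x * np.cos(t) + z2 * np.sin(t), -x * np.sin(t) + z2 * np.cos(t)
    lon = np.arctan2(x3, z3)                               # [-pi, pi]
    lat = np.arcsin(y2 / np.sqrt(x3**2 + y2**2 + z3**2))   # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * w_e).astype(int) % w_e
    v = np.clip(((lat / np.pi + 0.5) * h_e).astype(int), 0, h_e - 1)
    return equi[v, u]

def view_directions(fov_deg=90, overlap=0.6):
    """Yaw/pitch pairs so horizontally adjacent views share `overlap` of
    their field of view (36-degree steps for a 90-deg FoV at 60% overlap)."""
    step = fov_deg * (1 - overlap)
    return [(yaw, pitch) for pitch in (-45, 0, 45)
            for yaw in np.arange(0.0, 360.0, step)]
```

With 90-degree views at 60% overlap this yields 30 views per panorama instead of 6 cube faces, which is exactly the kind of redundancy RC's alignment likes.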

Nothing wrong with that resolution - I realise it’s not just a spherical fisheye.

I think the feature might be already planned.

If you want to use them for creating geometry, the usual rules would still apply though.

Creating more than 6 images would achieve nothing at all, since the extra images would differ only by rotation: there would be no parallax that could be used to calculate depth information.

Surely, as all six are being generated from a single viewpoint, even that basic six can’t provide depth information. In fact, the basic non-cubemapped spherical image can’t do that either.
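The single-viewpoint argument is easy to verify numerically: two cube faces from one panorama differ only by a rotation of the camera, so points at any depth along the same ray project to identical pixels, leaving nothing to triangulate. A small sketch (hypothetical pinhole helper, my own naming, nothing RC-specific):

```python
import numpy as np

def pixel(point, R, f=500.0):
    """Project a 3D point through a pinhole camera at the origin,
    rotated by R (hypothetical helper, not RC code)."""
    x, y, z = R @ np.asarray(point, dtype=float)
    return np.array([f * x / z, f * y / z])

t = np.radians(30.0)                       # a second cube face = pure rotation
R = np.array([[np.cos(t), 0.0, np.sin(t)],
              [0.0, 1.0, 0.0],
              [-np.sin(t), 0.0, np.cos(t)]])

near = np.array([0.2, 0.1, 1.0])           # a point 1 unit away
far = near * 10.0                          # same ray, 10 units away

# both depths land on exactly the same pixel in BOTH views,
# so this pair of views carries no depth information at all
assert np.allclose(pixel(near, np.eye(3)), pixel(far, np.eye(3)))
assert np.allclose(pixel(near, R), pixel(far, R))
```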

There must be more than one spherical image, taken from different viewpoints?

Yes exactly - very time-consuming in my view…

Walking/cycling/driving through a street with a 360 camera set to a regular 1-second interval capture rate is not very complex or time-consuming. Exploding these 360 images into multiple overlapping smaller-FoV images lets RC align the different capture points with each other, and also with drone images taken at a higher altitude in a regular grid pattern. My drone can create a 360 image automatically by taking 25 overlapping images. I used these 25 base images (not the stitched 360 image) as input for an RC model, combining them with the grid-pattern images the drone took at a higher altitude. RC aligned the lower-altitude 360-derived images and the higher grid-based images automatically, and the result was a more detailed 3D model than I had without the lower-level images.

You could achieve a similar alignment by creating 360-degree images from ground level and exploding them into 25 overlapping non-360 images (i.e. the reverse of what my drone does).
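As a rough illustration of such an exploded pattern (the exact ring layout of any particular drone is an assumption on my part), the 25 view directions could be enumerated like this:

```python
import numpy as np

def directions_25():
    """Hypothetical 25-view pattern: one zenith shot plus rings of yaw
    angles at several pitches, mimicking a drone's 25-shot panorama.
    The exact layout of any real drone is an assumption here."""
    ring = {60: 4, 30: 8, 0: 8, -30: 4}    # pitch -> number of yaw steps
    views = [(0.0, 90.0)]                  # one straight-up (zenith) shot
    for pitch, n in ring.items():
        views += [(yaw, float(pitch))
                  for yaw in np.linspace(0.0, 360.0, n, endpoint=False)]
    return views
```

Each (yaw, pitch) pair would then drive one reprojected rectilinear view out of the ground-level panorama.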

I absolutely agree with Pjotr that implementing spherical image processing in RC would be great. Our company would also like to combine drone images with 360° images taken on the street. Honestly, we are holding off on buying the RC software until this feature is included. The RC team has been promising to incorporate such a tool for, I guess, the second year now, but I still do not see any progress.

Please, vote for this Feature Request in the top right of this page to maximally highlight the demand for this tool.


Yes, I would be interested in spherical input too…

Desirable would be

  • Equirectangular input
  • Cubic Input
  • OpenEXR input

Currently using an HDR system with 50-100MP output

Best, Huw

The Netherlands

This feature would be great, I have many Insta360 Pro videos I would like to start processing as soon as possible.

Would also love such a feature, as then I could use the 360 images from our Matterport cameras to produce - hopefully - a higher-quality 3D model together with their OBJ / point cloud export.

Matterport has recently introduced 360-pano-to-3D in their cloud, also for single-shot 360 cams.

@Huw Thomas - where are you based in the Netherlands? I am often in The Hague.

@mori - I am in Breda, South of The Hague / Rotterdam. Email me if you want a chat…

foci360@outlook.com

Is there any update on this feature?

As this feature is now planned, I hope it will get pushed into development. Many sites use spherical images for real estate and are quite successful with their reconstructions, so I would assume RealityCapture could deal with this easily.

Was wondering if there are any updates on 180 and 360 input image support?

I have made some prior intrinsic calibrations with a Charuco board externally (GitHub, ChaurcoCameraCalibration) and used them as intrinsics before attempting to reconstruct a simple scene from 8K 3D180 video (from a Qoocam 3 Ultra 180VR mod). I can align the AprilTags quite well, but the table in the point cloud looks warped at the edges, suggesting wrong image undistortion even after multiple attempts. Ideally I could use the stereoscopic camera rig as a defined unit, as it would lend itself greatly to depth estimation etc. I would like to use this in addition to a LiDAR dense cloud to add good imagery (hence the Qoocam Studio output as L/R separate square undistorted images rather than fisheye), but it looks like this ‘camera’ model is not supported well, even when using the advanced undistortion models. Not sure where to go from here. Ideally I would get camera poses as output for 3DGS.

Link to dataset “OutsideTable AprilTags” :slight_smile:

Hi, there is no update for such data.

In general, for such images the Division distortion model is recommended.
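For reference, if RC's Division model follows the common one-parameter division model (Fitzgibbon 2001) - an assumption on my part - an undistorted point is recovered by dividing the distorted one by 1 + k·r². A minimal sketch (my own function name, normalised image coordinates assumed):

```python
import numpy as np

def undistort_division(pts, k):
    """One-parameter division model: recover an undistorted point by
    scaling the distorted one by 1 / (1 + k * r^2), with r the distance
    from the distortion centre (origin here, normalised coordinates)."""
    pts = np.asarray(pts, dtype=float)
    r2 = (pts ** 2).sum(axis=1, keepdims=True)
    return pts / (1.0 + k * r2)
```

A single parameter like this handles moderate barrel distortion well, which is part of why it struggles with extreme fisheye fields of view.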

Hi otrhan,

Unfortunately, with large-FOV cameras such as this one (180° FOV) or 360° ones, this is not really a good option because of the inherent distortion issues that cannot be resolved properly.

For such cameras/lenses a different model would be required which is currently not implemented.

https://au.mathworks.com/help/vision/ug/fisheye-calibration-basics.html
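The linked page illustrates why rectilinear-based distortion models break down at such fields of view: a pinhole projection maps image radius as f·tan(θ) and diverges at 90° off-axis, while a fisheye (equidistant) projection maps radius as f·θ and keeps a full hemisphere inside a finite image circle. A toy comparison:

```python
import numpy as np

def project_pinhole(theta, f):
    # rectilinear: image radius grows as f * tan(theta),
    # diverging as theta approaches 90 degrees off-axis
    return f * np.tan(theta)

def project_equidistant(theta, f):
    # equidistant fisheye: radius grows linearly with the angle, so a
    # full 180-degree field of view fits inside a finite image circle
    return f * theta

# at 80 degrees off-axis the rectilinear radius is already ~4x the
# fisheye radius, and it blows up completely before 90 degrees
theta = np.radians(80.0)
ratio = project_pinhole(theta, 1.0) / project_equidistant(theta, 1.0)
```

No polynomial correction on top of the tan(θ) mapping can fix this near the edge of a 180° image, which is why a dedicated fisheye model is needed.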

I do see a lot of use for this in the future, as the extreme FOV makes it a very suitable candidate for quick scanning / reconstruction, especially in conjunction with LiDAR scans, etc.

Please do consider such a model.

(Using LichtFeld with GUT for 3DGS output)

I can add this as a feature request to our database.

Thanks otrhan, I’m sure many users will appreciate this, as there is currently no good pipeline for it :wink: