I think having a set of images from a UAV flying a double grid would be pretty good.
You can get a pretty decent model from 300-500 photos that way.
I just tried to combine WOW with ALREADY THERE.
The stag would fit that, in my opinion… but that’s not possible, so moving on…
Who will be first?
@ chris: I think that the internal processing for a model like you suggested might be quite different from an “all around” one, so I guess it would make sense to have one like that as well, especially since it is a common application for photogrammetry…
I would actually suggest having multiple objects ranging from near-perfect source images to low quality / noisy / video source images to show what magic RC can do even with bad source images.
Ideally it would be the same object because that way they can be compared and the differences shown.
That does make sense Shadow, and not a bad idea at all, although it would require a good amount of strict control to ensure the set variables are consistent between shoots. It is more of an additional test to see what difference image quality makes. It could easily become quite in-depth fast, and it ties into what I was suggesting with the test scene. And yes, Chris, UAV flights can also produce great results.
Jpg vs Raw
Resolution differences
Quantity of Images
Various ISO levels
Various Focal Lengths
Sensor Sizes
Bayer vs Foveon
Optical Stabilisation on/off
… the list goes on
These are just some of the things we know make a difference, but to quantify them would be really nice.
I have been making progress on the technical side, starting with the least fun and trickiest bits.
I now have a script/batch file that loads up any set of images, goes through all the motions and spits out the following below as a txt file. No other apps required, just click and go.
I think the output is mostly relevant. Have I missed anything obvious?
I couldn’t interrogate the hardware as deeply as I wanted without external apps; however, I think if those can be avoided it will be far better.
Start Time 1:07:29.43
Alignment Time 12 seconds
Reconstruction Time 48 seconds
Simplification Time 6 seconds
Texturing Time 124 seconds
End Time 1:10:45.11
NumberOfProcessors
1
Name=Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
NumberOfCores=4
NumberOfLogicalProcessors=8
GPUName
NVIDIA GeForce GTX 1080 Ti
TotalPhysicalMemory
34272317440
Capacity    PartNumber          Speed
8589934592  CMK16GX4M2B3200C16  3200
8589934592  CMK16GX4M2B3200C16  3200
8589934592  CMK16GX4M2B3200C16  3200
8589934592  CMK16GX4M2B3200C16  3200
Model
Samsung SSD 850 EVO 500GB
Size 500105249280
Windows Version
10.0.16299
Pagefile Peak Usage
40
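For the curious, the hardware half of that output boils down to a handful of calls to the built-in WMIC tool, along these lines (a simplified sketch, not the exact script):

    @echo off
    rem Sketch only: collect the hardware section with the built-in WMIC tool,
    rem appending to the same results file the benchmark writes its timings to.
    set RESULTS=Results.txt
    wmic computersystem get NumberOfProcessors,TotalPhysicalMemory /value >> %RESULTS%
    wmic cpu get Name,NumberOfCores,NumberOfLogicalProcessors /value >> %RESULTS%
    wmic path win32_videocontroller get Name /value >> %RESULTS%
    wmic memorychip get Capacity,PartNumber,Speed >> %RESULTS%
    wmic diskdrive get Model,Size >> %RESULTS%
    wmic os get Version /value >> %RESULTS%
    wmic pagefile get PeakUsage /value >> %RESULTS%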
Next stage is to parse the data into an easily uploadable online database, which can then nicely display the results to us all.
Hopefully next week, we should have something to test out.
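The upload step could end up as little more than a one-liner at the end of the bat file; the endpoint below is purely a placeholder, since the database does not exist yet:

    rem Placeholder endpoint only - the real database/URL does not exist yet.
    powershell -Command "Invoke-RestMethod -Uri 'https://example.com/rc-benchmark/upload' -Method Post -InFile 'Results.csv'"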
Hey ivan,
great work!
Is that CLI now?
Would it be possible to split the processing up into all the individual steps? E.g. depth map calculation has different HW needs than modelling etc…
What more info would you want? I think it shouldn’t be too complicated either, since you can never have 100% comparable setups anyway. It depends on so much! I think what ShadowTail and you are aspiring to is its own research project!
It would be useful to also see how much each component has been used, but I guess that’s what you mean about deep HW interrogation…
Could anyone answer my question concerning running the demo in parallel with the Promo?
Yes, it’s CLI based, so end users should get a zip file with pretty much the following structure inside:
Images Folder
Benchmark.bat (contains all the script and code)
Settings.rcprog (contains variables required by the script to be used in the application/benchmark and makes no permanent changes)
Results.txt/csv (created as the bat file is run - this will need to be uploaded to the database)
CompletedBenchmarkScene.rcproj (created when benchmark finished)
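To give an idea of what the bat file boils down to, here is a stripped-down sketch. The switch names follow my reading of the RealityCapture CLI documentation and may differ between versions, and the real script times each stage separately rather than just the total:

    rem Stripped-down sketch; switch names per the RC CLI docs, may vary by version.
    rem The real script times each stage separately rather than just the total.
    set RC="C:\Program Files\Capturing Reality\RealityCapture\RealityCapture.exe"
    echo Start Time %TIME% >> Results.txt
    %RC% -addFolder Images ^
         -align ^
         -setReconstructionRegionAuto ^
         -calculateNormalModel ^
         -calculateTexture ^
         -save CompletedBenchmarkScene.rcproj ^
         -quit
    echo End Time %TIME% >> Results.txt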
I am exploring a different method also, as CLI does make things potentially tricky if using the promo; however, being able to control all the functions is very handy indeed. It’s all work in progress.
Parsing the exported data is testing me, as multiple HDs/GFX cards/CPUs/RAM sticks can create extra lines and shift the results about, so the results structure is a little dynamic depending on the system and I need to figure that out. There are also some pathing issues.
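One thing that may tame the shifting layout: WMIC can emit CSV directly, which gives one row per device however many are fitted, so the structure stays predictable:

    rem /format:csv yields one row per RAM stick / disk, however many are fitted.
    wmic memorychip get Capacity,PartNumber,Speed /format:csv >> Results.csv
    wmic diskdrive get Model,Size /format:csv >> Results.csv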
For the moment, the substages within each calculation are not recorded; I am working on that. Ultimately I think the required data can be extrapolated without them… I think… maybe…
Recording the % of CPU/RAM used as a timeline is possible; however, it makes the results file huge and complex, as polling has to be captured constantly throughout the process. And whilst interesting, you just get a list of tens of thousands of numbers; you get a better visual picture of what is being used at certain points by playing a game of watch-the-task-manager.
We do indeed need to be wary of undertaking a PhD in image analysis.
Re the Promo/Demo question: if I recall, it did not pose a problem for me when I tried last. Things may have changed, so proceed at your peril…
Hi ivan,
so you think it MIGHT work even with the Promo?
Maybe we could sway Michal to provide testers with a short CLI license; that would alleviate this problem.
I get what you’re saying about the CPU usage. Would it at all be possible to thin it out by using only every, say, 100th or 1000th value and ditch the rest? But it’s your call since you do all the essential stuff. It’s really great that you are putting in all that work.
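For what it’s worth, the built-in typeperf tool can already thin at capture time by sampling at a fixed interval rather than continuously; a sketch, not necessarily what ivan’s script uses:

    rem Sketch: sample total CPU once every 10 s, 120 samples max, straight to CSV.
    typeperf "\Processor(_Total)\% Processor Time" -si 10 -sc 120 -o cpu_usage.csv -y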
I am planning on providing an image set as my contribution. I use a 12 MP camera, so that would also fit Michal’s suggestion of around 10 MP. It’s not high end at all, but we are not trying to create the best model ever, just a sound basis for a benchmark, right?
I’m late to this thread, but what an awesome development; count me in. I’d have to check on rights, but I recently captured in a Siberian salt mine, a highly occluded environment: not so much the mine walls, which are smooth, but sections with human stuff, a burly mining machine, an area where miners eat lunch, lots of texture-rich tools lying about, an old dial telephone on the (psychedelic) wall. For wow factor, this place hits you in unexpected ways: surprise, complexity, natural beauty, human authenticity, while also presenting the right challenges: interior space, occlusions, etc.
This image comes off the internet; it gives you an idea.
The source data comes from a 42 MP Sony A7Rii w/ 21mm Zeiss Distagon prime; we’d simply downscale to 10 MP. I can share an animated clip out of UE4 offline (password protected) to show how some of these scenes from RC appear in deliverables.
In any event, count me in to participate with benchmarking. I’ve been planning an upgrade and am aware how key benchmarking is, especially when tied to specific apps and the separate functions within them.
Benjy
Benjy - what an interesting subject - I can indeed imagine that such an environment is quite surreal and beautiful in its imposing and harsh ways. It would be great to see.
I’d imagine those machines would work brilliantly due to the dusty and dirty environment giving a lot of texture. However they would also need a lot of images to avoid misalignment bugs for such a scene.
I found a lion skull (as you do), thinking that would make an exciting subject… now not so much…
I also have exactly the same capturing equipment in my arsenal, so can vouch for the results that are possible.
- Off subject, one thing I have found is that the software does not support the Sony raw files, so I have to pre-convert them to TIFF or similar beforehand. I did at one point manage to get the software to read them; however, I believe it was extracting the JPG preview from within the raw and not the true raw image data itself.
Gotz - I don’t think it will work directly with the promo (part of the reason the promo has the more accessible price over the full editions is that automation is disabled). However, I am pretty sure I installed the demo alongside when I had the promo installed, and was then able either to run them side by side, or just uninstall the demo afterwards and the promo would resume fine as before. It was a few months back so I can’t remember exactly - I’m pretty sure it worked fine.
For the moment I won’t be adding the CPU% stuff, as the parsing of the data & coding is testing me enough as it is. I have got it working as a proof of concept - dealing with the outputs is another matter. So in time, maybe.
Frustratingly, I am in between systems at the moment and awaiting a new workstation; it will take a week+ to be built and arrive, so I cannot test right now.
Ivan,
The idea to limit the more robust test to 500 images is the right idea, so whatever amount of real estate can be covered with 500 is enough. A relief sculpture is only a bit tougher than a Van Gogh painting, which also applies a bit to aerials, depending on what you’re flying over and how close. The mining machine is too complex, I agree; I believe it was nearly 2,000 images just for the front end. I’m presently reconstructing the scene with the lunch table and tools lying about, and will see what 500 provides for. I also have a Gothic chair that is heavily carved, with lots of tiny windows carved through front to back; I believe it’s 800+ images total, but I suspect I shot well more than enough, so I could try culling to see if 500 bears fruit. Personally, I think doing an object is less impressive than 3D mapping part of a complex environment.
Ivan,
looking at those logs (“Reconstruction Time 48 seconds”),
do you know if there is a way to split that time into the depth map and model generation parts?
It might not be possible,
but without that split we won’t get separate GPU and CPU scores, just a mixed GPU+CPU score, which is less interesting for me.
Indeed, learning from the log how a particular GPU, CPU, etc. influences performance is the point of the exercise. Maybe that PhD in imaging is precisely what’s needed here ;^( Someone could write a macro to snip Resource Monitor and a GPU monitoring utility like TechPowerUp GPU-Z in time lapse at intervals, then find a way to glean values from the changing rasters… This appears to call for heavier lifting than off-the-shelf tools support. Definitely over my pay grade to properly conceive. We need a hero.
Hmmm, that gives me a thought. Whatever happens within these CPU/GPU monitoring apps ends up getting rasterized to a set of graphs, but what’s actually needed is access to the discrete samples driving those graphs, plus a routine for averaging and comparing values. We’re not the first to talk about the relative worth of benchmarking with off-the-shelf tools; the functions running within specific apps call for more granular insights.
If someone approached one of the developers behind something like TechPowerUp’s GPU-Z and pitched them the concept we’re after - with the broader value to them of vastly improving the utility of their app when pointed at a user-specified app, RC in this case, to generate this awesomely informative log - I believe there’s a strong selling point here, especially for manufacturers vying for sales, to take benchmarking to the next level.
I personally don’t mind taking a stab at making contact and opening a dialogue. Maybe that’s all I do, handing that dialogue off to Ivan, Michal, whomever, to present an orderly list of specs we’d want this app to perform to. TechPowerUp is just an example; if it’s the right idea, we should generate a prioritized hit list. It sounds a bit ambitious establishing momentum, but I’m game to put in a little time toward this end and fly it up the flagpole.
It might just be easiest if we ask the RC team to write a bit more info to the log.
If we can have the reconstruction time split into depth map generation and model generation, then we should have all that we need.
I have already contacted the devs regarding this. It’s the weekend - so let’s be patient - I’m sure it will be possible.
The application is aware of this stage, so theoretically it shouldn’t be an issue. I have put a placeholder in the code for it. Otherwise I can extrapolate the data via more tricky scripting, which isn’t so elegant.
I have tried quite a few ungraceful things, some using 3rd party apps including GPU-Z (which is free to distribute non-commercially; modifications cost $). I think the way forward is to avoid any external applications, to avoid legal issues, the complexity of working with the different ways each additional app handles data, and the issue of keeping compatibility across versions.
I also think it would be improper to contact 3rd parties even if the intentions are good :)
The Capturing Reality team likely have their own agenda, and I do not believe it would be professional of us to overstep/act on their behalf.
All this could be done in-app; however, development time is likely better spent on other new features etc. It may be on the roadmap somewhere down the line…
Everything is currently achievable via the app and some crafty scripting, even gpu monitoring.
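For NVIDIA cards, for example, the driver already ships with nvidia-smi, which can log utilisation to CSV with no extra installs (a sketch; the install path varies by driver version):

    rem nvidia-smi ships with the NVIDIA driver; install path varies by version.
    rem Logs GPU load and memory use every 5 seconds until stopped.
    "C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" --query-gpu=utilization.gpu,memory.used --format=csv -l 5 > gpu_usage.csv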
That does make much better sense, keeping this within the app. Since with every export of an asset RC alerts the user that stats are being sent their way, couldn’t this performance data be collected alongside system specs, with the comparison among all users leveraged to provide the benchmarks? Right, development time is better spent on new features.
Hey Benjy,
awesome that you offered to share your salt mine stuff!
I have seen the staff room and I agree this would be an excellent choice.
It’s rare, quirky and an interior; interiors are rarely seen and people often seem to have difficulties with them. So that could contribute to showing that it IS possible.
I’m somewhere in between ivan and Benjy on the approach, but since ivan is writing everything, the call is entirely up to him. The only issue might be, as chris pointed out, that the CPU and GPU parts should be separate to maximize the benefit.
Anyway, thank you again, ivan, for doing all this!
Good news
The benchmark now records the time taken for each processing stage separately, including depth map generation and model creation.
It was a case of me misunderstanding the effect of one of the CLI switches. Michal kindly pointed me in the right direction.
Hey ivan,
excellent! I think now everything is covered, right?
So now we “just” need a suitable image set…
I’m just testing whether I can get an interior of a small Gothic choir covered in wall paintings to align with <500 images. Coverage would not be perfect (e.g. behind the altar), but it would certainly give a nice impression.
One thing just came to mind: I guess we would need to rely on the automatic reconstruction region, right? Since RC tends to vary the orientation quite a bit, it would not be identical in most cases, which would skew the results. To avoid this, we should add some GCPs to the scene. I have no idea if the automatic box is then always the same or if it also varies; if it does, we need to import a custom reconstruction region.
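If the automatic box does vary, a canned region file shipped with the image set should pin it down. Assuming I read the CLI docs correctly, something along these lines (switch name to be verified):

    rem Assumed switch per the RC CLI docs (to be verified): load a fixed
    rem reconstruction region so every run reconstructs exactly the same box.
    %RC% -setReconstructionRegion BenchmarkRegion.rcbox -calculateNormalModel -quit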