Good point Götz, I also considered this, however from my tests the region was always the same - maybe it isn't, or it's slightly different.
I expect there will always be slight variances from run to run, as with all benchmarks. The best way to get accurate results is to run something many times and then take the average. I don't think that level of accuracy is needed, unless the region does indeed cause issues.
There is no project file, except the one generated by the benchmark at the very end. I could however set a fixed region via the code if need be, so that it is always constrained.
However, as it stands, the code is totally dataset-indifferent, which I think is best, as that would enable us and end users to swap only the contents of the images folder if they wish to benchmark a different project. That makes things best in the long run.
I *may* be able to have different options. That would be Alpha 0.2 - we're not past 0.1 yet.
Makes sense what you say. I guess with some GCPs included it should be all right, since then the orientation will be identical for everyone and the auto selection should then be close enough…
Technically you should be able to generate a custom .rcproj file, though I don’t recall if you can specify the reconstruction region or GCPs in there. Those might be saved as separate data files by RC.
Also you need to keep in mind that the demo version of RC likely uses a slightly different format for the project files.
Shadow - you are correct, I could create a custom rcproj file for the project and global settings (and back up the existing ones), and also have a specific file just for the region. Avoiding GCPs is always best, as in my experience needing them just means there is something off with your images. These things can be *easily* added/adjusted as we proceed.
What are people's thoughts on an ID for the benchmark? No identifiable system data is scraped.
However, some kind of identifier will be required, as the results list will eventually get long and you will need a way to pick out your own results. I can extract the username from Capturing Reality, and maybe just use the first name plus the first letter of the last name.
So, for instance, Ivan Humperdinck results in Ivan H.
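Just to illustrate the idea, a rough batch sketch of the "first name plus first initial" shortening is below - the FULLNAME value is only a placeholder, and how the real script would pull the username out of Capturing Reality isn't shown:

    @echo off
    rem Rough sketch only: shorten a full name to "First L" format.
    rem FULLNAME is a placeholder; the real script would read it from the
    rem application settings (not shown here).
    set "FULLNAME=Ivan Humperdinck"
    for /f "tokens=1,2" %%a in ("%FULLNAME%") do (
        set "FIRST=%%a"
        set "LAST=%%b"
    )
    set "SHORTNAME=%FIRST% %LAST:~0,1%"
    echo %SHORTNAME%

Running that prints "Ivan H".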
Oh, I would have thought to create an rcproj file with the images and GCPs already included. All people would need to do is adjust the image path.
If the file format is different with the demo, then there might be a problem. I guess that Michal would have pointed that out to Ivan. There are many users out there without a CLI version…
It seems like you want to preserve your anonymity!
I think it might be better to use the HW as identifier for exactly that reason. Maybe with the first letter of the name or so. But in my case the first name would be almost a 100% give-away…
The name is ok for internal purposes while testing but later I think it should be as neutral as possible when it goes public.
GCPs are not there to patch up a model but to geo-reference (scale) it. It’s an entirely standard procedure and it would make sure that the model is identical for everyone who runs the benchmark…
You're 100% correct, GCPs are for what you suggest, I was getting mixed up with CPs. With all the scripting done over the last week… I don't even see the code. All I see is blonde, brunette, redhead.
Re: an identifier, I can ask it to prompt the user for whatever they wish to enter at the start, so that may be better.
Say you upload your results, then at a later date you upload more from a different system or after a hardware change, and wish to compare them; being able to locate those results in the database by scrolling down to 'F', which will show all results by "FluffyBunny", is the idea.
I haven't finalised the way selection will work yet, however I did want to have an identifier that people could see so they could compare their own results with others if they wanted.
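For what it's worth, the prompt itself could be as simple as a set /p at the top of benchmark.bat - a minimal sketch, assuming nothing fancier than plain batch (the NICKNAME variable name is just illustrative):

    @echo off
    rem Minimal sketch of a start-of-run prompt; NICKNAME is illustrative only.
    set "NICKNAME="
    set /p NICKNAME=Enter an identifier/nickname for your results: 
    if "%NICKNAME%"=="" set "NICKNAME=Anonymous"
    echo Running benchmark for %NICKNAME% ...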
It’s a great project as is, but maybe I misunderstood the scope. I thought there’d be some kind of analytics applied to the database to make the comparisons for us. If we manually study others system specs against our own and work to correlate performance values for a given operation to which hardware represents the winning horse, I’d think that could be a non-trivial exercise to tease out from the confluence of so many factors. No? I may be totally off base here, overthinking it, maybe it’s more straightforward an exercise.
The idea is that the data you upload, will be displayed to you (and then others), and you can choose to compare that dataset either against a previous run that you did, or other variables.
I expect different people will wish to analyse the data in different ways.
For me personally - and maybe selfishly, I wanted to compare initially against my own results, so I can make adjustments to my system and see how they affect each stage. - These will be presented to everyone, for better or worse.
Seeing how things compare on other systems will be great too.
Data analysis can be a complex matter in itself, yes we are talking another PhD :D. Presenting it in a manner that is ideal for everyone won't likely be possible, however we can work on some pretty graphs etc. The idea isn't to make it a race to show who is at the top or who has the fastest system - ultimately that data will be very valuable for seeing the system spec behind how/why that was achieved.
However it may not be the case that one system is the fastest at all stages, so I'll let the user choose whether they want the data ordered by date/user/fastest alignment/fastest depth map/fastest model creation/fastest texturing/gfx card/cpu etc… from which they can make the comparison. I suppose a few default calculations could be done to say "Your computer sucks, it is 22448% slower than the fastest shown here." etc :D … Then you can try and see why. It will probably leave me full of buyer's remorse.
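Just to show the kind of default calculation I mean, a "percent slower" figure could come from a simple comparison of stage times - an illustrative sketch with made-up numbers (set /a is integer-only, so it is only rough):

    @echo off
    rem Illustrative only: derive a "percent slower" figure from two stage
    rem times in seconds. The values below are made up.
    set /a MYTIME=1240
    set /a FASTEST=320
    set /a PCT_SLOWER=(MYTIME-FASTEST)*100/FASTEST
    echo This run is %PCT_SLOWER%%% slower than the fastest time recorded.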
This is not close to final, however I need feedback on how/if the scripting works on other people's systems.
The online database is work in progress, so I have omitted any code regarding that.
Things you need to do: unzip to your desktop. It will currently only run from there.
Choose your images and place them inside the Desktop/CapturingRealityBenchmark/Images folder
For now I'd suggest a smaller collection that you know works. - None are currently included.
Run the benchmark.bat file
You will be asked to enter an identifier/nickname at the start.
Sit back and relax
Once the benchmark has run its course you will be given the option to enter any additional notes.
The results will be generated into a file called results.txt. It should look similar to this.
Don’t worry that the times are not labeled etc, that is all being dealt with when the data is parsed at the database.
If your txt file looks different to this, please share - especially if you have multiple GPUs or HDDs.
Current Known Issues/Potential Issues
If the dataset is too small or the computer is too fast and completes a section in under 15 s, it may not record the timestamp for that section. Fix: increase the number of photos.
I cannot identify if more than one GPU is present (this requires the CUDA toolkit), or we must wait until my workstation arrives so I can test multi-GPU.
2.5) The same goes for multiple HDDs. (A possible wmic-based workaround for both is sketched after this list.)
I run Windows 10; I am unsure if all the commands/scripts will work on earlier versions/VMs/servers.
The code is English, as are the commands; I do not know if they work with other locales.
Will likely only run with Demo & Full/CLI versions of the application. So if you have the Promo, please try installing the demo.
The script assumes you have installed the application in the default directory
Admin privileges may be required.
Be wary of running software from unknown sources on the internet. Both *.bat files are in plain text. You are free to inspect the code in Notepad to ensure no shenanigans. You can also check with www.virustotal.com
The project will delete your cache and may change some application settings away from default. Fear not, a backup of your settings is saved first as "GlbBackup.bak.rcconfig".
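Regarding the multi-GPU/multi-HDD detection gap mentioned above, here is a hedged sketch of one possible workaround using plain wmic instead of the CUDA toolkit. Note it counts every video controller (including integrated graphics) and every physical drive, so the numbers would still need a sanity check:

    @echo off
    rem Sketch only: count video controllers and physical drives with wmic.
    rem This avoids the CUDA toolkit but also counts integrated graphics.
    for /f %%g in ('wmic path win32_VideoController get name /value ^| find /c "Name="') do set "GPUCOUNT=%%g"
    for /f %%d in ('wmic diskdrive get model /value ^| find /c "Model="') do set "DISKCOUNT=%%d"
    echo Detected %GPUCOUNT% video controller(s) and %DISKCOUNT% physical drive(s).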
If you looked at the code and question why it's such a mess, why I did it that way and why it took me so long - me too. I'm no expert.
If you have made a suggestion and I ignored or refuted it, sorry. If you think it is important, try a different way to convince me, I may not have understood. This project is for the benefit of us all, and my opinion is just one of many. Everyone's input and suggestions are valued.
Please, if you have time, give it a go and report back. If it does not behave as expected, please explain what happened with as much info as you can.
Thank you
Edit:
Changelog - minor edit to the code to allow the CapturingRealityBenchmark folder to be located anywhere, not restricted to the desktop. I found when testing on a Mac/Parallels VM that virtual paths/directories did not work so well, so ideally it should be located in a real location.
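For anyone curious, the location-independence change boils down to resolving paths from the script's own folder rather than a hard-coded desktop path - roughly this pattern (variable names here are just for illustration):

    @echo off
    rem %~dp0 expands to the folder benchmark.bat itself lives in (with a
    rem trailing backslash), so no fixed Desktop path is needed any more.
    set "BENCHROOT=%~dp0"
    set "IMAGEDIR=%BENCHROOT%Images"
    echo Using images from "%IMAGEDIR%"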
Well done, Ivan, much appreciate your efforts. My machine is chewing down a reconstruction presently, will run when RC is freed up. Don’t we wish we could save any process mid-stream to resume at a later time.
Unfortunately, for the moment the version posted will not work with the promo due to its restrictions on CLI instructions - one of the caveats of the more accessible price point.
In my work-in-progress code, it does check which license type the user has; longer term, having a less detailed bench running on the promo version may be possible.
Compromises are always an issue, and getting the most detailed & accurate data took higher priority.
I am currently unable to test the effect of installing the demo when the promo is installed.
Of course, you and Götz went through that previously. I'll await word on further instructions. I've just ordered an upgrade to motherboard, RAM, and more SSD disk space for headroom; it would be nice to compare before and after.