You need to install the demo version.
And I don't think any dual video cards have been tested yet, so you'll probably find issues.
OK, so I threw 687 video frames into the folder, and the demo version runs and gives me:
NSRjecross
12:46:49.52
248
20
216
12:55:24.72
NumberOfProcessors=2
Name=Intel(R) Xeon(R) CPU E5-2623 v4 @ 2.60GHz
NumberOfCores=4
NumberOfLogicalProcessors=8
Name=NVIDIA GeForce GTX 1080
TotalPhysicalMemory=137357156352
Speed=2400
Model=ATA ST2000DM001-1ER1 SCSI Disk Device
Size=2000396321280
Model=ATA SAMSUNG SSD SM87 SCSI Disk Device
Size=512105932800
Size=0
Version=10.0.15063
PeakUsage=20
ECHO is off.
Notes: I have 2 GTX 1080 (8 GB) GPUs (not SLI), and the pictures were pretty horrible. If this matters to the benchmark times, I can run some other sets, or use a standard group if there is one.
Many thanks, Jennifer & Chris. All feedback is appreciated.
It's useful to see how it runs on dual Xeons etc., and on multiple GPUs, as I currently have only one.
The results are not as I expected; only 3 times were recorded. Did it actually generate a completed textured model?
I assume you used version 0.2 (the one on this page; I should tidy up the links to avoid confusion).
The good news is my workstation arrived today, so once it's set up I'll get working on uploading a new version, which should deal with multi-GPU, etc.
Hi Ivan - Thanks for all your work.
Yes, it seemed to generate a model, though that set of images does generate two components (it only built the model for the major component). It is the same image set I've linked in the green colorize bug report if you want to play with them.
Are we maybe hitting your "too fast" time limitations? The generated model is only about 100k polys. I've got a mineral sample that generates models with ~40 million tris that I can re-run in your benchmark script if that would be better. (https://sketchfab.com/models/c76deda5006744ba9f1a5129750b1a48)
Hello Ivan,
Thanks for your continued efforts. Though I'm past learning anything new from comparing performance between my previous and upgraded system specs, I saw utility in using the benchmark to isolate a particularly vexing issue I'm encountering in the Promo version, described here, wherein I'm a) able to align a set of 500 images with default settings, but b) unable to reconstruct the component, RC always crashing with the "invalid function call" red flag of death. I wanted to see how the Demo version might behave. Note, I had uninstalled and reinstalled Promo, as well as cleared the cache and Shift-held while opening RC Promo to reset settings, so that's about as tabula rasa as you can go. That said, I suspect the settings were not actually reset, as I saw my assigned cache drive letter hadn't changed. This goes off topic, so back to this thread, but I thought it useful to provide context that might explain the strange results I got from your benchmark.
It took me a few steps to get everything working. Not sure why it downloaded the Demo version over Steam this time, as I used the same installer I had saved to my downloads folder previously. Every time I launched benchmark.bat with no indication that Steam was doing anything, I'd get the txt file open with nothing moving forward; I had to hunt around to finally get Steam to load everything and ready RC Demo.
Once I got benchmark.bat to open RC Demo and step through the workflow, I kept an eye on alignment and reconstruction, especially when it came to comparing depth maps, as this is always where I've run into the problems leading to the invalid function call issue. Interestingly, RC Demo was able to get through the depth maps and then reconstruct all the parts, though I missed the show that followed, as it took place in the wee hours. I awoke to find RC closed; isn't that different from before? I thought I remembered seeing the finished model, but maybe not. Here's the results.txt, which shows "ECHO is off." (Why?)
I surely don't need to derail this thread with a discussion so overly specific to my issues, which I suspect relate to something generated by RC (a bug?) that only manifests during work in RC yet operates at the system level, since both the reinstalled Promo and the Demo present problematic, though not identical, behavior. That said, I do believe sharing this might trip your understanding of which possible culprits belong on the table; I'm all ears as to what you make of this news and any suggestions moving forward. I'll then move along and take what I learn back to the other thread with Kevin Cain and Götz Echtenacher.
Again, many thanks for this project.
Hi Benjy
Thank you for the feedback. As I change things to improve the tool, I inadvertently break the code somewhere.
I know that it working for me alone is not enough, so having detailed responses like yours is helpful.
To try and answer some of your questions:
The benchmark first tries to load the Steam demo version; if that is not present, it will then attempt to load the command-line version or the command-line demo.
If things are installed in non-default directories, then for the moment it won't work.
I currently only have demos, so I need to rely on others' feedback, or just guess how it may behave. (Conceptually, the detection works something like the sketch below.)
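A minimal batch sketch of that fallback, assuming typical default install paths (the real benchmark.bat may check different locations):

```
@echo off
rem Detection order: Steam demo first, then the standalone CLI build.
rem The paths below are illustrative defaults, not confirmed ones.
set "RC_EXE="
if exist "C:\Program Files (x86)\Steam\steamapps\common\RealityCapture Demo\RealityCapture.exe" (
    set "RC_EXE=C:\Program Files (x86)\Steam\steamapps\common\RealityCapture Demo\RealityCapture.exe"
) else if exist "C:\Program Files\Capturing Reality\RealityCapture\RealityCapture.exe" (
    set "RC_EXE=C:\Program Files\Capturing Reality\RealityCapture\RealityCapture.exe"
)
if not defined RC_EXE (
    echo RealityCapture was not found in the default locations.
    exit /b 1
)
```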
Regarding the cache drive: as I am unable to query the application as to which cache drive is set (although I could set the drive letter by command line),
I added the user selection option. This does not change what the application uses; it just records the drive type selected.
I spent a long time trying to figure out how to determine the drive type from a drive letter, but I could not. Maybe using diskpart is possible; however, I have purposely used only commands that interrogate, and none that could by accident change someone's partitions etc. (Some examples of safe, read-only queries are below.)
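For reference, a couple of read-only queries of the sort I mean; note that neither reliably maps a drive letter to SSD vs HDD, which is why the benchmark just asks the user:

```
rem Lists logical drives with their type (3 = local disk, 2 = removable);
rem this does not distinguish SSD from HDD.
wmic logicaldisk get DeviceID,DriveType,VolumeName

rem On Windows 8+ PowerShell can report the media type (SSD/HDD),
rem but per physical disk, not per drive letter:
powershell -Command "Get-PhysicalDisk | Select-Object FriendlyName, MediaType"
```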
Regarding the finished model at the end: I require that the application closes when the final texturing stage is done, so the code knows when to move on to the next stage and compile the records. The file is, however, saved, so you can reopen the completed project and view the model. It is currently simplified to 1 million polys as an arbitrary value.
Regarding the "ECHO is off." message: this happens when the tool cannot obtain the benchmark results, or fails to actually run the application at all. It's just my poor coding's way of saying no result was found (a tiny illustration of where the message comes from is below).
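In batch, echoing a variable that ended up empty expands to a bare echo, which prints the echo state instead of a value:

```
@echo off
rem RESULT stays empty because (in this scenario) the query failed.
set "RESULT="
echo %RESULT%
rem The line above expands to a bare "echo" and prints: ECHO is off.
rem A guard would avoid the confusing output:
if defined RESULT (echo %RESULT%) else (echo No result found)
```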
I do not know when you last downloaded the benchmark tool; however, yesterday I uploaded a couple of revisions to v4 (v4.2, v4.3). The first upload of v4 was reported by the Capturing Reality team to generate the same null values you experienced: a recent Windows update slightly changed how it allows the Task Manager to be queried, and I had also added a space where there should not be one. Do you know which you downloaded? Hopefully the latest code I put up yesterday has already fixed the issue you are having.
Hopefully you have time to download the latest v4.3 and try again. Maybe just chuck 10 images in for a quick run to see if it completes correctly; then you can play about with your real data set.
As for the issues you are having that are not related to the benchmark: I have found that Windows profiles can get corrupted, and no amount of re-installing applications helps. However, creating a new user profile gives that user a fresh registry, and this has often let me get things working without a re-install. It may be worth a try.
All feedback is appreciated
Ivan
Thanks, Ivan, that all makes good sense. I've grabbed your 0.43 version from the link and will try again. Interesting what you say about a corrupted Windows profile; I'll create a new one to see how that works out. Could RC be the cause of that corruption, or is this encountered in Windows generally? I've only recently moved to Windows from Mac OS.
I doubt RC itself would cause the issue. Even Windows updates can make things go weird, or uninstallers can fail to fully uninstall something. It's worth a try for the 5 minutes it takes.
Alpha 0.43 can be downloaded here: Dropbox - CapturingRealityBenchmark.zip
The benchmark requires CLI access, so either a demo, the Steam demo, or the full license (the Promo is a no-go).
You can quickly install the Steam demo alongside the Promo without issues.
Instructions:
1) Unzip.
2) Run the benchmark.bat file.
3) You will be asked to enter an identifier/nickname at the start, for a note if you wish, and to select the drive you use as the cache disk (a rough sketch of those prompts follows below).
4) Once the benchmark has run its course, you can review the results in the results.txt file, which should pop up automatically.
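The prompts themselves are plain batch set /p questions, roughly like this (the variable names and wording here are made up for illustration; the real benchmark.bat may differ):

```
@echo off
rem Illustrative only - not the actual benchmark.bat prompts.
set /p USERNAME=Enter an identifier/nickname: 
set /p COMMENT=Optional note (press Enter to skip): 
set /p CACHE_DRIVE=Which drive letter do you use as the cache disk? 
```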
Hello Ivan,
I tested on 0.43 and my project made it through.
So, it's something about my Promo version after being uninstalled and freshly installed. I tried under a new user; no go. I did notice under the new user that the preference for cache location in RC persisted from the other user, so does that imply something may still bleed through that's causing the crash? How can one be sure everything is uninstalled when there appears to be only one crack at it? Thanks.
Benjy
Hi Benjy,
I'm glad it worked this time; thank you for posting the results.
Some applications create profiles on a per-user basis, and some don't. It's fine either way. What that does help rule out is your user profile being the issue.
Another possibility is that the uninstaller hasn't/doesn't fully uninstall everything and leaves certain values behind. This again is often not a problem; however, at times it can be, if your software plays up.
The catch with trying to fix that is that playing with the registry can cause more harm than good.
It may be worth using CCleaner to uninstall, and then running its registry cleaner tool (it may need to be run a few successive times). It is a well-respected tool; be sure to get it from the official Piriform site. If in doubt, don't. Sometimes it can cause more harm than good.
Thanks Ivan, great advice. Even without a second PC, the benchmark at least proved it's not the photographs, and it's not my PC generally speaking. I'll give CCleaner a whirl.
Benjy
I'd like to add my appreciation for this project. If I could understand the detail of the expert discussion to any significant degree, I would be right in there, keen to participate and learn. As it is, I can only watch and be amazed at the amount of effort needed just to prepare to collect the resulting data, let alone to analyze it and make sense of it.
And hope and pray that the eventual distillation of guidance will be made public. Because, fast as RC is, speed is going to be make-or-break for an independent operator trying to get established, initially offering a modest local service via an efficient, reliable workflow, without access to any high-finance render farm. Having the optimum standalone machine will be one of the keys.
You are welcome.
Don’t be fooled into thinking we are experts or have the remotest clue what we are talking about or doing.
You are more than welcome to add your 2 cents.
Fear not, the intention of the benchmark is as you hope for.
There has been a lot of talk from me, and not much evidence of my web-based results page. I have struggled immensely with that part. So many solutions that claimed to offer the ability to upload and then display data were failures and did not deliver. I think I have it cracked… mostly.
This is as good a time as any to share where I am. There are currently 2 parts:
1) the upload, and 2) the public results.
Getting the publicly viewable results shown in a clear, presentable manner that can be analysed and interrogated was an important part.
Here is where I am with that. The data is drawn live from the Google spreadsheet to which results are uploaded,
and it updates accordingly. The pie charts etc. are not final, and I will change the metrics displayed/used; it's just a test to get it working, and it will show more useful data for your viewing pleasure…
Note: the contents were fabricated by me editing the rawresults.txt files that were uploaded each time, and don't represent real results yet.
https://datastudio.google.com/reporting/1LVbEcggzC87TWXaKTDczwks2pRLM51b8
The uploading part is currently not as pretty (which will change) and is here:
https://script.google.com/a/ivanpascoe.com/macros/s/AKfycbwQi8gvGNy83YEhrNZykm_uLJwgUbGOdrSnauWJC1FNrLE8OpJL/exec
I'd very much appreciate anyone trying to upload some data, using the rawresults.txt that is generated by the benchmark. Please use that file rather than results.txt, as the latter will add garbage to the spreadsheet; I have not yet added code to reject the incorrect file. Yes, the results will be kind of useless for now, as we are all using different datasets; however, at the moment I need help checking that the upload process works correctly and that the results are displayed properly.
Known issues:
Results are shown instantly on the upload page; however, they can take a minute or more to appear on the pretty public results page, and you will need to manually refresh the page for your uploaded data to appear. This is a limitation of the platform: it caches data on the server to save on resources. Poor Google and their lack of resources…
It works in Chrome; I do not know about other browsers.
The rawresults.txt must be selected for the upload, or terrible things may happen (not that terrible, but it will make a mess of the spreadsheet with garbage data).
The chart is full of made-up data. For now, the fact that results can be generated, uploaded, displayed and analysed is the important part.
You can download the results for your own analytical pleasures; there is a hidden button next to the word "Total".
As always, feedback is really appreciated.
For me, Christmas followed by year-end accounts always wipes out a month of my 'working' life, so I can understand why this vital and timely topic has gone quiet.
Any update? Shopping for some 'ideal' hardware is getting closer, so I'm looking forward to 'the answer'!
Hi Folks,
Here are the results from the benchmark I just ran with 1159 images at 17 MP. I have uploaded the rawresults.txt to the spreadsheet.
Username=RC TEST - THANOS SERVER
Comment=1159 images
Version=1.0.3.3939 Demo RC
Alignment=245
Depth (GPU)=15.272
Model=1.615
Simplify=0.055
Texturing=5.528
Total Time=266
CPU=Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz * 2
GPU=GeForce GTX TITAN X * 2
Cache Drive=INTEL SSDSC2BA400G3 ATA Device
RAM (Bits)=206158430208
Ram Speed=2133
SYSTEM INFO:
CPU1: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
CPU2: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
Number Of Processor Packages (Physical): 2
Number Of Processor Cores: 36
Number Of Logical Processors: 72
Motherboard: Supermicro X10DRG-Q
Chipset: Intel C612 (Wellsburg-G)
Memory: 256 GBytes @ 933 MHz, 13-13-13-31, DDR4-2132 / PC4-17000
GPU1: NVIDIA GeForce GTX TITAN X 12288 MBytes of GDDR5 SDRAM [Hynix]
GPU2: NVIDIA GeForce GTX TITAN X 12288 MBytes of GDDR5 SDRAM [Hynix]
Drive: INTEL SSDSC2BA400G3 400GB, 381,554 MBytes, Serial ATA 6Gb/s @ 6Gb/s
Network: Intel Ethernet Server Adapter I340-T4
OS: Microsoft Windows 7 Professional (x64) Build 7601
There is some confusion with the RAM specs here, as the hardware info shows what is physically installed (256 GB), although the OS can only use 192 GB. It's currently clocked at 933 MHz, not the 1066 MHz suggested by the benchmark test.
Hello George,
That's one mighty PC you've built there. It's beyond me to extrapolate from your benchmark results to my world, i.e. working with a different image count @ 42 MP and very different system resources, so I'm more interested in your general response to some questions geared at evaluating things like a dual-socket CPU and the performance of the Titans.
Ivan (thanks again, Ivan, for putting this conversation and utility on its feet; see how we benefit?) suggested early on that he'd take fewer, faster cores over more, slower ones. I see you went with dual 2.3 GHz CPUs, 18 cores each. Maybe you had other considerations in your choice, e.g. running two CPUs cooler, or the needs of other apps, but did you run any comparisons with a single CPU and/or a faster CPU to inform that perspective?
It's been my impression that a Titan verges into diminishing returns relative to a 1080 Ti, the cost/value being hard to justify. Time is money, so I assume your projects justify pulling out a sledgehammer - correction - two sledgehammers. What kind of memory pressure are you seeing in Resource Monitor on the first GPU, and on the second?
How about RAM? At a fraction of your whopping 256 GB, I rarely see memory pressure in RC, really only after reconstruction and texturing, where GPU memory doesn't support Sweet display quality. I suppose that's at least one area where your Titans rock.
I'm still relatively new to Windows PCs. I came up on Macs from the mid-80s; I do miss the construction quality, and in the case of the Mac Pro "trashcan" I've yet to see a slicker approach to thermal regulation. I'm currently running an i7-7820X in an ITX case, the smallest form factor supporting the 980 Ti short of a laptop (didn't want the heat). This ITX is now gasping its last breath, thanks to a Delta agent who performed an atomic body slam when dropping my well-padded storm case onto the belt from chest height. (I later found the GPU loose inside the case, its rear metal mounting plate peeled back with the mounting screws ripped clean out! There were so many loose connections that I had to rebuild from scratch 3 times to finally get video up and the operating system to load.) I'm now getting BSODs; I believe a weakness on the motherboard has since broadened into a tiny bridging gap hiding somewhere. So, I'm now going with a Boxx laptop I can take as carry-on and use in the field to validate a day's capture and for demos, but for the desktop I'm pulling out my tower, ready for parts. I'd like to repurpose my i7-7820X and possibly bring in a second one. Thanks for your thoughts.
Benjy
Wonderful if this important thread comes back to life!
Hello Ivan,
Anybody home? I'd like to revisit this topic: I'm looking at spec'ing a new PC, and I'm curious what you think about any conclusions one could draw from the albeit limited participation you worked so hard to make possible. What you did with this benchmark utility is really important work, now gathering dust. Ugh.
It was my impression that the number of cores wasn't valued by RC as much as clock speed, right? What was the actual evidence for that? It would appear the i7 line, with base frequency topping out at 4.0 GHz (i7-8086K), should then win over the fastest of the i9 class, topping out at 3.6 GHz (i9-9900K), though their overclock speed is the same at 5.0 GHz. We'd then need to account for the difference in their respective core/thread counts: 6 cores/12 threads for the i7-8086K vs. 8 cores/16 threads for the i9-9900K.
I then wonder about a CPU with a slower base frequency than either of those, the i9-9960X at 3.1 GHz, which sports 16 cores/32 threads. Multiplying the base frequency by the thread count for these three processors, you get:
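(Spelling that multiplication out, with the clocks in MHz so the integer math works; the figures are the spec-sheet values quoted above:)

```
@echo off
rem Base clock (MHz) x thread count, purely as a rough throughput proxy.
set /a "i7_8086K=4000*12"
set /a "i9_9900K=3600*16"
set /a "i9_9960X=3100*32"
echo i7-8086K:  %i7_8086K%
echo i9-9900K:  %i9_9900K%
echo i9-9960X:  %i9_9960X%
rem Prints 48000, 57600 and 99200 respectively.
```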
Those totals roughly track the proportions between their Passmark numbers.
So, I'm curious about the reality of how these three CPUs stack up in RC. Is there a sweet spot, as suggested before, and would it be the middle guy, the i9-9900K? Do we see diminishing returns with the i9-9960X? Note, there's one model with yet more cores/threads, the i9-9980XE, at a slightly lower base frequency of 3.0 GHz; even with its 18 cores/36 threads, Passmark shows it coming in beneath the i9-9960X, which seems to underscore the diminishing-returns principle. Even so, the 150% jump in performance between the i9-9900K and the i9-9960X would seem to outweigh whatever is lost, apples for apples, to this diminishing-returns thing. No?
Many thanks for your noggin.
Benjy
Hi All & Benjy
I have been hiding. I am working on some fancy new tools to help. I have not forgotten you all. My license expired too, which didn't help.
Your question is tough to answer.
From my testing (which isn't concluded), there is no easy answer.
The issue is as follows.
The initial benchmark gives a rough idea of the performance to be had. Of course, the more cores and the more MHz, the faster things go.
However, it isn't so simple.
The software behaves differently depending on the dataset it's given.
As you know, the computational stages are split into:
a) alignment - a predominantly fast stage
b) point cloud creation - this is where most of the calculations are done
c) texturing
d) mesh export
However, there are multiple stages within point cloud creation, such as depth calculation (GPU accelerated). Some of these interim stages are single-threaded and some are multi-threaded, so some stages win out on core count, and others on pure GHz.
Different settings within the application, image resolution, number of images, and image quality (not just sharpness, but enough quality that the software has an easy time) can all throw the weighting one way or the other.
Roughly speaking, when dealing with small, low-res datasets, MHz is king.
If dealing with many images - 200+ 40 MP images, for example - core count wins.
There are diminishing returns with multi-core systems, and even more so with dual-CPU systems. My old world-record-breaking dual 14-core Xeon system is slower than my new one with almost half the cores. There are so many variables in system architecture that make a difference.
I have the i9-9980XE with all cores @ 4.5 GHz (some motherboards allow all-core turbo as the default). However, it was silly expensive, hot and power hungry, and it sits wasted most of the time. I love it. Enthusiast things have drawbacks.
AMD may be worth considering too. They will be releasing the 7 nm parts this summer and are taking the lead over Intel in price/performance where high core count is key. I'd be cautious of the current gen, but the new ones on the horizon look very interesting.
The GPU processing part does not tax the GPU to the fullest, and you'd be much better off getting a regular gaming card, perhaps a 1080/Ti or better. There may be circumstances where multi-GPU can help; however, it's such a small amount of the overall calculation time that it would depend on the project you're working on. I don't believe RC takes your video RAM into consideration, and the Sweet setting is hard-coded, so extra memory is wasted (I could be wrong here).
When waiting a day or more for a test to finish, 5% here and 10% there start to make a significant difference.
Tweaking BIOS settings, RAM timings, motherboard choice and Windows setup all make a difference too.
SSDs are essential these days; however, past a certain point they make little difference to performance. No need to get some fancy, expensive NVMe thing. However, be sure it can sustain long-term writes; many drives fall to about 30 MB/s once their cache is full.
So to conclude: the best system will be different for different people, depending on the datasets they commonly use.
What type of datasets are you throwing, or planning to throw, at RC?
Ivan