Hi @WillBarley
My colleague tested this using a similar dataset and he was able to create a texture.
Can you try using a different style in the unwrap settings?
Also, does using the other unwrap method work for you?
Can you check the CUDA driver on your machine?
Can you also try disabling some of the GPUs and keeping, for example, just two of them for RealityCapture?
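RealityCapture selects its GPUs in the application settings, so this is only a generic CUDA-level way to sanity-check such a test. A minimal sketch, assuming a standard CUDA toolkit install, that lists the devices a CUDA process can see; launching it with the CUDA_VISIBLE_DEVICES environment variable set to e.g. 0,1 should limit it to two GPUs:

```cpp
// list_devices.cu - enumerate the CUDA devices visible to this process.
// Build: nvcc list_devices.cu -o list_devices
// Running with e.g. CUDA_VISIBLE_DEVICES=0,1 limits a CUDA process to two GPUs.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("Visible CUDA devices: %d\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, i);
        std::printf("  [%d] %s, %.1f GB\n", i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```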
I’ve not seen the “102” number before - what does it mean?
Why does it suggest updating my CUDA driver? I’m on a pretty recent version.
Any idea what’s going wrong @OndrejTrhan ?
Yes,
Geometry unwrap works
Check CUDA drivers - what would you like me to check?
In my post you can see the CUDA driver version (12.4) - what else can I check?
Hi Will,
I found this about the mentioned errors: CUDA_ERROR_OUT_OF_MEMORY = 2 - “The API call failed because it was unable to allocate enough memory or other resources to perform the requested operation.”
Can you also try to use only one GPU (as a test case)?
12.4 is a CUDA toolkit version; there should already be a newer one. What I meant is the driver version (Download The Latest Official NVIDIA Drivers). It is possible that using the newest driver, or one of the older ones, will help.
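In case it helps to keep the two numbers apart, a minimal sketch (assuming a standard CUDA toolkit install) that prints the CUDA version a program was built against and the highest CUDA version the installed driver supports; the driver’s own version number (e.g. 5xx.xx) is the one nvidia-smi and the NVIDIA download page show:

```cpp
// cuda_versions.cu - print the CUDA runtime (toolkit) version this program was
// built against and the highest CUDA version the installed driver supports.
// Build: nvcc cuda_versions.cu -o cuda_versions
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int runtimeVer = 0, driverVer = 0;
    cudaRuntimeGetVersion(&runtimeVer);  // e.g. 12040 means CUDA 12.4 toolkit
    cudaDriverGetVersion(&driverVer);    // highest CUDA version the driver supports
    std::printf("Runtime (toolkit) CUDA version: %d.%d\n",
                runtimeVer / 1000, (runtimeVer % 100) / 10);
    std::printf("Driver-supported CUDA version:  %d.%d\n",
                driverVer / 1000, (driverVer % 100) / 10);
    return 0;
}
```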
Is this error also happening on another dataset? Can you try to simplify your model and unwrap that smaller model?
Is there a way that I can lookup those error messages like you?
On this one: CUDA_ERROR_OUT_OF_MEMORY = 2 - “The API call failed because it was unable to allocate enough memory or other resources to perform the requested operation.”
Is this GPU memory or system memory? I usually see system memory rise to 100% and then the process fails, so I have assumed that it is system memory that is running out. Is that correct?
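For reference, the numeric codes are the CUresult values of the CUDA driver API (they are listed in cuda.h and in the driver API documentation). A small sketch, assuming the CUDA toolkit headers are installed, that maps code 2 back to its name and queries free vs. total device memory; note that CUDA_ERROR_OUT_OF_MEMORY refers to allocations made through CUDA, which are normally on the GPU (pinned host buffers can also fail this way), so it is not the same thing as system RAM filling up:

```cpp
// cuda_error_and_memory.cu - look up a numeric CUDA error code and report how
// much device (GPU) memory is currently free on the selected GPU.
// Build: nvcc cuda_error_and_memory.cu -o cuda_error_and_memory -lcuda
#include <cstdio>
#include <cuda.h>
#include <cuda_runtime.h>

int main() {
    // Map the numeric code from the log (2) back to its name and description.
    const char *name = nullptr, *desc = nullptr;
    cuGetErrorName(static_cast<CUresult>(2), &name);
    cuGetErrorString(static_cast<CUresult>(2), &desc);
    std::printf("Error 2: %s - %s\n", name ? name : "?", desc ? desc : "?");

    // Free vs. total memory on the current GPU (device memory, not system RAM).
    size_t freeBytes = 0, totalBytes = 0;
    if (cudaMemGetInfo(&freeBytes, &totalBytes) == cudaSuccess) {
        std::printf("GPU memory: %.1f GB free of %.1f GB\n",
                    freeBytes / 1073741824.0, totalBytes / 1073741824.0);
    }
    return 0;
}
```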
I’ve done some tests and it seems that switching to
Unwrap Style = Maximum Texture Count
and setting the max count to a very high number
allows us to successfully use Mosaic to do Unwraps and Texture.
I’m running some more tests now to confirm.
Good News!
@OndrejTrhan I also noticed a bug:
using the -unwrap command ignores the current unwrap settings. You can see it if you do this:
I can see that at least one GPU ends up with a lot of memory allocated, and it stays there until you restart RealityCapture or start a new process in RealityCapture.
Maybe I don’t see 100% GPU memory usage in the chart because it happens quickly: the chart gets a data point every 10 seconds, so perhaps it OOMs within a 10-second window between data points.
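To test that theory, a rough sketch (assuming the CUDA toolkit is installed; the 500 ms interval is arbitrary) that polls free GPU memory on every visible device much faster than the 10-second chart, so a short-lived spike should show up; note the monitor itself creates a small CUDA context on each device:

```cpp
// gpu_mem_monitor.cu - poll free GPU memory on every visible device at a
// sub-second interval, to catch a short out-of-memory spike that a chart
// sampling every 10 seconds could miss. Stop with Ctrl+C.
// Build: nvcc gpu_mem_monitor.cu -o gpu_mem_monitor
#include <chrono>
#include <cstdio>
#include <thread>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    while (true) {
        for (int dev = 0; dev < count; ++dev) {
            cudaSetDevice(dev);  // creates a context on this device if needed
            size_t freeB = 0, totalB = 0;
            cudaMemGetInfo(&freeB, &totalB);
            std::printf("GPU %d: %6.2f / %6.2f GB free   ",
                        dev, freeB / 1073741824.0, totalB / 1073741824.0);
        }
        std::printf("\n");
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
    return 0;
}
```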
Why would it OOM? Perhaps something tries to allocate “all free GPU memory” on one of the GPUs while, at the same time, some other process is using that GPU. I’m not sure how that kind of race condition would occur, but maybe so. What other processes are using the GPU? Other apps displaying things on screen, because the GPU is in the WDDM driver model so that it can render to the screen in RealityCapture - so perhaps putting the GPUs in TCC mode, and perhaps running headless, would help avoid a race condition?