Luoshuang's GPULightmass

Hi @Miguel1900,
There is a whole manual from Nvidia about this, and as a hardware/software engineer I found it a good read!

NVTechNotes_Notebook-003.book (nvidia.com)

> However, if the application or the Optimus driver are configured to use the NVIDIA High Performance Graphics hardware through one of the methods listed in Methods That Enable NVIDIA High Performance Graphics Rendering on Optimus Systems, then information about the NVIDIA High Performance Graphics hardware and its corresponding capabilities is made available to the application.
>
> The NVIDIA driver monitors certain runtime APIs and returns the appropriate NVIDIA hardware information through calls made by the application to those APIs.
>
> The following are the APIs that applications can use:
>
> - DirectX 9 and above – IDXGIAdapter::GetDesc
> - OpenGL – glGetString(GL_VENDOR) and glGetString(GL_RENDERER)
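Just to make that concrete, here is a minimal sketch (my own example, not from the manual, assuming Windows/MSVC) that enumerates the adapters and reads each one's IDXGIAdapter::GetDesc. On an Optimus system with one of those methods active, the NVIDIA card shows up here:

```cpp
// Minimal DXGI adapter dump (Windows/MSVC). Build: cl /EHsc dxgidump.cpp
#include <dxgi.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

int main() {
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return 1;

    IDXGIAdapter* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC desc = {};
        adapter->GetDesc(&desc);
        // Vendor IDs: 0x10DE = NVIDIA, 0x1002 = AMD, 0x8086 = Intel.
        wprintf(L"Adapter %u: %s (vendor 0x%04X)\n", i, desc.Description, desc.VendorId);
        adapter->Release();
    }
    factory->Release();
    return 0;
}
```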

The trick is finding someone at Epic to implement the code:
Trick to tell AMD and Nvidia drivers to use the most powerful GPU instead of a lower-performance (such as integrated) GPU (github.com)
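For reference, what that linked trick boils down to (as I understand it) is just two exported globals in the application's .exe, which the drivers scan for at launch:

```cpp
// In the .exe (not a DLL): exported globals that the NVIDIA Optimus and
// AMD PowerXpress drivers look for; a non-zero value requests the
// high-performance discrete GPU instead of the integrated one.
extern "C" {
    __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001; // DWORD
    __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}
```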

The code that needs implementing in CUDA is covered here:
c - How can I get number of Cores in cuda device? - Stack Overflow
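The idea in that answer is that CUDA reports the SM count but not the cores per SM, so you multiply by a small per-architecture table. A minimal sketch of that approach (the table is partial and the fallback value is my assumption; extend it for your GPU's compute capability):

```cpp
// Build with: nvcc cores.cu  (host-only code, just links the CUDA runtime)
#include <cstdio>
#include <cuda_runtime.h>

// FP32 cores per SM by compute capability. Partial table; the
// fallback of 64 is an assumption, extend it for your architecture.
static int coresPerSM(int major, int minor) {
    switch (major * 10 + minor) {
        case 61: return 128; // Pascal, e.g. GTX 1050
        case 70: return 64;  // Volta
        case 75: return 64;  // Turing
        case 86: return 128; // Ampere
        default: return 64;  // assumption: conservative fallback
    }
}

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) return 1;
    int cores = prop.multiProcessorCount * coresPerSM(prop.major, prop.minor);
    printf("%s: %d SMs, ~%d CUDA cores (cc %d.%d)\n",
           prop.name, prop.multiProcessorCount, cores, prop.major, prop.minor);
    return 0;
}
```

On a GTX 1050 (compute capability 6.1, 5 SMs) that works out to the expected 640 CUDA cores.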
I am trying, and I am not even asking them to use DirectX DXR :rofl::joy::grin::grin:

Let's face it, they are :sleeping::sleeping::yawning_face:

This, unfortunately, is low-level stuff, not DXR, DX12, etc.

If you really want to see how fast/hot your GPU can go, try this. It even works on a GTX 1050, getting 67 FPS!

FurMark - GPU Stress Test | Tutorials (tenforums.com)