How to use CUDA file in UE4?

Hi folks,

I’ve been playing with CUDA outside of UE4, using the Thrust library with C++ to write CUDA functions in a .cu source file. I want to use this in UE4 for parallel processing on large arrays (it already works as a standalone app).
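For context, here is a minimal sketch of the kind of Thrust-based .cu file described above; the function and type names are illustrative, not from the original post, and it assumes the CUDA Toolkit is installed:

```cpp
// ParallelOps.cu -- hypothetical example, compiled by nvcc.
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/copy.h>

// Functor executed on the GPU for each element.
struct DoubleIt
{
    __host__ __device__ float operator()(float x) const { return x * 2.0f; }
};

// Double every element of a host array using the GPU.
void DoubleArray(float* Data, int Count)
{
    thrust::device_vector<float> Device(Data, Data + Count); // host -> GPU copy
    thrust::transform(Device.begin(), Device.end(), Device.begin(), DoubleIt());
    thrust::copy(Device.begin(), Device.end(), Data);        // GPU -> host copy
}
```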

I am new to UE4 and still feeling my way around. I am using VS2015 with UE4 and have managed to write C++ scripts in the engine, and pass variables between them and blueprints. So far so good.

My question is: How can I call/use the functions in the .cu file from a C++ file in UE4?

I mean, although the code looks very much like C++, and I can write and run regular C++ code inside the .cu file (built as a Windows console app from VS2015), the .cu file is compiled by the CUDA compiler (nvcc) rather than the normal C++ compiler, and from there on I am lost as to how I would use the CUDA function(s) I have written within UE4.

Is there a way to reference a .cu file’s functions from a C++ file in UE4?
Or do I need to make a DLL from the .cu and then access it from a C++ file in UE4?
Anyone know of an example in UE4 of what I am trying to do?

I don’t really understand compiling, so any source of simple, practical information that might help me get going, rather than a deep, exhaustive course on compiling, would be appreciated.

Thanks!! 🙂

UPDATE: Going to answer my own question (partly):

I am trying the static library approach. Anyone who knows better (almost everybody here), please let me know of any simpler/more effective methods.

So, I followed a Microsoft tutorial on making a C++ static library, then used the same method to create a CUDA library: instead of a C++ project, I used the CUDA 8.0 Runtime project template (available in VS after installing the CUDA Toolkit), included a C++ header file, and compiled it to a .lib.

I then managed to incorporate the CUDA library in a C++ Win32 Console project (outside of UE4), and confirmed that the C++ project can call and run the functions inside the .cu file in the .lib.
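The key trick with this approach is keeping the header free of CUDA types so the consuming C++ project never needs to see CUDA headers. A minimal sketch of the split, with hypothetical file and function names (not from the original post):

```cpp
// CudaLib.h -- plain C++ header the consuming project includes.
// No CUDA headers needed here; extern "C" avoids name-mangling issues.
#pragma once

extern "C" void AddArrays(const float* A, const float* B, float* Out, int Count);
```

```cpp
// CudaLib.cu -- compiled by nvcc into the static .lib.
#include <cuda_runtime.h>

__global__ void AddKernel(const float* A, const float* B, float* Out, int Count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < Count) Out[i] = A[i] + B[i];
}

extern "C" void AddArrays(const float* A, const float* B, float* Out, int Count)
{
    float *dA, *dB, *dOut;
    size_t Bytes = Count * sizeof(float);
    cudaMalloc(&dA, Bytes);
    cudaMalloc(&dB, Bytes);
    cudaMalloc(&dOut, Bytes);
    cudaMemcpy(dA, A, Bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, B, Bytes, cudaMemcpyHostToDevice);
    AddKernel<<<(Count + 255) / 256, 256>>>(dA, dB, dOut, Count);
    cudaMemcpy(Out, dOut, Bytes, cudaMemcpyDeviceToHost);
    cudaFree(dA);
    cudaFree(dB);
    cudaFree(dOut);
}
```

The UE4 side then only includes CudaLib.h and links the .lib, never touching nvcc itself.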

Next step will be using the CUDA .lib in UE4. Once that is working, I can start doing some performance comparisons, etc. I will update on success/failure, as I noticed one or two other similar questions on this topic, with little or no response.
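For anyone attempting the same, the UE4 linking step usually comes down to a few lines in the module's Build.cs (UE4's C# build configuration). This is a hypothetical sketch; the toolkit path and library names depend on your CUDA install:

```csharp
// MyProject.Build.cs -- illustrative fragment, not from the original post.
using System.IO;
using UnrealBuildTool;

public class MyProject : ModuleRules
{
    public MyProject(ReadOnlyTargetRules Target) : base(Target)
    {
        PublicDependencyModuleNames.AddRange(new string[] { "Core", "CoreUObject", "Engine" });

        // Adjust to your CUDA Toolkit version/location.
        string CudaPath = "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0";
        PublicIncludePaths.Add(Path.Combine(CudaPath, "include"));

        // CUDA runtime plus your own compiled static library.
        PublicAdditionalLibraries.Add(Path.Combine(CudaPath, "lib/x64/cudart_static.lib"));
        PublicAdditionalLibraries.Add("C:/Path/To/YourCudaLib.lib");
    }
}
```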

Hey,
How did you go with this?

I haven’t found a good way to do this either. I also plan to use the static library approach.

What about using shared GPU memory to let a standalone CUDA C++ application communicate with the UE4 engine? (I’ve managed to do this using CPU shared memory, but I’ll need some directions to do it on the GPU.) Does anybody know if it’s even possible?

Finally, a kind dude from Japan has created a tutorial on how to do this. (Use Google Translate.)

http://www.sciement.com/tech-blog/c/cuda_in_ue4/

Man, they beat me to it :(. It was on my to-do list. But cheers!
Now I will focus on IBM libraries.

I was able to link and use CUDA kernels in Unreal.
The biggest hurdle, though, is that I was not able to register an ID3D11Texture2D as a CUDA resource, which forced me to go the GPU->CPU->GPU route instead of GPU->GPU. That kind of defeats the purpose of using CUDA in some scenarios.
Do you have any idea how to create a UTexture2D from CUDA memory without going through the CPU?
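For reference, the CUDA runtime does expose a D3D11 interop API for exactly this. A hedged sketch of the registration path, assuming the engine is running on the D3D11 RHI and you can obtain the native ID3D11Texture2D pointer (e.g. from the RHI texture's GetNativeResource()); whether registration succeeds depends on how the engine created the texture:

```cpp
// Hypothetical GPU->GPU path using CUDA's D3D11 interop; names below are
// real CUDA runtime API calls, but the surrounding setup is illustrative.
#include <cuda_runtime.h>
#include <cuda_d3d11_interop.h>
#include <d3d11.h>

void WriteToTextureFromCuda(ID3D11Texture2D* NativeTex)
{
    // Register the D3D11 texture with CUDA (once, at setup time).
    cudaGraphicsResource* Resource = nullptr;
    cudaGraphicsD3D11RegisterResource(&Resource, NativeTex,
                                      cudaGraphicsRegisterFlagsNone);

    // Map it each frame to get a cudaArray CUDA can write into.
    cudaGraphicsMapResources(1, &Resource);
    cudaArray* Array = nullptr;
    cudaGraphicsSubResourceGetMappedArray(&Array, Resource, 0, 0);

    // ... cudaMemcpy2DToArray / surface writes into Array here ...

    cudaGraphicsUnmapResources(1, &Resource);
    cudaGraphicsUnregisterResource(Resource); // at teardown
}
```

If registration fails, checking the texture's creation flags (usage, sharing) is a good first step, since CUDA can only register textures created with compatible bind/usage flags.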


If the goal is to move heavy computations to the GPU and then get back the result, I’d suggest looking into compute shaders (HLSL). In Unreal they require a little bit of setup to start using (in Unity they’re available out of the box), but they can do this kind of work many times faster than the CPU. There are already methods to pass data to the compute shader, to dispatch it (execute a specific function in the shader), and to read back the result.

If you want to learn a bit more (after googling the basic setup), note that Lumen uses compute shaders (maybe Nanite as well), as stated in the Lumen docs. The engine source code is open to everyone on GitHub.

Oh, I just realized the post was 5 years ago. Any success though? 🙂