I have data files in a project that I’m working on where arrays of data are typically on the order of tens of millions of floats long. I have written a version of this program in C++ with OpenGL from the ground up before, where it is actually pretty simple to pass large buffers of data to shaders and operate on the data with GLSL code.
However, it seems that since Unreal is DirectX based there is no GLSL, and it is recommended to use materials for optimal performance in UE5. So I need a way to pass large quantities of data (compute-shader style, like I would in OpenGL) but with materials. What is the most efficient way of doing compute shaders in Unreal? Any suggestions?
TLDR: I need a way to pass a buffer millions of floats long to a material shader.
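For scale, here is a minimal standalone sketch (plain C++, not Unreal API; the function name and layout are my own assumptions) of how a flat buffer of N floats maps onto a 2D RGBA float texture when both sides must stay under the GPU’s maximum texture dimension:

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>

// Hypothetical helper: choose a width x height texture shape that holds
// `count` floats packed 4 per texel (RGBA32F), keeping both dimensions
// under the GPU's per-side texture limit.
std::pair<std::size_t, std::size_t> textureShapeFor(std::size_t count,
                                                    std::size_t maxDim = 16384)
{
    const std::size_t texels = (count + 3) / 4;        // 4 floats per RGBA texel
    const std::size_t width  = std::min(texels, maxDim);
    const std::size_t height = (texels + width - 1) / width; // round up
    return {width, height};
}
```

Forty million floats, for example, fit comfortably in a 16384-wide float texture a few hundred rows tall, so the data volume itself is not the obstacle.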
You can write HLSL code in custom material nodes; that may help. UE has custom data, but not at the magnitudes you’re talking about - you could well be able to do it with some custom code, though:
Hey RecourseDesign,
Thanks for the pointer, my problem lies more with actually passing data to the material in the first place. I can convert my glsl code to nodes/hlsl pretty easily, but I don’t know how to get the data into a material.
Is there no way to load the data into like an image buffer and pass that to the material?
I’ve seen a way to write compute shaders in Unreal, and I’ve got that working, but I can still only use those in normal Blueprints - the data is still isolated from materials.
Right, yeah you can create a “RenderTarget” and either directly access the memory or wrap it in an FCanvas object, then use that RenderTarget as a source texture in your material…
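Before the RenderTarget can carry your data, the raw floats have to be laid out as texel memory. A minimal standalone sketch (plain C++, not Unreal API; the function name is hypothetical) of packing a flat float array into a width x height RGBA32F pixel block, zero-padding the tail - this is the layout you would copy into a float render target’s pixel memory:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Pack `data` into a width*height RGBA float pixel buffer (4 floats per
// texel, row-major), zero-padding any unused tail. Consecutive source
// floats land in consecutive colour channels.
std::vector<float> packToRGBA(const std::vector<float>& data,
                              std::size_t width, std::size_t height)
{
    std::vector<float> pixels(width * height * 4, 0.0f); // RGBA per texel
    const std::size_t n = std::min(data.size(), pixels.size());
    for (std::size_t i = 0; i < n; ++i)
        pixels[i] = data[i]; // floats map 1:1 onto channels
    return pixels;
}
```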
And I can see how I could attach the texture to the RenderTarget through an FCanvas by looking at the FCanvas documentation. The disconnect I’m still having is: how do I make this RenderTarget useful in the material now?
Again thank you for bearing with me, I’m getting used to the Unreal shader pipeline still.
Sounds like you’re almost there.
Now you just need to create a material that has a “Texture Parameter” like this (taken out of some of my code for something else):
and in your c++ code, create an instance of the material, and pass in the render target texture like:
UMaterial* material = LoadObject<UMaterial>(nullptr, TEXT("Material'/Game/M_MyMaterial.M_MyMaterial'"));
if (material) {
    matInstance = UMaterialInstanceDynamic::Create(material, nullptr);
    matInstance->SetTextureParameterValue(TEXT("Texture"), renderTarget); // sets the "Texture Parameter" to the render target
    matInstance->SetScalarParameterValue(TEXT("Resolution"), res);        // used to find texel size
}
There’s a “Resolution” parameter too, to calculate the texel size.
You get the best results if you set the interpolation type to “Nearest Neighbor” too.
Thank you so much for your help. This works; however, the max “rhi dimension” is apparently 16384, so I can’t actually create a texture large enough to hold the data I need.
This seems a bit strange considering that a mere 1024x1024 image already has 1,048,576 pixels of four floats each, so I should definitely be able to create these textures. Any ideas why I can’t? The error I’m getting is: