I am currently hard-stuck trying to read entire textures in a fast way in UE5, and since reading pixel values one by one from the CPU is extremely slow, I thought compute shaders could be the way to go.
My idea is to feed the texture to a compute shader and write its values into an array (e.g. an array of float4) whose length matches the number of pixels in the texture.
I want the shader to work with textures of varying sizes, but that seems to require changing the number of threads dynamically depending on the texture size, and I am not sure that is possible.
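Roughly, this is the kind of shader I have in mind. It is only a sketch: the names InputTexture, OutputPixels and TextureSize are placeholders for whatever parameters would actually get bound from the C++ side, and I am assuming an 8x8 thread group with a bounds check so the dispatch count can just be rounded up for any texture size.

```hlsl
// Sketch only: copy every pixel of a texture into a flat float4 array.
// InputTexture, OutputPixels and TextureSize are placeholder names.
Texture2D<float4> InputTexture;          // texture to read from
RWStructuredBuffer<float4> OutputPixels; // one float4 per pixel
uint2 TextureSize;                       // width/height passed in from the CPU side

[numthreads(8, 8, 1)]
void MainCS(uint3 Id : SV_DispatchThreadID)
{
    // Dispatch ceil(Width / 8) x ceil(Height / 8) groups; the extra threads
    // on the right/bottom edges bail out here, so any texture size works.
    if (Id.x >= TextureSize.x || Id.y >= TextureSize.y)
    {
        return;
    }

    // Flatten the 2D pixel coordinate into a 1D index into the array.
    OutputPixels[Id.y * TextureSize.x + Id.x] = InputTexture[Id.xy];
}
```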
Then: is there a way to get that array of pixel values back out? How do I set this up?
Writing shaders in HLSL is a complex and challenging process, and compute shaders are an even more advanced topic; you won't be able to create one without years of HLSL experience. For example, I have been creating materials in UE for more than 15 years and writing HLSL for more than 5, and I still have not written a single compute shader. To run a compute shader you will most likely also need a C++ background.
As for your task: shaders (including compute shaders) are made to WRITE, not READ. Everything you produce with a compute shader will stay on the GPU side; the CPU will not be able to receive that data.