I want to precompute a LUT for a complex shader to cut down on computation time. The LUT in question is 3D, so ultimately I want it to be read from a volume texture. I have done something similar before, encoding and reading back the values based on Ryan Brucks’ PseudoVolumeTexture lookup (essentially flattening a 3D lookup into 2 dimensions).
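For context, the flattening I mean is just the usual pseudo-volume addressing: Z-slices tiled into a grid inside one 2D texture. A minimal sketch of that coordinate math (names and the nearest-slice behaviour are my own, not Ryan Brucks’ exact code):

```cpp
#include <algorithm>
#include <cmath>
#include <utility>

// Sketch: map a 3D UVW lookup onto the 2D UV of a pseudo-volume texture,
// where 'numSlices' Z-slices are tiled in a tilesPerRow x tilesPerRow grid.
std::pair<float, float> PseudoVolumeUV(float u, float v, float w,
                                       int numSlices, int tilesPerRow)
{
    // Pick the slice the W coordinate falls into (nearest slice, no Z filtering).
    int slice = std::min(static_cast<int>(w * numSlices), numSlices - 1);
    int tileX = slice % tilesPerRow;
    int tileY = slice / tilesPerRow;
    // Shrink the in-slice UV into that tile's cell of the atlas.
    float atlasU = (tileX + u) / tilesPerRow;
    float atlasV = (tileY + v) / tilesPerRow;
    return { atlasU, atlasV };
}
```

This works, but it costs extra lookup math (and a second sample plus a lerp if you want filtering across Z), which is why a real volume texture would be nicer.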
However, I noticed just the other day that it is now possible to create a VolumeRenderTarget (TextureRenderTargetVolume) in the editor, but I cannot figure out anything useful you can do with it. I expected there would be a function similar to ‘Draw Material To Render Target’, which is available for 2D render targets. This is what I need: essentially using the UVW value (from a texture coordinates node) to loop through the volume texture and encode values.
But looking at the API (header here), there are functions for reading the pixels from a volume render target, yet there don’t seem to be any methods for writing to it.
In fact, looking at the usage in the engine, it only seems to be used once, in NiagaraDataInterfaceRenderTargetVolume.cpp, so I’m not sure if I’m just totally barking up the wrong tree here.
Can someone enlighten me? Is this a half-finished feature that will someday be usable in the way I intend, or something totally unrelated?