nice work @
Here are my pre-baked shadows. I can update the shadow texture once at scene load or whenever the light changes direction. One question: how are you able to go inside the volume? I see a plane in the blueprint. Is that what is actually rendering? Do you just move and orient it to the camera position? I’m rendering a cube (even though I’m doing the box/ray intersection in HLSL), so it obviously disappears if I enter the cube mesh.
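For reference, the box/ray test I’m doing is the usual slab intersection, roughly this shape (just a sketch with placeholder names, not the code I’ll post later):

```hlsl
// Slab-method ray vs. axis-aligned box. boxMin/boxMax are the volume bounds
// in the same space as rayOrigin/rayDir; returns entry/exit distances.
float2 IntersectAABB(float3 rayOrigin, float3 rayDir, float3 boxMin, float3 boxMax)
{
    float3 invDir = 1.0 / rayDir;
    float3 t0 = (boxMin - rayOrigin) * invDir;
    float3 t1 = (boxMax - rayOrigin) * invDir;
    float3 tMin3 = min(t0, t1);
    float3 tMax3 = max(t0, t1);
    float tNear = max(max(tMin3.x, tMin3.y), tMin3.z);
    float tFar  = min(min(tMax3.x, tMax3.y), tMax3.z);
    // When the camera is inside the box, tNear goes negative; clamping it to
    // zero starts the march at the camera instead of behind it.
    tNear = max(tNear, 0.0);
    return float2(tNear, tFar); // it's a miss if tNear > tFar
}
```

The clamp covers the math from inside the box, but none of it runs if the cube’s faces aren’t being rasterized in the first place, hence the question.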
How expensive is switching texture objects as a parameter through blueprint? I assume it’s just the cost of a texture bind on the GPU. I’m thinking of caching out a fluid sim (Maya or Houdini) and saving it as three or four 4K textures to get a few frames of animation.
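If I go that route, I’m picturing each 4K texture as a tiled pseudo-volume (Z slices packed into a 2D grid of tiles), sampled roughly like this — just a sketch, and the layout/names are assumptions about how I’d export the cache, nothing settled:

```hlsl
// UV of a point inside one Z-slice tile of the atlas.
float2 TileUV(float2 uv, float sliceIndex, float tilesPerSide)
{
    float2 tile = float2(fmod(sliceIndex, tilesPerSide), floor(sliceIndex / tilesPerSide));
    return (tile + uv) / tilesPerSide;
}

// One density fetch from the atlas at normalized volume position uvw.
// e.g. tilesPerSide = 16 gives 256 slices of 256x256 in a 4096x4096 texture.
float SamplePseudoVolume(Texture2D atlas, SamplerState samp, float3 uvw, float tilesPerSide)
{
    float sliceCount = tilesPerSide * tilesPerSide;
    float slice = uvw.z * (sliceCount - 1.0);
    float lo = floor(slice);
    float hi = min(lo + 1.0, sliceCount - 1.0);
    float a = atlas.SampleLevel(samp, TileUV(uvw.xy, lo, tilesPerSide), 0).r;
    float b = atlas.SampleLevel(samp, TileUV(uvw.xy, hi, tilesPerSide), 0).r;
    // Hardware filtering only works within one tile, so blend between
    // adjacent Z slices manually.
    return lerp(a, b, frac(slice));
}
```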
I’ll post my custom code soon. It needs a little cleanup.
The other question I had is related to the metaball stream, where in the raytracing (an SDF in that case) you were adding up density and applying the exp density function at the end. In most volume rendering code, exp(-densityCoefficient * densitySample * stepSize) is done inside the raymarch loop. Can the exp be taken out and done once at the end, after the raymarch?
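To spell out what I mean, here are the two forms side by side (rough sketch; SampleDensity and the other names are placeholders). Since exp(-k*(d0 + d1 + ...)*dt) equals exp(-k*d0*dt) * exp(-k*d1*dt) * ..., I’d expect them to match as long as nothing inside the loop needs the running transmittance (no per-step lighting/emission being weighted by it) — which is really what I’m asking:

```hlsl
// Per-step form: transmittance is updated every step, so it can weight
// in-scattering / emission accumulated during the march.
float transmittance = 1.0;
float3 p = rayStart;
for (int i = 0; i < numSteps; ++i)
{
    float d = SampleDensity(p); // placeholder density lookup
    transmittance *= exp(-densityCoefficient * d * stepSize);
    p += rayDir * stepSize;
}

// Deferred form: only optical depth is accumulated and exp runs once at the
// end. Identical to the loop above when it's pure absorption along the ray.
float opticalDepth = 0.0;
p = rayStart;
for (int i = 0; i < numSteps; ++i)
{
    opticalDepth += SampleDensity(p) * stepSize;
    p += rayDir * stepSize;
}
float deferredTransmittance = exp(-densityCoefficient * opticalDepth);
```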

