Your thoughts on and comments to Volume Rendering in Unreal Engine 4.

The translucency shadow weirdness is actually a bug and has nothing to do with the procedural mesh component. See here…

https://answers.unrealengine.com/questions/468787/volumetric-translucency-shadow-bug-on-static-meshp.html
https://issues.unrealengine.com/issue/UE-34623

I’ve wasted hours trying to figure that out and never considered trying it on a static mesh! Oh well, lesson learned.

@Tim Hobson, I added a note to the AnswerHub post mentioning that the bug has nothing to do with the noise node; it can happen with a texture too. Could the issue description be changed to note this, please? I don’t want it to be misunderstood.

Cheers
Dan

I am seeing some artifacts that I am unable to understand.

First, have a look at the effect I am getting:
f8288e803d.jpg

These are the material nodes (not all inputs are used in the shader; StepSize and End are unused):

T0 is the entry point of the volume (the black triangle in the picture below), and T1 is the direction vector I use to iterate through the volume (the small gray triangle). BoundingBoxMax happens to be the origin of our 3D texture’s UVW (the large gray triangle).

5a5d95d552.png

And this is the code that I have inside the node:


float4 OutColor = float4(0, 0, 0, 1);
float Phase = 0.0;
float3 particlePos = T0; //This is the particle we are tracking. XY = UV, but we have to add n*WidthOfTexture to U and m*HeightOfTexture to V so we use Z to calculate that.
for (int i = 0; i < NumIterations; i++)
{
	Phase = particlePos.z;
	uint numSegments = NumRowsCols*NumRowsCols;

	float fraction = frac(Phase);

	float2 offset = float2(floor(numSegments*fraction)/NumRowsCols, floor(NumRowsCols*fraction)/NumRowsCols);

	//Shrink the space 0-1 that covers the entire texture to 1/NumRowsCols of that to only cover one segment.
	float2 MUV = float2(particlePos.x/NumRowsCols, particlePos.y/NumRowsCols);

	//Add the offset. The offset moves the "current slice subsquare of interest" around.
	offset.x = MUV.x+offset.x;
	offset.y = MUV.y+offset.y;

	float4 voxel = Texture2DSample(Tex, TexSampler, offset);

	OutColor = Alpha*voxel + (1-Alpha)*OutColor;
	particlePos += DirVec;
}

return OutColor;

The reason I ask is that the code inside the loop is basically the same as my custom flipbook node, which works. The only difference is that I am using the particle vector’s X and Y coordinates as my U and V coordinates. The problem is only in the Z axis, and it causes 4 of the 6 surfaces to have this weird effect.

Somewhere my application of the Z axis must be wrong, because the top and bottom surfaces are identical while the surfaces between them have the effect you see.

Thank you for your help.

That is the kind of artifact I see when I change the frame count of my volume texture without updating the shader. My best guess is that this is a rounding error in your 2D space conversion, or a ceil that needs to be a floor or something.
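
To illustrate, a standard 1D-to-2D flipbook index conversion looks something like this (a sketch of the suspected fix, using the variable names from your code; untested):

float frame = floor(numSegments * fraction);           // flat frame index, 0 .. numSegments-1
float2 offset;
offset.x = fmod(frame, NumRowsCols)   / NumRowsCols;   // column, wrapped within the row
offset.y = floor(frame / NumRowsCols) / NumRowsCols;   // row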

I handle the volume lookup with a separate nested function and then call that. Note that to sample the volume texture seamlessly, it actually blends two samples; if you skip that step you will see a very nasty hard edge between Z frames. That part will not be necessary once we have full hardware volume texture support, since the hardware uses the usual mip mapping channel to handle Z frame blending. But that also means you lose mip maps.
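
(For comparison, with real hardware volume textures the whole lookup would collapse to a single hardware-filtered sample; a sketch assuming a Texture3D input named VolTex:)

float4 voxel = VolTex.SampleLevel(VolTexSampler, inPos, 0); // trilinear filtering, Z blend included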

Here is my single volume lookup code. Maybe you can plug it in and use it.


//** Tex       **// Input Texture Object storing Volume Data
//** inPos     **// Input float3 for Position, 0-1
//** XYSize    **// Input float for num frames in x,y directions
//** NumFrames **// Input float for num total frames
//** channel   **// Input float4 mask; each sample is dotted with this to pick a channel
//** offset    **// Input float subtracted from each sample (optional variation)

float zframe = ceil( inPos.z * NumFrames );
float zphase = frac( inPos.z * NumFrames );
float2 uv = 0;
float2 zcellxypos = 0;

// convert 1d to 2d index
{
	zcellxypos.x =  fmod( zframe,  XYSize );
	zcellxypos.y = floor( zframe / XYSize );
	zcellxypos /= XYSize;
}

uv = frac(inPos.xy) / XYSize;
uv += zcellxypos;
float sampleA = dot( Tex.SampleLevel(TexSampler, uv, 0), channel) - offset;

float2 uv2 = frac(inPos.xy) / XYSize;

// bilinear filtering: get 1 frame above
{
	zcellxypos.x =  fmod( (zframe + 1),  XYSize );
	zcellxypos.y = floor( (zframe + 1) / XYSize );
	zcellxypos /= XYSize;
}

uv2 += zcellxypos;
float sampleB = dot( Tex.SampleLevel(TexSampler, uv2, 0), channel) - offset;

//return sampleA;
return saturate(  lerp( sampleA, sampleB, zphase ) );

The value ‘offset’ was added last minute so I could try to add variation to instances of the effect using the ray starting position, but you can remove it. It also does a dot product so that various channels can be used; I was channel packing different variations of the volume texture for more variation, which worked very nicely.
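
(For example, with the lookup above, a one-hot mask picks one packed variation per sample; these values are illustrative:)

float4 channel = float4(0, 1, 0, 0); // dot(sample, channel) reads the G channel
float offset = 0;                    // 0 disables the starting-position variation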

This is the line of code that calls the nested function. All you have to do is make sure the nested function is plugged into one of the pins of the ray marching function, so the compiler is forced to add the nested function first and it becomes number 0:


	//Sample the volume texture
	float texatray = CustomExpression0 (Parameters, Tex, TexSampler, saturate(CurPos), XYSize, NumFrames, channel, offset);

Saturate may or may not be necessary depending on how you handle your step calculations. I added it just to clean up the edges when using a low number of steps.

Here is what my volume baking BP and material look like animated. I apply some animation by using flowmap-like distortion on the volume texture itself. The volume texture gets rebaked every frame, and then the ray marcher can look up the more complex effects like flowmaps as just a single (well, technically double, for the aforementioned reasons) texture lookup:

nice work @

Here are my pre-baked shadows. I can update the shadow texture once at scene load, or whenever the light changes direction. One question: how are you able to go inside the volume? I see a plane in the blueprint; is that what is actually rendering? Do you just move and orient it to the camera position? I’m rendering a cube (even though I’m doing the box/ray intersection in HLSL), and it obviously disappears if I enter the cube mesh.

How expensive is switching texture objects as a parameter through Blueprint? I assume it’s the cost of a texture bind on the GPU. I’m thinking of caching out a fluid sim (Maya or Houdini) and saving it out as 3 or 4 4K textures to get a few frames of animation.

I’ll post my custom code soon. Needs a little clean up.

The other question I had relates to the metaball stream, where in the raytracing (in that case an SDF) you were adding up density and doing the exp density function at the end. In most volume rendering code, the exp(-densityCoefficient * densitySample * stepSize) is done inside the raymarch. Can the exp be taken out and done once at the end, after the raymarch?
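
One way to see it: exp turns a sum into a product, so with a constant step size and pure absorption (no per-step emission or color blending),

$$\exp\!\Big(-\sigma\,\Delta s \sum_i \rho_i\Big) \;=\; \prod_i \exp(-\sigma\,\rho_i\,\Delta s),$$

which means summing density in the loop and applying one exp at the end gives the same total transmittance. Once you composite emission or lighting per step, each step needs its own transmittance factor, and the exp has to stay inside the loop.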


The solution to going inside the volume is simple: use a box with inverted polygons. Then you use the result of the line-box intersection to establish the ray entry position.
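
For anyone following along, the line-box intersection is the standard slab test; a minimal sketch, assuming CamPos and a normalized RayDir expressed in the same space as the box (names are illustrative):

float3 invDir = 1.0f / RayDir;   // assumes no zero components
float3 tA = (BoxMin - CamPos) * invDir;
float3 tB = (BoxMax - CamPos) * invDir;
float3 tMin3 = min(tA, tB);
float3 tMax3 = max(tA, tB);
float tNear = max(max(tMin3.x, tMin3.y), tMin3.z);
float tFar  = min(min(tMax3.x, tMax3.y), tMax3.z);
// The ray hits the box when tNear <= tFar and tFar >= 0. With inverted polygons
// the camera can sit inside the box, making tNear negative, so clamp it to 0
// and start the march at the camera.
tNear = max(tNear, 0.0f);
float3 EntryPos = CamPos + tNear * RayDir;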

The prebaked shadow looks nice, and I am sure it is a huge speed boost! I will try that out soon, but I have not worked on this much lately. Instead I have been doing other stuff, like baking out animated caustic textures. These are pretty fun!

I couldn’t make the gif loop but the actual texture loops and tiles seamlessly. I will be making a more detailed post about this soon.

Inverted box. I like it! Now I have to see if I can use the scene depth texture to composite into the scene properly by exiting early. Or I can work out the exit point before the loop using the depth. Shame these things don’t composite together, but it’s not a major thing to worry about. I want to have a go at whole-sky cloud dome tracing using some ideas from the Horizon Zero Dawn paper.
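
Working out the exit point from depth before the loop could look something like this (a hedged sketch continuing from a slab test like the one above; SceneDepthAlongRay and the other names are illustrative, not a specific UE4 node):

// Stop the march at opaque geometry: convert scene depth into distance along
// the ray (e.g. SceneDepth / dot(RayDir, CameraForward)) and clamp the far bound.
tFar = min(tFar, SceneDepthAlongRay);
float marchDist = max(tFar - tNear, 0.0f);
int numSteps = min(MaxSteps, (int)ceil(marchDist / StepSize));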

I like the caustics! Looks like Gerstner waves. Is it multiple sin/cos waves stacked with some remapping of ranges?

Gerstner all the way :slight_smile: Yes, I am using Gerstner waves from the Tessendorf paper. We were lucky enough to have Dr. Tessendorf himself grace us with a lecture at Epic last year, and it was an eye-opening lecture to be sure. I have recreated most of his paper so far, and the possibilities are endless when you consider both the waves and his solved form of the Snell equation in vector form.

The ‘spectrum’ of the waves contains most of their character. You have to be somewhat careful selecting the frequencies when rendering in realtime. I am using 20 waves at the highest quality setting, and then layering on some texture-based normals and bubbles for fine detail. I tried using the existing wave spectrum formulas, but found that when using only 20 waves a custom formula is best. Don’t worry, I will share all of my research in due time. Not much of it is groundbreaking, but having everything controlled by one formula adds endless possibilities to what you can do.
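
For reference, a single Gerstner wave displacement in its commonly published form (a sketch with illustrative parameter names, not my production shader):

// D = 2D direction, A = amplitude, Q = steepness, k = 2*pi/wavelength,
// w = temporal frequency, t = time. Sum ~20 of these with amplitudes and
// frequencies drawn from your chosen spectrum.
float3 GerstnerOffset(float2 xz, float2 D, float A, float Q, float k, float w, float t)
{
	float phase = k * dot(D, xz) - w * t;
	return float3(Q * A * D.x * cos(phase),
	              A * sin(phase),
	              Q * A * D.y * cos(phase));
}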

here is a teaser image:

(please excuse the lack of premultiplied alpha on the fish, I will fix that at some point)
tiny.jpg

All of the fish and corals in this image were taken from the reef aquarium I had many years ago. I sold it when I moved to a new house but plan to pick up the hobby again once my daughter is old enough to help me out. All corals from local frags of course.

http://www.complex.com/pop-culture/2012/11/10-weird-hobbies-of-game-developers/ryan-brucks-fishtank-builder

That’s pretty cool. Nice weird hobby! My nearly two-year-old daughter takes up much of my time too! The only ‘hobby’ I have is UE4 (and studying for a uni maths course).

By the way, here is my version of gerstner…

This video is what caused me to take an interest in you. I wish I were as good as you with stuff like this. ^^

Those raymarched volumes are much nicer than my own attempt and probably faster too. Keen to see some examples!

I would try to marshal GPU particles and spawn them exactly in the areas where opacity is not 0 on a voxel grid (generated with Houdini). [That would probably take some C++.] The particles would need nothing but an endless lifetime and a color and opacity based on the voxel grid. [Instead of using the grid for velocity, the vectors could be read in as colors and the magnitude of the vectors as opacity, if Unreal does not import that by default.] Unless you want to create special effects with it, you wouldn’t need any velocity on the particles, and if you did need a voxel grid for velocity, you could probably import it separately.
The particles themselves could be opaque square or round billboards (for efficiency purposes) and would probably still look decent.

@xDiavalx this approach is OK for small volumes, but it does not scale well. Viguera et al. [1] have shown the feasibility of GPU splatting (which your technique boils down to) for sparse volumes (in their vascular volumes, only around 2% of the voxels contribute to the rendering), but for volumes in the medical context, which I myself as well as the OP am interested in, sizes of 512^3 are not uncommon, with at least 50% of voxels contributing. That would require UE to handle about 67 million (static) particles (half of 512^3 ≈ 134 million voxels)… I wonder if anyone has tried how far you can go in terms of resolution/number of particles on a 2016 graphics card and UE4; maybe @dokipen has some insights here, since he seems to have tried it?

[1] http://www.vis.uni-stuttgart.de/plain/vdl/vdl_upload/148_7_vega-et-al-VIS2006.pdf

If you go a little further and create some single-sided, simple surfaces that can represent the connection between voxels, you could make a fluid surface. You’d make the center point of the surface represent the voxel and the sides connect to where the adjacent voxels sit. Then you’d just make all of the possible permutations and swap them out as needed.

How hard would it be to only render voxels that aren’t completely enclosed?

I completely agree, it’s not very efficient.
My idea was to optimize by not spawning particles in transparent areas. (For relatively static objects like clouds, that could be pre-optimized in Houdini.) I suppose you could also make an algorithm that spawns particles only on the surface of the volume (within some tolerance) and in proximity to a cutting plane. So when you show the inside of the volume, you pre-cut it with the cutting plane, creating a new volume with a new surface area. I guess it boils down to the stuff you are already trying to do XP.
Thinking in that direction, you could make a flipbook of VDB volumes, where you precompute surface volumes that the user might see and switch between those depending on the user’s needs…
But at that point you might as well create flipbooks of optimized polygon meshes and switch between those.
I’m going further down the rabbit hole: why not make meshes and slice them with the new cutting operation from 4.13? Just figure out a sorting algorithm so that the caps don’t intersect. For medical representation that seems far more efficient.

  • Just spit-balling…

The problem with this approach is that it prohibits zoom. If I want to focus on a specific area for inspection, I can’t just zoom in, as my entire field of view would then be covered.

> I’m going further down the rabbit hole: why not make meshes and slice them with the new cutting operation from 4.13? Just figure out a sorting algorithm so that the caps don’t intersect. For medical representation that seems far more efficient.

This is much better done in the shader, where you simply hide the pixels that you don’t need.
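
A minimal sketch of that idea (clip() is standard HLSL; WorldPos, PlanePoint, and PlaneNormal are assumed material inputs):

float side = dot(WorldPos - PlanePoint, PlaneNormal);
clip(side); // discards the pixel when side < 0, i.e. behind the cutting plane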

I implemented your shader and I am still getting the same error. I was talking with someone who said it might be the padding of textures to the nearest power of two, but even when I stretched the texture to 1024x1024 it showed the same symptoms.

As you can see, I am using your material function that gives me the entry and exit points of the volume. I fed those into the function you gave me to sample and lerp the values.
Here is the result:

Since my volume is 100x100x100, I divide the vectors by 100 to get them into the range 0-1. In the image I am sampling 144 times.
I also checked whether I actually had something other than 12x12 slices, which would make the NumCellsPerSide variable wrong, but that seems not to be the case.

Any ideas what the cause can be? I have rewritten the code many times and I am out of ideas.

Looks like it is getting the correct values for the most part, but there are some kind of slicing artifacts where it is either going outside the volume on the edges or maybe getting mixed up between the slices.

The 3d volume lookup I gave you does not care about powers of two at all.

Can you try a saturate(position) on the position you feed to the texture lookup? If you are going outside the volume, that should tell you. Or even do something like this:

if( pos.x < 0 || pos.x > 1 || pos.y < 0 || pos.y > 1 || pos.z < 0 || pos.z > 1)
{
return currentvalue;
}

This way it should not continue to add edge pixels, and you may see a hard black line indicating places where the ray tried to exit, which will tell you whether there was more of a problem with how you normalize your coordinates into the 0-1 space.

Also try disabling the bilinear Z filtering while testing; that means you can just return sampleA instead of blending sampleA and sampleB.

RE:
>Can you try a saturate(position) on the position you feed to the texture lookup?
What I did was OutColor = saturate(float4(inPos, 1));
The result: 06fea1af5f.jpg
If I did OutColor = saturate(float4(uv, 0, 1)); the result became: 4df652c31d.jpg

With the IF I get the same result as before, because I traverse exactly the distance between the entry and the exit.

> Or even do something like this:
>
> if( pos.x < 0 || pos.x > 1 || pos.y < 0 || pos.y > 1 || pos.z < 0 || pos.z > 1)
> {
> return currentvalue;
> }

>This way it should not continue to add edge pixels, and you may see a hard black line indicating places where the ray tried to exit, which will tell you whether there was more of a problem with how you normalize your coordinates into the 0-1 space.

If I return float4(0, 0, 0, 1) when the if test hits, I get this: 0ee980fbc3.jpg

Note that if I iterate one step less, I DO get a smooth surface, so all those dark spots are just the vector barely crossing the border due to floating point error. Further note that the colors are smooth, so the particle movement is fine. Something happens between calculating the new particle location and looking up the texture with your custom UVs.