Your thoughts on and comments to Volume Rendering in Unreal Engine 4.

Don’t return 0 if it hits the edge, just return the color the ray marcher has created so far.

And I didn’t mean to saturate the position and return it, I meant to saturate the position just before using it to look up the volume texture.

It’s possible to fix this in a more graceful way than saturating, by precalculating the number of steps needed to reach the edge. The cheapest option I have found so far (though I only have this working for the 2.5D heightmap raytracer) is to run only the number of steps that fit without hitting the edge, precalculate the remaining step size, and run that final step outside of the loop at the end. That keeps the execution overhead of the loop to a minimum.
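
Roughly like this, as a sketch in custom-node HLSL (CurPos, StepVec and NumSteps are placeholder names rather than the actual heightmap-tracer code, and the ray direction is assumed to have no zero components):

	// Per axis, how many steps until the ray crosses the near or far face of the unit box.
	// For each axis only one of the two terms is positive, so max() picks the exit face.
	float3 invStep = 1.0 / StepVec;
	float3 stepsToFace = max((1 - CurPos) * invStep, (0 - CurPos) * invStep);
	float stepsToEdge = min(stepsToFace.x, min(stepsToFace.y, stepsToFace.z));

	int fullSteps = min(NumSteps, (int)floor(stepsToEdge));
	for (int i = 0; i < fullSteps; i++)
	{
		CurPos += StepVec;
		// ... sample the volume at CurPos and blend into OutColor ...
	}

	// One final, shorter step that lands exactly on the edge, taken outside the loop
	// (clamped so it never exceeds a regular step if the edge was out of range).
	float remainder = saturate(stepsToEdge - fullSteps);
	CurPos += StepVec * remainder;
	// ... take the last sample and blend ...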

RE:
>Don’t return 0 if it hits the edge, just return the color the ray marcher has created so far.
This gives me the same image.

>And I didn’t mean to saturate the position and return it, I meant to saturate the position just before using it to look up the volume texture.
What position? I am tracking a particle sent out for each pixel as it moves through the volume and that was the position I saturated and blended.
If that is not what you meant, then what position are you talking about? The UV “position”? If so I did that.

@, would you be kind enough to post a full-res version of the texture that you’ve linked in post 8?

Just stretch it? It works more than fine for testing. You need new textures when you make your own anyway.

OK, I am confused, because the images you posted look like it’s returning UV coordinates and not the actual volume accumulation. Yes, I just meant to saturate the UV value being fed to the texture, but I wasn’t sure if that’s what you did, since the modified code lines you posted were returning position rather than accumulation.

I could share the texture, but as the previous poster mentioned it won’t add much value, and I also gave the material code to make your own volume textures in post 44:
https://forums.unrealengine.com/showthread.php?119267-Your-thoughts-on-and-comments-to-Volume-Rendering-in-Unreal-Engine-4&p=578397&viewfull=1#post578397

This is what I wrote:

If I did OutColor = saturate(float4(uv, 0, 1)); the result became: 4df652c31d.jpg

I showed you all the different examples.

I loop over the entire ray and I alpha blend the float4(uv, 0, 1) with the previously accumulated colour.

Hmm, I am still confused about why you are returning UV there and not OutColor as I suggested.


	float4 voxel = saturate(lerp(sampleA, sampleB, zphase));
	float cAlpha = voxel.x;
	OutColor = cAlpha*voxel + (1-cAlpha)*OutColor;

If that is how you blend your color, then you need to return OutColor on the break. Returning UVs just tells you what position it broke at; you want to exit early without letting that sample affect the result.

This is the entire last part of the code:


	float4 voxel = saturate(float4(uv, 0, 1));
	float cAlpha = voxel.x;
	OutColor = cAlpha*voxel + (1-cAlpha)*OutColor;
	inPos += StepVec;
	if( inPos.x < 0 || inPos.x > 1 || inPos.y < 0 || inPos.y > 1 || inPos.z < 0 || inPos.z > 1)
	{
		return OutColor;
	}

I return the OutColor if I go outside the volume. I returned float4(0, 0, 0, 1) in the one example to show that I do indeed go outside the volume from time to time, but when I iterated one time less than what is required to traverse the entire volume I got NO black spots. That indicates that the float inaccuracies are causing me to just BARELY go outside the boundary. At most I would go 1/144 outside the volume.

EDIT:
If I returned black when I went outside, I got: 0ee980fbc3.jpg. With one less iteration it worked perfectly.

You just need to move the if to be BEFORE that final sample is taken. The way you have it written above means that even if the ray goes outside the volume, it still samples that outside position. That also means you need to increment the ray before taking the new sample, instead of after like you are doing now. Taking a sample before incrementing doesn’t make much sense anyway, since then you are always sampling the very edge of the box to start with, which you don’t want.
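
In terms of the snippet above, the reordered loop body would look roughly like this (just a sketch reusing the same variable names):

	// Advance the ray first, then test the bounds, then sample.
	inPos += StepVec;
	if (inPos.x < 0 || inPos.x > 1 || inPos.y < 0 || inPos.y > 1 || inPos.z < 0 || inPos.z > 1)
	{
		return OutColor;	// bail out before the outside position is ever sampled
	}
	float4 voxel = saturate(float4(uv, 0, 1));	// or the real volume lookup at inPos
	float cAlpha = voxel.x;
	OutColor = cAlpha*voxel + (1-cAlpha)*OutColor;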

I asked about the texture just to have some sort of comparison with what was already done. Additionally, I was puzzled for quite some time about the origin of the stair-like artifacts pictured below, only to find out that they were specific to the texture.

http://image.prntscr.com/image/a63ac8c00cad43b281da84eb7a81f7f6.png

What texture did you swap it out with to get it to work then? What was different?

A bunch of CT scans, for example.

http://image.prntscr.com/image/bcff104c962643e3b0ead7ad8e902215.png

I wonder what additional optimizations I can apply to speed up the rendering. So far I have the pretty obvious stuff (a rough sketch follows the list), namely:

  • Skip light transmittance calculation completely, if sample density is close to zero.
  • Early out for light loop, when light transmittance is close to zero.
  • Early out for main raymarch loop, when transmittance is close to zero.
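
For reference, a rough sketch of how those three early-outs sit in the nested loops (custom-node HLSL; SampleDensity, DensityThreshold, Epsilon and the step variables are simplified stand-in names):

	float transmittance = 1;
	for (int i = 0; i < NumSteps; i++)
	{
		float density = SampleDensity(CurPos);		// the volume texture lookup
		if (density > DensityThreshold)			// 1) skip lighting for near-empty samples
		{
			float lightTransmittance = 1;
			float3 lightPos = CurPos;
			for (int j = 0; j < NumLightSteps; j++)
			{
				lightPos += LightStepVec;
				lightTransmittance *= exp(-SampleDensity(lightPos) * LightStepSize);
				if (lightTransmittance < Epsilon) break;	// 2) light loop early out
			}
			// ... accumulate in-scattered light, weighted by transmittance ...
		}
		transmittance *= exp(-density * StepSize);
		if (transmittance < Epsilon) break;			// 3) main loop early out
		CurPos += StepVec;
	}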

Obviously, the most problematic part is the lighting calculation (light samples × view samples). Maybe there is a way to improve that, not including half-angle slicing?
So far I was thinking about rendering light transmittance into an RT and sampling that render target in the view raymarch loop.
Alternatively, maybe it is worth trying to prebake light transmittance for 6 directions and use that data to interpolate according to the actual light direction? Not sure if it is feasible to pack that into 3 channels, though.

What about some sort of adaptive sampling rate? For example, sample with a step size of 0.1 of the volume side, then do substeps for every step that shows significant density changes?

Lastly, I’d like to ask one more question about rendering the volume from inside. With an inverted box, the ray exit point will always be the world position of the pixel, and the raymarch starting point will be the box/view ray intersection if the camera is outside the volume, or the camera position if it is inside. I am comparing the distances between pixel position/camera position and pixel position/box entry point using an IF, and choosing either the camera position or the box entry point as the raymarch starting position, but I feel like I have over-complicated something here and there is a cleaner way.
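
For reference, the selection I am doing now boils down to something like this (a simplified sketch; PixelWorldPos, CameraWorldPos and BoxEntryPos are stand-in names):

	// If the camera is closer to the pixel than the box entry point is, the camera
	// must already be inside the box, so the march starts from the camera instead.
	float distCam   = length(PixelWorldPos - CameraWorldPos);
	float distEntry = length(PixelWorldPos - BoxEntryPos);
	float3 StartPos = (distCam < distEntry) ? CameraWorldPos : BoxEntryPos;
	float3 EndPos   = PixelWorldPos;	// exit is always the backface / pixel position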

How would one go about ray-marching a world-aligned volume texture? I.e., I’d want to set up a cube with inverted polys like Ryan described (so I could fly around inside the clouds) and project a volume texture, but I’d like to have it aligned to world-space coordinates so that I can easily scale subtractive properties for breakaway effects, and even set up multiple cloud layers with spaces in between.

That should be fairly straightforward. Rather than starting at the box intersection position in local space, simply treat the box intersection as a world-space position and then increment that ray separately using the world-space camera vector. If you still plan to use the box for entry and exit, then you will probably want to track the local and world positions side by side. You should be able to apply an arbitrary scale to the world position whenever you do a texture lookup.
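
A rough sketch of what one marching step could look like under that scheme (LocalPos, WorldPos, WorldTileScale and the step vectors are assumed names; frac() here just wraps the world coordinate for a tiling lookup):

	// March the local and world positions with equivalent step vectors.
	LocalPos += LocalStepVec;		// used only for the entry/exit bounds test
	WorldPos += WorldStepVec;		// step along the world-space camera vector

	if (LocalPos.x < 0 || LocalPos.x > 1 ||
		LocalPos.y < 0 || LocalPos.y > 1 ||
		LocalPos.z < 0 || LocalPos.z > 1)
	{
		return OutColor;		// left the bounding box
	}

	// Arbitrary world-space scaling applied only at lookup time, so the volume
	// stays aligned to world coordinates regardless of the box transform.
	float3 uvw = frac(WorldPos * WorldTileScale);
	// ... sample the volume texture at uvw and blend into OutColor ...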

I think a more performant method of volumetric rendering wouldn’t be raymarching, which requires a lot of samples per pixel across many slices of the model, but a way to use the distance between the front and back of a two-sided volumetric object to calculate the depth and drive different factors in the shader from those two values. I’m not sure how to technically handle a method like this in UE4’s renderer, since you need the front and back face of every single object rendered behind a pixel, but there should be a way to calculate the depth of the object between the front and back polygons to gauge a volume. This way, instead of using very complex 3D textures, which few people know how to use, anyone could just model clouds in ZBrush, throw them in the engine, put a material on them, and get a volumetric result right away.

While raymarching might be great for super-advanced scientific visualizations, a polygonal volume rendering technique would be much easier to work with and more performant for game development, if such a thing could be done.

If not, I would definitely like to get something like NVIDIA’s Flex materials to work inside UE4’s material editor, either through the use of GPU particles or anything else. It looks stunning!

I believe you could use the custom depth of the backfacing geometry to solve that. You render the front faces using the translucent material, then you duplicate the mesh and render an inverted-face masked material that writes custom depth. In the volumetric material you would derive the start depth from PixelDepth and the exit depth from custom depth. Then, to handle going inside, you could have yet another effect that does the inverse, where the starting depth is 0, i.e. the camera position. It may be tricky to make sure that the outer mesh always sorts over the inner one, and it might require some messing with separate translucency or something (or maybe that is tough to solve).
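
The core of that in custom-node terms would be something like the sketch below (CustomDepth stands for a SceneTexture:CustomDepth sample, PixelDepth for the PixelDepth node, both assumed to be in the same world units; Density is an assumed material parameter):

	// Thickness of the volume along the view ray, front face to back face.
	float thickness = max(CustomDepth - PixelDepth, 0);

	// Simple homogeneous Beer-Lambert absorption driven by that thickness;
	// 1 - transmittance would feed the Opacity of the translucent front-face material.
	float transmittance = exp(-Density * thickness);
	float opacity = 1 - transmittance;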

The obvious downside is that it won’t handle a ray that enters and exits a volume multiple times, so your ‘mesh’ would have to encapsulate the entirety of each effect; you couldn’t cut out the little puffs of smoke on the edge as separate meshes, for instance, since the custom depth of the backside would be opaque.

This should be quite a bit faster and is similar to how I did the metaballs for the Protostar demo. If you end up modulating the density, you will have to expand the outer geometry shell unless the modulation only removes volume.

That technique can’t handle non-uniform density. NVIDIA uses this technique for volumetric lighting. https://developer.nvidia.com/sites/default/files/akamai/gameworks/downloads/papers/NVVL/Fast_Flexible_Physically-Based_Volumetric_Light_Scattering.pdf

I don’t see why your media couldn’t be variable density as long as it never extends beyond the bounds of the geometry. I have done it, as has user dpenny.

It is just expected that your texture reaches black right at the geometry edge. Maybe you could also just fade the value to 0 near the edge, since by definition you know how far the ray is from the backface with this technique.

If done that way you can’t have more than one layer of volumetrics.