Your thoughts on and comments about Volume Rendering in Unreal Engine 4.

I added a clipping plane that interacts with the volume, but the edge is bright white. I might have to redo my blending algorithm a bit.

@NoobsDeSroobs The white highlight is just a result of your transfer function: the values inside the cloud map directly to white. You can try more complex transfer functions (which map a voxel's intensity value to a color and transparency that are then used for blending) to get rid of the bright white and bring out more detail in those areas.
If you want a more 'natural' look for the cutting interface, consider shading it using the normal of your cutting plane:
In your raycast loop, check whether the first sample is already opaque (above some transparency threshold). If so, use its color and the normal of your cutting plane (or the one from the bounding box) to apply simple shading (e.g. Phong) so it looks more like a cut surface.
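A minimal sketch of what that check could look like; every name here (the classified sample, plane normal, thresholds, etc.) is a placeholder rather than anything from the actual project:

```hlsl
// Sketch only: hypothetical names, not from the actual repo.
// 'classified' is the first sample along the ray after the transfer
// function has been applied (color + opacity).
float4 ShadeClippedEntry(float4 classified, float3 planeNormal,
                         float3 lightDir, float opaqueThreshold, float ambient)
{
    if (classified.a > opaqueThreshold)
    {
        // Ray starts on the clipped face: shade with the cutting plane's
        // normal instead of a volume gradient. Simple diffuse term here;
        // a full Phong version would add a specular highlight as well.
        float ndotl = saturate(dot(planeNormal, normalize(lightDir)));
        return float4(classified.rgb * (ambient + ndotl), 1.0);
    }
    // Otherwise signal the caller to run the normal raymarch loop.
    return float4(0, 0, 0, 0);
}
```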

Other than that, nice results! What framerates do you get in VR? And would you be willing to share your project somewhere (never mind if it's not in a nice state, I'm just looking for some pointers on how to achieve certain things)?

I am not applying a transfer function in the shader at all. I simply do the blending and encode the transfer function beforehand using an external tool like ParaView or VTK. By doing this I can save a lot of instructions. That being said, I still need to do a lot of testing before I can say whether it is a systemic problem or just a result of certain encodings.

I am also trying to avoid lighting it at all, because I am not going for a realistic look. If I can avoid it, I save a lot of instructions as well. I know I might have to shade it somehow to display the shapes properly, but that is something I will put off for now.

I get 90 FPS easily when not recording, and I drop down to 40-60 when recording. I believe that is more of an issue with my CPU being a bit slow, however, as I have a GTX 1080 which reportedly should be adequate. I don't mind sharing my project, but I will have to check certain things before I can give you a definite answer.

@TheHugeManatee There seem to be no issues with me sharing my code. Please feel free to fork the repo as you see fit. If you solve any of the problems I have yet to address, I would very much like to see your solution. Here is the repo.

Thanks for uploading a version that can just be downloaded and messed with, NoobsDeSroobs.
I've been lurking around this post for a while because this tech is just beautiful, but it's waaay over my skill level.
I'm a visual learner, so having the repo you put up there to mess with is helping me understand what's going on. I've tried following the posts and making a version myself, but the info is too scattered for me to really follow it…
One thing I've noticed about your version is that if you bring the camera inside the volume, you start to see artifacting where some of the texture from outside the view starts to come in and it looks super weird:

Yes, I have not had time to look into that and figure out exactly why it happens. The effect looks similar to blending from back to front instead of front to back. The direction of the scan is also reversed, so you get a blend where one pass scans forward, as expected, while the other scans backwards from the camera position. There are several other problems with it as well. For example, if I rotate the volume it becomes all weird, and I get some strange offset every 1/12th of the total Z distance, etc.
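For reference, this is roughly what a standard front-to-back compositing loop looks like; the names (SampleVolume, rayStart, etc.) are hypothetical, not taken from the repo. When the camera sits inside the volume, the ray usually has to start at the camera position rather than the box entry point, otherwise samples from behind the view get blended in:

```hlsl
// Illustration only (hypothetical names): front-to-back compositing.
// SampleVolume() stands in for however the pre-classified volume is sampled.
float4 RaymarchFrontToBack(float3 rayStart, float3 rayDir,
                           float stepSize, int numSteps)
{
    float4 dst = float4(0, 0, 0, 0);
    // When the camera is inside the volume, rayStart should be the camera
    // position itself, not the box entry point behind the view.
    float3 pos = rayStart;
    for (int i = 0; i < numSteps; i++)
    {
        float4 src = SampleVolume(pos);                 // RGBA sample
        dst.rgb += (1.0 - dst.a) * src.a * src.rgb;     // "under" operator
        dst.a   += (1.0 - dst.a) * src.a;
        if (dst.a > 0.99) break;                        // early termination
        pos += rayDir * stepSize;
    }
    return dst;
}
```

If one of the two passes effectively walks this loop in the opposite order, the same samples blend to a different result, which matches the reversed-scan look described above.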

Working on it, but first I need to get my ParaView Python code to work. ParaView has no documentation on how to access each data point, so I can't make slices out of it.
Happy that you enjoy it. Please feel free to mess around with it, and please bring ideas or concepts as well. Remember to rescale the volume to match your own data, by the way.

I managed to get some work done on that today and just pushed a fix. Please pull the latest changes and you won't have that problem anymore.

Wrote up a post on encoding and decoding volume textures. Will be getting to the ray marching part next.

http://shaderbits.com/blog/authoring-pseudo-volume-textures

Thanks for posting it, @. Eagerly waiting until you reach the shadowing part.

I've actually experimented with dynamic lighting and had some success by pre-baking transmittance from every point towards 6 directions and interpolating between these based on the light direction. The results were quite acceptable, actually, and considerably cheaper than raymarching it. However, it still took me 2 textures:

  1. (TransmittanceX+, TransmittanceY+, TransmittanceZ+, Opacity)
  2. (TransmittanceX-, TransmittanceY-, TransmittanceZ-)

Could anyone kindly suggest a way of packing it into 1 RGBA texture?
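For what it's worth, the lookup side of that two-texture setup could look roughly like this; the texture names and the function itself are hypothetical, just illustrating the per-axis select-and-blend described above:

```hlsl
// Sketch only (hypothetical names). TexPos stores (T+x, T+y, T+z, opacity),
// TexNeg stores (T-x, T-y, T-z) for the negative axis directions.
float EvaluateBakedTransmittance(Texture2D TexPos, Texture2D TexNeg,
                                 SamplerState Samp, float2 UV, float3 LightDir)
{
    float4 tPos = TexPos.Sample(Samp, UV);
    float3 tNeg = TexNeg.Sample(Samp, UV).rgb;

    float3 L = normalize(LightDir);

    // Choose the + or - baked direction per axis from the light's sign...
    float3 axial = float3(L.x >= 0 ? tPos.x : tNeg.x,
                          L.y >= 0 ? tPos.y : tNeg.y,
                          L.z >= 0 ? tPos.z : tNeg.z);

    // ...then blend the three axes by the light direction's absolute components.
    float3 w = abs(L);
    return dot(axial, w / max(w.x + w.y + w.z, 1e-5));
}
```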

My two cents:

I think the best approach for data visualisation is creating a texture flipbook and simply using GPU particles. This won't look particularly beautiful, but it easily allows varying the values and adding more information:

Hmm, that is tricky. If this is a float RGBA texture, you could pack two 8-bit values into each float channel. This doesn't work perfectly since not all bits are free to use, such as the sign bit, but it can work for a decent range. If this is for 8-bit textures, your options are more limited. You can pack two 4-bit values, but that means you only get 16 values instead of 256, which is pretty poor precision.
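A rough sketch of both packing ideas, with hypothetical helper names; note the two-8-bit variant needs a full 32-bit float format, since half-float channels only represent integers exactly up to 2048:

```hlsl
// Sketches only, not from the actual project.

// Two 8-bit values (as 0..1 floats) packed into one 32-bit float channel:
float Pack8x2(float a, float b)
{
    return floor(a * 255.0) * 256.0 + floor(b * 255.0);   // max 65535
}
float2 Unpack8x2(float packed)
{
    float hi = floor(packed / 256.0);
    float lo = packed - hi * 256.0;
    return float2(hi, lo) / 255.0;
}

// Two 4-bit values (16 levels each) packed into one 8-bit channel:
float Pack4x2(float a, float b)
{
    return (floor(a * 15.0) * 16.0 + floor(b * 15.0)) / 255.0;
}
float2 Unpack4x2(float packed)
{
    float v  = round(packed * 255.0);
    float hi = floor(v / 16.0);
    float lo = v - hi * 16.0;
    return float2(hi, lo) / 15.0;
}
```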

You could try storing the total width of the volume along each axis instead of the extents. That means you only need 3 values instead of 6. Then you could divide it up by splitting it around the local ray position. This would kind of assume a relatively symmetrical volume, and I'm not sure what the artifacts would look like. Maybe it will be OK?
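A rough sketch of that splitting idea, with hypothetical names:

```hlsl
// Rough sketch (hypothetical names): TotalPerAxis is the baked per-axis total,
// LocalPos the sample position in 0..1 local space. Splitting the total around
// the ray position approximates the separate + and - directional values,
// under the assumption of a roughly symmetrical volume.
float3 towardsPositive = TotalPerAxis * (1.0 - LocalPos);
float3 towardsNegative = TotalPerAxis * LocalPos;
```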

That is actually quite interesting. I will definitely try that. I feel there might be issues with a volume containing cavities or discontinuities, but for something relatively uniform, like a cloud, it could probably work. My sincere gratitude for the suggestion.
I've already tried 4-bit values. There was very noticeable stepping in the shadowing, sadly.

I just thought about another approach, where each frame in the texture would be further split into two parts, and I would choose which part to sample depending on whether the per-axis direction towards the light is positive or negative. I haven't tried it yet. Feels scary. Besides, I would lose quite a bit of resolution for the light transmittance. Might be worth trying that too.

Fantastic series, I've been really excited for this post! Just curious, is it possible to project a pseudo-volume texture using global coordinates rather than local UVs?

What do you mean? That you don't do the calculation to transform it into two values between 0 and 1? You have to give a value between 0 and 1 since the texture lookup method demands it. If you don't want to perform the 3D-to-2D calculation, you must use a 3D data set.

I feel like I am misunderstanding you, though. Could you elaborate a bit?

The method described at the end of the post does the lookup with a float3, meaning you can just use world position and then divide it for scale control. Using that method you could use the volume noise texture to do things like procedurally cracked stucco walls that work seamlessly around corners automatically. It would by definition be global and allow you to have seamless effects between objects.
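As an illustration of feeding world position into that float3 lookup, here is a minimal sketch of a pseudo-volume sample (a 2D atlas of Z-slices) driven by world coordinates; the function name, parameters, and nearest-slice simplification are my own, not the version from the blog post (which also blends between slices):

```hlsl
// Sketch only (hypothetical names): world-aligned pseudo-volume lookup.
float4 SamplePseudoVolumeWorld(Texture2D Tex, SamplerState Samp,
                               float3 WorldPos, float WorldScale,
                               float2 TileCount /* e.g. 16x16 slices */)
{
    // Divide world position by a scale and wrap it into 0..1 "UVW" space.
    float3 uvw = frac(WorldPos / WorldScale);

    // Pick the Z-slice and locate its tile in the 2D atlas.
    float numSlices = TileCount.x * TileCount.y;
    float slice = floor(uvw.z * numSlices);
    float2 tile = float2(fmod(slice, TileCount.x), floor(slice / TileCount.x));
    float2 uv = (tile + uvw.xy) / TileCount;
    return Tex.SampleLevel(Samp, uv, 0);
}
```

The point is just that the float3 can come straight from WorldPos / WorldScale, which makes the pattern world-aligned and shared seamlessly across objects.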

This looks fantastic. I can't wait for the update to the volumetric tutorial from .

Thanks

Looking forward to Ryan's next blog post as well. He posted a new video yesterday… maybe it's coming soon!

Hey guys, the ray marching article is up!

http://shaderbits.com/blog/creating-volumetric-ray-marcher

Beautiful!
Thanks a ton for putting in the effort to write all these articles. I think it's safe to say at this point that you have my favorite UE4 shader blog :slight_smile: I greatly appreciate it!

Will something like this make it into the engine sometime in the future? (I mean a volumetric cloud system and/or volumetric example content and whatnot.)

Also, since I'm writing this: your caustics video (ContinousSpectralPhotons) looks super duper fricking sweet! I'm dying to learn more about that :o Is that something you are planning to release as well?

Yes, we have plans to add volumetrics as a first-class feature at some point, but it may be a while. Even when that time does come, there will be a place for custom one-off effects, since a unified system will likely come with some limitations for performance reasons that can be worked around when doing something that is just a single self-contained volume.

Caustics stuff should be coming at some point, but there are a few other things first, so it will be a while. The next article is about curved surfaces and POM, and it's been mostly done since before I did the raymarching one.