Your thoughts on and comments about Volume Rendering in Unreal Engine 4.

To be honest, I am not sure this belongs here, but the other categories felt even less relevant since I am talking about rendering, just not the standard methods in UE4. Feel free to move it if I posted in the wrong area.

To start, let me be transparent. I am working on a master's thesis on VR and scientific visualization. I see potential in combining UE4 and scientific visualization for students, scientists, gamers and graphical artists alike.

Before I start discussing the problem, let me try to give you an idea of what I am trying to achieve:


[image: an example of the kind of volume-rendered result I am aiming for]

Volume rendering is expensive and is a completely different way of rendering from what UE4 already uses.
It is also one of the reasons UE4 is not used much in scientific visualization, and it is why I want to try to get volumetric rendering into UE4. It will be expensive, but for a few important assets the effect may be worth it, and I want to at least provide the option. If I am successful, it could open UE4 to a world of scientific use it was never considered for. Further, it would let developers put volumetric renders of a few key assets, like clouds or humans, into gameplay or cutscenes.

However, there are challenges to this. For volumetric rendering you need volumetric data. That means not just vertex and face data, but a complete XYZ data set with some kind of data in each voxel. Then you have to integrate over the discrete data volume along a ray from each pixel.

What options and tools do I have?
- Third party library
- Plugin
- Change engine code
- Write a custom shader with the custom shader node

I tried adding a third-party rendering library to UE4, but that failed because the library could not find the OpenGL context. If you have any idea how to solve this, that would be great. I do not know UE4 well enough yet.
I could write a custom shader and use it through the custom shader node, but I doubt it will work.
A plugin would be great, but since I would need to change the entire rendering pipeline, that seems hard to get working.
Finally, changing the engine code. I have started reading up on the code to understand the pipeline and where I might want to put my code, but I am completely green with UE4's source code, so any input, experience or tips are greatly appreciated.

I also want to reach out to anyone with some experience with something similar. I want all your input and thoughts as they would help me tremendously.

Have you worked with volume rendering outside of UE4? Got any tips or thoughts? Great!
Have you worked on volumetric solutions in UE4? Awesome. Leave a comment.

This is a selfish plea for help, but I hope that someone is willing to help.

Thank you.


I have some working examples of this that I could share soon. Basically I am creating the volume texture inside of the editor and then ray marching it.

You can do everything with blueprints and materials using the custom node. If you want the experiment to scale, it will be better to do it in code though.

Technically, when you enable distance field generation in the project settings, every asset has a volume texture created for it, so it should be pretty easy to convert that into a format we can read. It is planned to make true volume textures a content browser asset type soon, which will make this even easier, but there is no reason to wait for it when you can treat any 2D texture as a 3D texture with simple flipbook math.

If you want to get something like the results from the 1st image, you need to do things like store different density for the different elements in different channels of the volume texture and then write some different shader rules for them. There is also a different approach that instead uses more of a gradient mapping approach using both the value and the gradient of the volume to map into a lookup texture of surface types. That method is a bit more complex but is potentially much more memory efficient since then you could pack unrelated volume textures into the different channels and the gradient mapping info would be relatively tiny 2d textures. Both methods have different challenges and advantages.
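As a rough sketch of that gradient-mapping lookup (SampleVolume and TransferLUT are placeholder names, not engine functions), the per-step shading could look something like this:

```hlsl
// Sketch of a gradient-based transfer function lookup. SampleVolume is assumed
// to return the scalar density in 0-1 at a UVW position inside the volume.
float3 ApplyTransferFunction(float3 uvw, float sampleOffset,
                             Texture2D TransferLUT, SamplerState LUTSampler)
{
    // Central differences give an approximate gradient of the density field.
    float dx = SampleVolume(uvw + float3(sampleOffset, 0, 0)) - SampleVolume(uvw - float3(sampleOffset, 0, 0));
    float dy = SampleVolume(uvw + float3(0, sampleOffset, 0)) - SampleVolume(uvw - float3(0, sampleOffset, 0));
    float dz = SampleVolume(uvw + float3(0, 0, sampleOffset)) - SampleVolume(uvw - float3(0, 0, sampleOffset));
    float gradientMag = length(float3(dx, dy, dz));

    float density = SampleVolume(uvw);

    // Use (value, gradient magnitude) as a UV into a small 2D lookup of surface
    // colors, so a single scalar channel can map to many different materials.
    return TransferLUT.SampleLevel(LUTSampler, float2(density, saturate(gradientMag)), 0).rgb;
}
```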

Currently the fastest method for rendering dynamic shadows for these would be to pre-bake a shadow volume texture, and the second fastest would probably be half angle slicing. Half angle slicing would be far superior though, since shadow volumes get bleeding issues on edges.
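As a rough sketch of the pre-baked shadow volume idea (SampleVolume, the light direction and the parameters are placeholders, not engine functions): for every cell you march toward the light, accumulate density, and store how much light survives.

```hlsl
// Sketch of baking a shadow/transmittance volume. For each voxel, march toward
// the light and store the remaining transmittance, so the main ray marcher only
// needs one extra texture read per step instead of a nested shadow loop.
// SampleVolume and LightDirUVW (light direction in 0-1 volume space) are assumed.
float BakeShadowForVoxel(float3 uvw, float3 LightDirUVW, int NumShadowSteps, float DensityScale)
{
    float stepSize = 1.0 / NumShadowSteps;
    float opticalDepth = 0;

    for (int i = 1; i <= NumShadowSteps; i++)
    {
        float3 samplePos = uvw + LightDirUVW * stepSize * i;

        // Stop once the sample leaves the 0-1 volume bounds.
        if (any(samplePos < 0) || any(samplePos > 1))
            break;

        opticalDepth += SampleVolume(samplePos) * stepSize;
    }

    // Beer-Lambert falloff; this value goes into the baked shadow volume.
    return exp(-opticalDepth * DensityScale);
}
```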

It might be easiest to just slice the mesh and cap the holes, I would imagine that wouldn’t be super intensive and would be simpler to set up than a volume renderer.
You could also do something with particles, but you would be limited in the number of particles so it might not look good enough.

I have mostly worked with volumetric rendering and voxels in third-party tools (such as Blender) or code that I wrote myself (C++ based). As said, getting someone to hard-code an implementation may be your fastest route to a scalable approach. A lot of the information is often stored in textures that describe properties of the volume you're trying to simulate. Density is the usual go-to variable, though there are other options depending on the kind of data you're working with and what you want visualized.

One of the things you will want to get well acquainted with, if you go the texture route, is UVW coordinates. UV coordinates map a 2D texture's pixel coordinates onto the surface of a 3D model using two coordinates, U and V. UVW takes this into the third dimension: instead of a single 2D texture (say at 1024 x 1024 resolution), we now have a 3D texture (say 1024 x 1024 x 1024) that is effectively 1024 2D images stacked along the W coordinate. If you think of it as a cube, you can slice the cube into 1024 planes, and each plane is then a traditional 2D texture. By ordering the planes along the W axis from the top of the cube to the bottom, you can recreate the 3D object from these 2D image slices. This is a method often employed in scientific visualization and 3D printing.

One way to use UVW in UE4 would be to take a 3D texture that is a gray-scale scan of the subject you want to represent volumetrically, then use blueprints, materials or hard-coded functions to spawn a voxel with a color, size and shape dependent on the scan data. This way you can use tiny 3D cubes to reconstruct something as complex as an MRI scan of the brain. How jagged it looks is controlled just as it is in 2D: the resolution of your scan data and the size of the 3D cubes (voxels) you use to build the model are all that matter at that point.

To be able to use that kind of UVW-based voxel spawning for real-time visualization, you will also want to make most of your materials unlit so they don't require lighting or dynamic shadow data, especially if you want to do cut-aways. You can also perform cut-aways with simple hit detection and an actor that acts as the cutaway object, so that voxels despawn when they hit it. This won't scale too well in blueprints, but hard-coding it could work very well as long as cut-aways of the model, and only that model, are the primary focus of your simulation at the time.

I hope that helps!

Re: darthviper107

I don't think using a mesh is what he is after. Typically volume rendering is used for things like CT scans because fibrous, thin tissue is very hard to resolve into a discrete solid surface, and clouds are not well represented by meshes either. And he mentioned this is for thesis research, so I don't think just seeing a sliced mesh will further that goal.

RE: Darthviper107
I tried using particles, but the particles were sadly not accurate enough. I imported a vector field that I created using a script, but the particles did not follow the field properly and pulsated. So I couldn’t use that to visualize path lines.
Further, those solutions will not be good enough for scientific research, so I have to find a better method with all the bells and whistles. If you know VTK, ParaView or other scientific tools, that is the kind of capability I aim to enable.

RE:
I don't understand what you mean by a volume texture. Is it a 3D scalar data set? I am mainly going to work with XDMF files.
I have used VTK to render and visualize similar things before. I even wrote my own renderer using C++ and OpenGL. My problem is how I can use the tools of UE4 to achieve such an effect. I do not know of any textures or materials that are 3D; they all have UV coordinates and not UVW. I tried a quick test where I made a bunch of 10x10x10 transparent voxel actors, but that caused the program to lag. Then I tested with 100 layers of material billboards, and that was not viable either as the frame rate tanked. Is a 3D texture something similar to layering a 2D texture?
The programs I wrote myself outside the engine run fine and render the volume at about 240 frames per second, so I feel I might be doing something wrong.

The data I work with is a 3D scalar or vector set of density/energy or vectors, respectively. These data sets come from MRI scans and flow measurements of wind and water, to mention a few examples. I am also going to make some kind of UMG user interface that allows users to create and control transfer functions, various mapping algorithms and isosurfaces. I am afraid we are not talking about quite the same thing here.

RE: Shade599
Thank you for your reply. What you wrote is what I did in my own code as well. The problem is that I also have to interpolate values that I don't have. To give a 2D example: if I have an XY coordinate system with a data point at every integer value of X and Y, I should still be able to calculate a value at <3.5, 6.6>. As such, using plain cubes is suboptimal.
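For reference, this is just bilinear interpolation in 2D (trilinear in 3D), which is also what a hardware 3D texture sampler does for free. A sketch, where GetDataPoint is a placeholder for reading the stored value at integer grid coordinates:

```hlsl
// Bilinear interpolation of a 2D grid at a fractional position like (3.5, 6.6).
// GetDataPoint is an assumed helper returning the stored value at integer coords.
// The same pattern extends to 3D (trilinear) by lerping two such results along Z.
float SampleGrid2D(float2 pos)
{
    float2 p0 = floor(pos);
    float2 f  = pos - p0;   // fractional part, e.g. (0.5, 0.6)

    float v00 = GetDataPoint(p0 + float2(0, 0));
    float v10 = GetDataPoint(p0 + float2(1, 0));
    float v01 = GetDataPoint(p0 + float2(0, 1));
    float v11 = GetDataPoint(p0 + float2(1, 1));

    return lerp(lerp(v00, v10, f.x), lerp(v01, v11, f.x), f.y);
}
```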

I tried using cubes and made an actor that consists of 10x10x10 cubes that you can cut through using a cutting plane. The cutting and everything works really well, but the frame rate dropped to 30-40 FPS. I am going to check if I forgot to make the models unlit though. Further, please read my previous comment regarding path lines. I have some problems with the engine vector fields, so I might just write a function that calculates the points myself and then spawn a particle at each point. Hopefully that will work.

A volume texture is just a 3d texture instead of a 2d texture. There is actually hardware support for 3d textures but it is not exposed to the content browser in UE4, only via code currently.

You can easily make a volume texture out of 2D textures though using a flipbook approach. Here is an example where I took a 2048x2048 texture and divided it into 12x12 frames, meaning the 3D texture has a resolution of ~170px on the XY dimensions and 144px on the Z dimension. This texture was made inside of UE4, and it should be easy to convert any data format into a volume texture. I wish there were more of an industry standard for them.

It's literally the same as a flipbook, and you sample by saying the 'Frame' is the local Z position in 0-1 space. Note that technically you don't need to make these power-of-two sized, so really I should have made the sides equal in X, Y and Z, say by doing a 1728x1728 texture that is 12x12 frames. That would give an equal 144-pixel resolution on all axes. The reason I used power of two is that I was trying to make these also support mips, which I got working up to about 3 mips, but that is another topic since most volume renderers don't use mips.
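In HLSL terms, a minimal sketch of that sampling is something like this (placeholder names, not the exact node code):

```hlsl
// Sketch of sampling a "pseudo volume" texture laid out as a flipbook atlas,
// e.g. 12x12 frames = 144 Z slices. uvw is a 0-1 position inside the volume.
float SamplePseudoVolume(Texture2D Tex, SamplerState Samp, float3 uvw, float XYFrames)
{
    float numFrames = XYFrames * XYFrames;
    float zFrame    = uvw.z * (numFrames - 1);     // continuous 'Frame' from local Z
    float frameA    = floor(zFrame);
    float frameB    = min(frameA + 1.0, numFrames - 1.0);
    float zLerp     = frac(zFrame);

    // UV inside one tile, plus the atlas offsets of the two nearest Z frames.
    float2 tileUV  = uvw.xy / XYFrames;
    float2 offsetA = float2(fmod(frameA, XYFrames), floor(frameA / XYFrames)) / XYFrames;
    float2 offsetB = float2(fmod(frameB, XYFrames), floor(frameB / XYFrames)) / XYFrames;

    // Two 2D lookups per volume sample, blended between the nearest Z frames.
    float a = Tex.SampleLevel(Samp, tileUV + offsetA, 0).r;
    float b = Tex.SampleLevel(Samp, tileUV + offsetB, 0).r;
    return lerp(a, b, zLerp);
}
```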

Ray marched, it looks like this:

You can move it all around and even go inside of it and it still looks like a true volume:


How is the performance on this? I’ve used arbitrary slice-based volumetrics for some of my experimenting but I always run into transparency overdraw performance cost limitations.

Wow, I didn’t know UE4 had that capability built in. That’s awesome, thanks for showing that!

It just depends on the number of shadow steps. If I only do ~128 density steps and maybe ~20 shadow steps it is pretty fast, but up the shadow steps to 64 and it will be slow as hell. That is because the cost is Density steps * Shadow steps, so 128x64 = 8192 lookups! And because these are pseudo volume textures, you need 2 lookups for each single lookup (to blend the 2 nearest Z frames), so the cost is really 16k texture lookups! You will need a beefy video card to do this without using really low step counts. Temporal jittering can help make up for fewer steps, but it adds motion smearing if anything else moves in front of it.

It's actually not quite as bad as it sounds, because the ray marcher quits early once density is full, only does shadow steps when there is some density to receive the light energy, and has a few other little optimizations (e.g. it only does the full number of density steps when the view ray goes corner to corner; the rest do far less). But if you add constant fog to the volume (i.e. no black pixels) it gets way slower, since then every step has density to shade. So realistically it's probably only doing maybe 15% of the total steps it would theoretically need without the optimizations.
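To make the loop structure concrete, a stripped-down version of that kind of ray marcher looks roughly like this (helper names and parameters are placeholders, not the actual material code):

```hlsl
// Stripped-down front-to-back ray march with the optimizations described above:
// quit early once opacity is (nearly) full, and only pay for shadow steps where
// there is density to receive light. SampleVolume and all parameters are assumed.
float4 RayMarchVolume(float3 entryUVW, float3 rayDirUVW, int DensitySteps,
                      int ShadowSteps, float3 LightDirUVW, float DensityScale)
{
    float  stepSize = 1.0 / DensitySteps;
    float3 pos      = entryUVW;
    float3 color    = 0;
    float  alpha    = 0;

    for (int i = 0; i < DensitySteps; i++)
    {
        float density = SampleVolume(pos) * DensityScale * stepSize;

        if (density > 0.001)
        {
            // Nested shadow march: this is the DensitySteps * ShadowSteps cost,
            // skipped entirely for empty space.
            float shadowDepth = 0;
            for (int s = 1; s <= ShadowSteps; s++)
                shadowDepth += SampleVolume(pos + LightDirUVW * stepSize * s);

            float lightEnergy = exp(-shadowDepth * DensityScale * stepSize);

            // Front-to-back compositing.
            color += lightEnergy * density * (1 - alpha);
            alpha += density * (1 - alpha);
        }

        // Early out once the ray is effectively opaque.
        if (alpha > 0.99)
            break;

        pos += rayDirUVW * stepSize;

        // Stop when the ray exits the 0-1 volume.
        if (any(pos < 0) || any(pos > 1))
            break;
    }

    return float4(color, alpha);
}
```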

Half angle slicing is so desirable because it changes the cost from O(DensitySamples * ShadowSamples) to O(DensitySamples + ShadowSamples).

FWIW, this is not built in; it's all done using the custom node, but you can do quite a bit with it.

RE:
I see what you mean now. I am mostly working with C++ as I am not very good with the Blueprint programming. I will have to read up on ray marching, but this seems very promising. Thank you very much for the pointers.

You are welcome.
FWIW, the only blueprint used here was a single blueprint node called "Draw Material to Render Target". It is in 4.13. The rest is all HLSL inside of the custom node using the rendered texture. I will try to post back some of the steps towards doing this later.

RE: Shade599
RE:

This is what I tried doing and as you can see, the frame rate is not acceptable with just 10x10x10 voxels.

Sorry for the **** quality. I just quickly recorded a bit before I go to bed. It is late.

That's pretty cool. I was also playing around with painting the volume texture in VR using a brush approach.

Not sure why the performance is so slow with so few layers; what video card do you have? If it's only 10x10 and they are two-sided materials then it should only be 20 layers or so max, but once fullscreen it could cost a bit. Also, what's the instruction count?

I have a GTX 1080 and the instruction count is 66 instructions.

This was the smallest I could make it to still respond to the cutting effect.

Hmmm that doesn’t seem too bad at all. It might be slightly faster to use an additive material instead of translucent and then multiply the emissive color by the opacity instead, but it probably is not a big difference.

Have you done 'profile gpu' to verify it was all translucency slowing it down, or checked the same scene without the effect? It may actually be related to the 1000 draw calls of the 10x10x10 meshes, which gets doubled in VR.

Also, the 'use cutting plane' option could probably be a static switch instead of an IF statement if it's something that doesn't get toggled at runtime.

BTW, 66 is the vertex shader instruction count, which won't be a big deal for such low-poly geo; 66 is nothing for a vertex shader. The vertex shader cost doesn't depend on how big the objects are on screen, just on how many verts are rendered, whereas the pixel shader cost is what scales with screen size and overdraw.

Your pixel shader cost is the 27 instructions listed. I am surprised this effect is so slow with that video card… is it that slow in regular editor without VR?

I noticed your FPS is only 45 even when not looking at the effect which indicates you don’t have your scene optimized for VR. It should be hitting 90 no problem when looking away. We are publishing some VR example stuff soon that should help.

For the plane equation, just store the distance from the origin and the normal. Then you can test which side of the plane a point is on a bit more cheaply. Also prenormalize the normal on the CPU. Then it's just: dot(normal, point) < distance. Very cheap. You can also get rid of the "use cutting plane" branch by using a zero vector as the plane normal. You can also premultiply color and alpha.
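In the custom node that is something like this (parameter names are just placeholders):

```hlsl
// Plane in general form: a prenormalized PlaneNormal plus a PlaneDistance,
// both passed in as material parameters. dot(N, P) < d keeps everything on
// one side of the plane. With PlaneNormal = (0,0,0) and a positive distance
// the test always passes, so no separate 'use cutting plane' switch is needed.
bool KeepSample(float3 worldPos, float3 PlaneNormal, float PlaneDistance)
{
    return dot(PlaneNormal, worldPos) < PlaneDistance;
}
```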

RE: Jenny Gore
I store the normal and vector from the origin. I do not see how I can prenormalize that vector that I use to calculate the dot product because I have to generate it for each point in world space on the mesh, something I do not know in advance. Further, if I use just the distance I have no way to determine where the cutting plane is in space. I just know that it is somewhere on a sphere with a radius of that length. However, your last suggestion I can try. It makes sense and is a great idea, I think. If I am missing something or misunderstood you, please correct me.

Thanks.

RE:
In the editor I get 110 FPS when looking at nothing and 45 when looking at the voxels. I get 80-90 in standalone when looking at the voxels. In VR preview I get 90 when looking away and 45 when I look at the voxels. I was recording at the time and the editor was up in the background. The GPU profile shows two major costs in my frame: HZB SetupMips Mips 1 and direct lighting. The translucency takes about a third of what those two take. I don't know what the former means though.
I have no idea how to optimize for VR. I can optimize in general, but I am not sure what to do for VR specifically.

Thanks.

You are using the plane's standard form. What I suggest is to use the plane's general form. This is usually the superior format in terms of performance (the test is just a single dot product) and storage (4 vs. 6 scalars).
Maybe this video is helpful.