I’m glad you guys like it!
@xnihil0zer0: I added support by following various examples in the engine source for filling 3D textures and exposing them to shaders. The basic idea is that I create a single huge FTexture3DRHIRef and make sure it is exposed to shaders. I'm working on expanding it to multiple (animated) volumes, but Allumette really only needed that one huge grid, which is 900x900x600 or so. The tricky part is filling the buffer. Basically, I have a plain-text file format that specifies the voxel data in raster order (and uses run-length encoding to remove empty space - see the paper). Then, for every Z slice, I build a 2D array (TArray<uint32>) and load the slice from the file into the array. Then, I update just that slice region in the 3D texture with RHIUpdateTexture3D. Sorry, this is very terse, so maybe I'll release the code at some point. The key to loading volumes as large as I did was compressing empty space and updating the 3D texture one slice at a time. Also, for controlling this, I didn't add support in the editor beyond specifying file paths for my voxel data in a config file.
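On the shader side there isn't much to it once the texture is bound; roughly something like this (a minimal sketch, and the parameter names here are just placeholders, not the actual ones I used):

Texture3D VolumeTexture;            // the 3D texture, bound once it is exposed to the shader
SamplerState VolumeTextureSampler;

// UVW is the 0-1 position inside the volume's bounding box.
float SampleDensity(float3 UVW)
{
    return VolumeTexture.SampleLevel(VolumeTextureSampler, UVW, 0).r;
}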
@, @: Right now the rays terminate at the max samples per ray, which is ~25, and in general the rays terminate early once accumulated density gets close to 1. This works for my clouds because they are so dense, and most rays terminate early except at grazing angles. Ideally, you'd define ray intervals that bound density by utilizing some depth maps for first hit, second hit, etc. It'd be a balancing act for optimal performance, and depends on the use case. Doing simple bounding box intersections with cloud elements could work, too, and would work well for smaller volumes rather than the huge cloudscape I showed. Note that when you do bounding box intersections for particularly sparse volumes, you aren't ensuring that you hit density, only that you are within a loose bounds of the density.
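As a rough sketch of what that kind of loop looks like (not the exact shader - SampleDensity() just stands for a lookup into the volume, and the numbers are illustrative):

// Fixed-step ray march with early termination once the ray is nearly opaque.
// EntryPos/ExitPos are in 0-1 volume space; MaxSteps was ~25 in my case.
float4 RayMarch(float3 EntryPos, float3 ExitPos, int MaxSteps)
{
    float3 StepVec = (ExitPos - EntryPos) / MaxSteps;
    float StepLen = length(StepVec);
    float3 Pos = EntryPos;

    float Transmittance = 1.0;
    float3 Color = 0;

    for (int i = 0; i < MaxSteps; ++i)
    {
        float Density = SampleDensity(Pos);              // placeholder volume lookup
        float Alpha = 1.0 - exp(-Density * StepLen);     // simple extinction per step
        Color += Transmittance * Alpha;                  // unlit white media for the sketch
        Transmittance *= 1.0 - Alpha;

        // Early termination: dense media kills the ray quickly.
        if (Transmittance < 0.01)
            break;

        Pos += StepVec;
    }
    return float4(Color, 1.0 - Transmittance);
}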
nice, that’s a cool technique
I found a solution for that in a custom node. In my own code, I've done it that way with two offscreen buffers, but I couldn't find a way to do that in Unreal without going into engine code.
The way I did ray termination in the custom node is to do a ray/box intersection in HLSL. It's not that expensive per pixel. You get your start and end positions on the unit cube, so you know exactly the vector you need to step through.
Speaking of that, I just recently made a Box Intersection material function that should be in for 4.14. There is also a LineBoxIntersection function in common.usf that you can use, but I found that I did not like how it was clamping the time result to be 0-1 since I prefer to have a unit vector ray cast and return time in world space.
Interestingly, even though I based this off of the code function mentioned above, doing it as material nodes saved a few instructions somehow (yay compiler magic)
It may seem a bit excessive to use the distance between entry/exit to return the distance inside (rather than just t1 - t0), but I compared the two and the compiler makes them both the same anyway, so it's just dropping out the extra instructions.
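For reference, the slab-test version of this in custom node HLSL looks roughly like the following. It's the same idea as LineBoxIntersection, not a copy of it, and it returns entry/exit times in world units for a normalized ray direction:

// Slab-method ray/box intersection. RayDir should be normalized so the returned
// times are distances in world units. Returns (t0, t1); if t1 < t0 or t1 < 0,
// the ray misses the box.
float2 RayBoxIntersect(float3 RayOrigin, float3 RayDir, float3 BoxMin, float3 BoxMax)
{
    float3 InvDir = 1.0 / RayDir;                  // component-wise reciprocal, not negation
    float3 TA = (BoxMin - RayOrigin) * InvDir;
    float3 TB = (BoxMax - RayOrigin) * InvDir;
    float3 TMin = min(TA, TB);
    float3 TMax = max(TA, TB);
    float T0 = max(max(TMin.x, TMin.y), TMin.z);   // latest entry across the three slabs
    float T1 = min(min(TMax.x, TMax.y), TMax.z);   // earliest exit across the three slabs
    return float2(T0, T1);
}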
@dpenney, that is pretty cool that you were able to write to the volume textures in code. For those curious how to do that without any code and using a 2d 'pseudo volume texture', all you need is the 4.13 preview build and the "draw material to render target" blueprint node. Then you need a material that samples the 3d space like a flipbook, like so:
You would then just hook up the result of that snippet as the Position to the noise node or some other 3d position sampling function.
0-1 UVs are used since the "Draw Material to Render Target" node uses 0-1 UVs for the canvas material.
"Num Cells Per Side" is the number of frames on one side of the flipbook. Ideally that value will be the cube root of your texture dimension.
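In custom node terms, the encode side of that snippet boils down to something like this (just a sketch of the idea, not the exact node graph):

// Convert the 0-1 canvas UV of the render target into a 0-1 3D position,
// treating the target as a flipbook of Z slices laid out in a grid.
// NumCellsPerSide = number of flipbook frames along one side of the atlas.
float3 CanvasUVToVolumePos(float2 UV, float NumCellsPerSide)
{
    // Which cell of the flipbook are we in? (2D -> 1D index)
    float2 Cell = floor(UV * NumCellsPerSide);
    float SliceIndex = Cell.y * NumCellsPerSide + Cell.x;

    // XY inside the cell becomes the XY of the volume position.
    float2 XY = frac(UV * NumCellsPerSide);

    // The cell index becomes Z, sampled at the centre of the slice.
    float NumSlices = NumCellsPerSide * NumCellsPerSide;
    float Z = (SliceIndex + 0.5) / NumSlices;

    return float3(XY, Z);
}

The returned float3 is what you would feed into the noise node (or any other 3D position function) as the Position.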
Then to read that texture as a 3d texture you simply do a “1d to 2d index” conversion using the local Z position as the index. You can also just use the Flipbook material function and use Z as the phase. Note that “1d to 2d index” is the exact opposite of the “2d to 1d index” used to encode. Math!
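And the read side, again only as a sketch (the Flipbook material function does essentially this for you; filtering between Z slices is left out to keep it short):

// Sample a pseudo volume texture: convert a 0-1 volume position into the
// 2D UV of the corresponding flipbook cell (1D -> 2D index).
float PseudoVolumeSample(Texture2D Tex, SamplerState TexSampler,
                         float3 UVW, float NumCellsPerSide)
{
    float NumSlices = NumCellsPerSide * NumCellsPerSide;
    float SliceIndex = floor(UVW.z * NumSlices);            // which Z slice

    // 1D -> 2D index of the cell inside the atlas.
    float2 Cell = float2(fmod(SliceIndex, NumCellsPerSide),
                         floor(SliceIndex / NumCellsPerSide));

    // Offset into that cell with the XY part of the position.
    float2 UV = (Cell + UVW.xy) / NumCellsPerSide;
    return Tex.SampleLevel(TexSampler, UV, 0).r;
}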
So I've got the blueprint volume slicing code converted from C++. It's slow in editor. Nativized, I get much better performance: 128 slices go from 90 ms to 8 ms of blueprint time in stat Game. I'm thinking raymarching will be the way to go, but I'm hoping for nativized assets in editor to become a feature. A 'bake' button which compiles and hot reloads it would be great.
I think you should try to do some experiments with straight ray marching, just for comparison. In my tests, I got nicely convergent renders with a 970 + Vive @ ~1.2ms GPU time, and some overhead for rendering the cloud shell into custom depth (.3-.4ms). There isn’t much CPU overhead. This is with 25 samples per ray and my dense clouds with a 3d texture that is 950x950x600. If you weren’t running in VR, or didn’t have a massive amount of other stuff going on like we did, you could really bump up the sample count, and add a ton more complexity.
Yeah I think you are right. Once I've finished up the slicing BP I'll get to work on the precomputed shadows and add that into my existing raymarcher. Having the shadows baked from Houdini would make it super fast. Also would love to see the 3D texture code. 25 samples is nothing! I suppose because the volume is so dense you can get away with fewer steps, since rays are more likely to exit early due to density. Now I just need to magic up some spare time.
Finally got the slicing actually finished. I’d had a small bug in my bubble sort to fix the vertex winding which was messing up the polygon rendering.
One thing to note is that it has exactly the same bug as my c++ code where the translucent shadow volumes rotate as the camera rotates. I’ll probably be able to give this BP to @danielW and hope he can debug it.
Now I've exorcised this demon, I can go to bed and sleep soundly before tackling ray marching tomorrow.
Take care all. Nighty night from NZ.
Dan
OK, so using a ray/box intersection for the entry/exit point generation only works as long as you don't have other scene geometry intersecting with it. Also, as soon as you want to use custom bounding geometry (like the clouds; often a bounding octree is used as well), this will not work anymore. In order to use it for my application (visualization of medical volumes, which can be quite large), I probably need both things.
So my outline for achieving that would be:
- Render the scene geometry as usual
- Render the back faces of my bounding geometry fully transparent but force a depth write (is this possible?? otherwise the background behind the volume will not show correctly…)
- Render the front faces into custom depth only
- In a post-processing pass, read depth and custom depth, unproject into world space and then into texture UVW of the volume. These two points then give a ray start and end position which can be raymarched (rough sketch below).
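For that last step, what I have in mind is roughly the following (a sketch only; the input names are placeholders for values I would wire into a post-process custom node, and the volume is assumed axis aligned):

// CameraPos      - camera world position
// CamForward     - normalized camera forward vector
// PixelRayDir    - normalized world-space ray direction through this pixel
// SceneDepthVal  - scene depth (opaque geometry / back faces)
// CustomDepthVal - custom depth (front faces of the bounding geometry)
// VolumeOrigin, VolumeSize - world-space bounds of the volume
//
// Both depths are distances along CamForward, so divide by the cosine to get
// the distance along the pixel ray before unprojecting.
float DepthToRayDist(float Depth, float3 PixelRayDir, float3 CamForward)
{
    return Depth / dot(PixelRayDir, CamForward);
}

float3 WorldToVolumeUVW(float3 WorldPos, float3 VolumeOrigin, float3 VolumeSize)
{
    return (WorldPos - VolumeOrigin) / VolumeSize;   // 0-1 inside the box
}

// Ray start from custom depth, ray end from scene depth:
// float3 RayStart = WorldToVolumeUVW(CameraPos + PixelRayDir * DepthToRayDist(CustomDepthVal, PixelRayDir, CamForward), VolumeOrigin, VolumeSize);
// float3 RayEnd   = WorldToVolumeUVW(CameraPos + PixelRayDir * DepthToRayDist(SceneDepthVal,  PixelRayDir, CamForward), VolumeOrigin, VolumeSize);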
Since I am not very experienced with UE, how do you guys judge the feasibility of this approach or is there another way that I am missing?
As soon as we get a forward renderer in UE, this of course will be much easier to implement…
@ , @dpenney Thanks for the info. Indeed, pre-calculating entry and exit points for a box proves to be considerably faster than having only an entry point and checking every iteration whether the sample is still within the volume. The camera-inside-the-volume thingy also works perfectly for me now. If my scene had static lighting, I would probably have all the shading pre-baked, but I'd like dynamic lighting from one directional source, and probably ambient light.
I only got introduced to volumetric rendering a few weeks ago. Do I understand the general concept for directional lighting correctly?
For every sample, I perform an additional loop, ray marching from the sample point towards the light and accumulating opacity. If opacity reaches 1 or I exit the volume, I break the loop. Should I also pre-calculate where the ray exits the box, like in the main loop, or would checking if the sample is still within the volume be better here?
And what do I do with the shadow value to make it look right? Alpha blending the shadow samples kinda makes shading view-dependent. The further I go into the volume, the higher shadow density I get.
Lastly, Is there a viable option to account for ambient lighting without baked data?
@dokipen: How are you baking your volumes out now? Given you are at Weta, I'll assume you are pretty good with Houdini. Baking lighting from Houdini isn't too hard, regardless of whether you are embedding your volume in a 2D texture or exporting some custom 3D one. Modify the SHOP to export lighting calculations to a point cloud (.pc) file, then render from multiple cameras and combine the point clouds into one big VDB. After you have your volume with RGB lighting, you can export it either as a texture atlas with COPs or as a 3D texture if you have a custom exporter. For static lighting this is really great, because you can then use Houdini lights! I encoded an environment light, directional key light, and a scattering pass this way.
@TheHugeManatee: You are on the right track for sure. You should check out my previous posts since what you’ve mentioned is very close to what I do. I used custom depth to encode the bounding geometry, then snap ray start locations to that for the clouds I mentioned before. It worked great! Also, by looking at scene depth, you can account for occluded areas of the volume nicely. As far as the exit point generation, I don’t do it because my volumes are very dense and I am ok with a few artifacts to save some computation. That being said, I am planning on implementing roughly what you mentioned for a back face depth map, but it’ll require some engine changes. Ideally, for my case, you wouldn’t stop with just 1 {front_face_0, back_face_0} pair, but you’d define more intervals along a given ray using depth peeling. It could get expensive.
@: I think you have the general idea of volumetric lighting from directional lights. As far as precomputing ray exit points goes, that really comes down to implementation details for your specific use. I'd recommend experimenting. As far as integration goes, look at this:
Section 3.1 talks about lighting, and it gives pseudocode for a ray marching loop.
What do you mean ambient lighting, exactly? Like bounce lighting from geometry? Multiple scattering inside the volume? Environment lights? Those are tricky problems to solve well for offline renderers, so real time solutions are a bit absent. You could probably come up with some cheap hacks, though keep in mind if you want to hit a framerate, you don’t have that many volume samples per frame to play with.
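As a rough sketch of the kind of lit loop that section describes (one directional light, no phase function; SampleDensity() is just a placeholder volume lookup and the names are illustrative). The important bit for your question is that the shadow term only scales how much light reaches each sample; it is not alpha blended along the view ray, which is why it shouldn't become view dependent:

// Lit ray march: for each density sample, march a second short ray towards the
// light and accumulate optical depth to get a shadow term.
// LightDir points towards the light; all positions are in 0-1 volume space.
float4 LitRayMarch(float3 EntryPos, float3 ExitPos, float3 LightDir,
                   int MaxSteps, int MaxShadowSteps, float ShadowStepLen)
{
    float3 StepVec = (ExitPos - EntryPos) / MaxSteps;
    float StepLen = length(StepVec);
    float3 Pos = EntryPos;

    float Transmittance = 1.0;
    float3 Color = 0;

    for (int i = 0; i < MaxSteps; ++i)
    {
        float Density = SampleDensity(Pos);
        if (Density > 0)
        {
            // Shadow march: accumulate optical depth towards the light.
            float ShadowOpticalDepth = 0;
            float3 ShadowPos = Pos;
            for (int j = 0; j < MaxShadowSteps; ++j)
            {
                ShadowPos += LightDir * ShadowStepLen;
                // Stop once we leave the 0-1 volume.
                if (any(ShadowPos < 0) || any(ShadowPos > 1))
                    break;
                ShadowOpticalDepth += SampleDensity(ShadowPos) * ShadowStepLen;
            }
            float LightEnergy = exp(-ShadowOpticalDepth);   // shadow term for this sample

            float Alpha = 1.0 - exp(-Density * StepLen);
            Color += Transmittance * Alpha * LightEnergy;   // scale the sample, don't blend the shadow
            Transmittance *= 1.0 - Alpha;
            if (Transmittance < 0.01)
                break;
        }
        Pos += StepVec;
    }
    return float4(Color, 1.0 - Transmittance);
}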
I am not sure I understand the problem with the shadow volume rotating with the camera. Are you making your own shadow volume, or are you somehow getting your slices to cast shadows using the regular translucency as the sheets? If so, I would kind of expect that, since you are slicing the volume based on viewing angle, which will change how the slices align from the light's perspective. I.e. if your light angle is at 90 degrees to the view angle, you may get very thin or disappearing shadows since they will be invisibly thin from that view. Half-angle slicing also addresses this. You could probably do half-angle slicing just as an angular setting to fix that, without actually tackling the more complex light accumulation method it usually refers to.
It's when using the translucent shadows.
Yeah, I get how that works. The issue here is definitely not the angle of the slices to the light, as here the slices are directly aligned to the light. That was actually what I thought would be happening, but it can't be in this case. This behaviour is exactly what happened with the C++ version. I tried fixing the light, the geometry, the slices and all combinations, and ended up thinking I must have been doing something wrong with my bounds or tangents, so I put the C++ on the back burner. Now I've done it in BP I can be tentatively confident (!?) that I'm not doing anything to cause it (assuming the procedural mesh component does the tangents and bounds correctly).
DanielW said on Twitter a while ago that he thinks it's a bug. Actually, I might try volumetric translucent shadows on a procedural box generated from BP using the utility nodes and see if the same thing happens.
here is the original answer hub post…
https://answers.unrealengine.com/questions/217866/volumetric-shadows-wrong-when-rotating-actor.html
Heya
I originally used Maya to bake out a fake Maya Fluids cloud by rendering a sequence from an orthographic camera with the clipping planes animated. Pretty simple. Then it went through a texture packer to go into a 2D atlas.
Regarding Houdini, believe it or not I am still quite 'un-seasoned' with it. It is used here, but because we have in-house tools with Maya as a core application, it's rare that I've used it on a job in the last 5 years. It's something I'm actively trying to rectify though!
I'm fully aware that this would be amazingly cool to do in Houdini, and even exporting the 2D atlas could be done there too, with easy control over the number of slices and the resolution.
Assuming I will get to grips with exporting volumes from Houdini within the year (on my big list of things to do), I will also investigate sparse volumes using a simple octree-in-a-texture kind of thing. Should be able to pack in more resolution that way. The GVDB stuff I saw recently, which reads VDBs on the graphics card, looks amazing.
Hmmm, I thought you were slicing based on camera angle for the density? Are you only slicing for the light direction or are you doing both?
Without seeing how all the bits work, it's hard to figure out the problem. It could just be a bug with lighting, but I feel like it could be something more basic (purely guessing and going with my gut here).
I.e. some of this stuff is tricky. The camera position becomes the light position during the lighting pass, btw. In the past I have leveraged that knowledge to fix material issues similar to this (i.e. volume billboard stuff often gets similar issues).
RE:
First off, I wanted to say that I am using the 3D texture that you posted for testing and learning, as I know it is right when it looks like your post. I hope you don't mind. Further, I am unable to understand one of your snippets. I will explain below.
For everyone:
I have been trying to understand and create a prototype following the examples and posts here in this thread and some of the resources I have found online.
What I got so far is this:
Using 10 of these with manually input constants ranging from 0.1 to 1.0 gives me this:
This is just a texture applied to a cube, so you can not go inside it. It is also orthographic right now, with no concept of a camera, so all faces are the same. Once I fix that I might also fix the going-inside problem simply by making it two-sided.
Anyways, I have 2 major problems (for now).
First, how can I loop through all pixels in a shader? As far as I understand it, in volumetric rendering you calculate a ray for every pixel and you blend the interpolated value of each point between the volume bounds. I can send in the camera position relative to the box and the view direction using parameter sets. However, I have no idea how I can loop through all x*y pixels of the camera to calculate the ray direction which, in turn, can be used to calculate the entry and exit points of the volume for that ray. I might be misunderstanding something though.
Second, I am unable to understand the logic of the snippet below (and how to use it because of that):
I calculated one example ray as follows:
I get a lot of infinite values. The inverted ray direction does not make sense to me. Should it not be a multiply by -1 instead? However, then both t0 and t1 become 0.
Is this because I chose a specific case? The Box Min and Box Max are two vectors defining the volume bounds, right?
By now I am just rambling so I will end it here, but my confusion is too **** high. :'^)
Slicing to camera is the goal but I have it disabled. In this case I have it fixed to the light for testing as it allows me to pan around and check the mesh.
So here the light will be the camera during the lighting pass. This is where I think the bug might be (like an odd matrix transformation somewhere). I did read somewhere on AnswerHub about a bug where translucency shadow volumes were not using the light as the camera during the shadow pass, but I thought it was fixed.
Here is that thread I was talking about…
Post 7 mentions an InvViewMatrix bug. No idea if it is affecting me here. The fact that it is happening on my mesh that isn't moving is what makes me think something is going on outside my control.
Apart from that, I had some ideas for performance in BP with slices and thought that I could update the slices in sections over a few frames. The camera is unlikely to move that quickly, so I can update a few slices at a time. Not sure how that affects rendering order for translucency (I'm hoping that the sections maintain the order they were created in).
Hmm the function should work fine. It is a literal copy of the function “LineBoxIntersection” found in common.usf.
The min/max operations should be removing the 0 and inf values. I tested the function by using sphere masks in 3D space, which would show if either intersection point was wrong. I also restricted it to 2D by making sure the ray direction has no Y component, and it still worked fine.
Maybe try applying a small offset so the ray origin isn’t right on the edge of the box to start with.
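To make the reciprocal-vs-negation point concrete, here is one hand-worked case with purely illustrative numbers, for a unit box with BoxMin = (0,0,0) and BoxMax = (1,1,1):

// RayOrigin = (-1, 0.5, 0.5), RayDir = (1, 0, 0) (already normalized)
// InvDir = 1 / RayDir = (1, +inf, +inf)   <- component-wise reciprocal, not a multiply by -1
// TA = (BoxMin - RayOrigin) * InvDir = (1, -inf, -inf)
// TB = (BoxMax - RayOrigin) * InvDir = (2, +inf, +inf)
// T0 = max of the per-component mins = max(1, -inf, -inf) = 1   -> entry at x = 0
// T1 = min of the per-component maxes = min(2, +inf, +inf) = 2  -> exit at x = 1

The infinities fall out through the min/max exactly as described above. The only real trouble case is when the ray origin sits exactly on a slab plane while that direction component is 0 (0 * inf = NaN), which is why the small offset helps.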