Your thoughts on and comments to Volume Rendering in Unreal Engine 4.

    #31
    Originally posted by RyanB
    That should work. You are basically talking about flipping the shadow volume behavior. The one downside to that method is it will require another large render target to store the density slices. It will have to be the same size as the original volume texture to avoid losing resolution. Also you would require a separate sliced volume texture for each instance in the world so it wouldn't be as flexible as regular ray marching. But the good news is you will be able to compute the shadows in parallel instead of as nested steps and it should avoid the shadow volume edge bleeding artifacts.

    Layering your slices will bring back some of the cost as overdraw, so it is hard to predict exactly how that might perform compared to regular ray marching. It won't be as fast as half-angle slicing, though, because it is still not sharing the shadow samples between slices, which means the deeper samples will cost more than the ones near the edge of the volume, and more repeated steps are being taken.

    I think shadow volumes will be a bit easier to start with so I may try an experiment with them soon. Basically you just precompute the light energy received for each voxel, which is costly once, but from then on it's just single-sample reads to get it.
    Yeah, you are right about the cost. As I was going to sleep last night it dawned on me that even though I would be splitting the density into the geometry slices and ray tracing the shadows, it still has to shade each pixel for each slice, which is a large surface area versus doing a raymarch per pixel on a bounding volume. The cost is probably the same.

    I wasn't even thinking about shadow volumes, as I have no idea how to do that in the engine. It probably involves engine code modifications? I was trying to think of ways to use the new DrawMaterialToRenderTarget node as a way to do this. Does that issue a command to be queued on the GPU? Is that also what the 2D canvas render capturing does (I guess you figured out that stuff from the flipbook/imposter BP tools you made)? If those BP nodes just issue a command and you can exclude/include certain actors, would proper half-angle slicing work by doing this...

    - create a mesh section for one slice -> issues a command?
    - 2D render to a render target from the light angle -> issues a command?
    - render the next slice using the 2D texture, ditto

    Do that for all slices, etc. Do all the commands then get run on the GPU in that order for one render frame?
    Visual Effects Artist, Weta Digital, Wellington New Zealand
    BLOG www.danielelliott.co.uk
    @danielelliott3d https://twitter.com/danielelliott3d
    Unreal Engine and VFX Tutorials https://www.youtube.com/user/DokipenTechTutorials
    2015 Showreel: https://vimeo.com/116917817



      #32
      ooh [MENTION=3692]RyanB[/MENTION], I think DrawMaterialToRenderTarget could work for precomputing shadow volumes. The material that drew the unlit quad into the render target would do the tracing (with the light direction as a parameter), and then the actual volume material reads that in during the raymarch. I think that's what you are getting at, right?
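      A minimal sketch of what that bake material's custom node could look like (all names here are assumed: SampleDensity stands in for a flipbook-style pseudo-volume lookup, and LightDir, NumSteps and Extinction would be material parameters):

      [CODE]
// Sketch only: march from this voxel toward the light, accumulating occlusion.
// UVW is this voxel's 0-1 position, reconstructed from the render target UV.
float transmittance = 1.0;
float stepSize = 1.0 / NumSteps;
float3 pos = UVW;
for (int i = 0; i < NumSteps; i++)
{
    pos += LightDir * stepSize;               // step toward the light
    if (any(pos < 0.0) || any(pos > 1.0))
        break;                                // left the unit volume, no more occlusion
    float density = SampleDensity(pos);       // assumed pseudo-volume lookup helper
    transmittance *= exp(-density * Extinction * stepSize);
}
return transmittance;                         // per-voxel light energy, read during the raymarch
      [/CODE]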


        #33
        We could then do two render targets for odd and even voxels (red/black, similar to Gauss-Seidel red-black solvers) on alternate frames and blend between frames to get temporal sampling with a 1-2 frame delay. Half the cost again (with twice the render targets).


          #34
          I think I took a pretty different approach from you guys, and I wanted to share the rough outline of my process and some images. The basic idea is that I utilize 3D textures with my own loader and ray march them in a post-process shader.

          1) Create a big OpenVDB grid in Houdini
          2) Light it in Houdini and bake the color into RGB. I did channel lighting (r = key light, g = scattering, b = environment light)
          3) Export it as a simple format in dense-grid fashion
          4) Modify the engine to load it as a 3D texture
          5) Ray march the volume in a post-process shader which samples the texture in a few steps:
          - 1 - Initialize ray start positions according to the shell of the cloud to avoid empty space
          [Image: shell_cloud.jpg]
          - 2 - Ray march and sample the RGBA texture. RGB holds the 3 baked-in lighting passes mentioned in (2); A is the density
          [Image: rgb_lit_cloud.jpg]
          - 3 - Remap lighting to give a nicely lit appearance
          [Image: lit_cloud.jpg]

          There are a lot more details in the paper I mentioned, but I got this running nicely at 90 fps on a Vive/Rift and 60 fps on PSVR. By doing the RGB channel lighting in Houdini, I was able to get a bunch of lighting conditions, like at sunset. I also played with some hacks for dynamic lighting. Let me know if you have any questions.
          [Image: sunset3.jpg]
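          For anyone trying to reproduce the general idea, the accumulation loop in sub-step 2 might look roughly like this sketch (not the paper's exact code; VolumeTex, rayPos, rayStep, NumSteps and DensityScale are stand-in names):

          [CODE]
// Front-to-back compositing sketch; rgb = baked lighting, a = density.
// rayPos starts on the cloud shell (the empty-space skip from sub-step 1).
float4 accum = float4(0, 0, 0, 0);
for (int i = 0; i < NumSteps; i++)
{
    float4 s = VolumeTex.SampleLevel(VolumeSampler, rayPos, 0);
    float a = saturate(s.a * DensityScale);    // density -> per-step opacity
    accum.rgb += (1.0 - accum.a) * s.rgb * a;  // add light not yet occluded
    accum.a   += (1.0 - accum.a) * a;
    if (accum.a > 0.99)
        break;                                 // early out once opacity saturates
    rayPos += rayStep;
}
// accum.rgb then goes through the remap (sub-step 3) that mixes the baked passes
          [/CODE]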
          Devon


            #35
            Originally posted by dpenney
            I haven't read through all the replies here, but I did implement volume rendering in UE for our VR experience, Allumette. I wrote a shader that ray marched some voxel grids I exported from Houdini. I baked in the lighting, so there are no shadow rays to march, which made it feasible. The cloudscape was really really big so I had to use some tricks with empty space traversal in order to get it running at framerate with a reasonable number of steps. I presented this at DigiPro this year:
            http://dl.acm.org/citation.cfm?id=2947699

            If you can't get access to that paper, send me a message, and I can try to get you a preprint of the paper.

            Also, if you are interested in volume rendering as a topic, I recommend checking out some of the notes from these SIGGRAPH courses (I helped out the first year in 2010).
            http://magnuswrenninge.com/productionvolumerendering

            Also, Magnus Wrenninge (who helped organize those courses) wrote a book on volume rendering that has a ton of good info:
            https://www.crcpress.com/Production-.../9781568817248
            Thank you so much for this. This will probably help me get to grips with volume rendering. I am new to the techniques and concepts related to volume rendering, so I appreciate resources like this. The course notes are huge, so thank you for those as well.

            BTW, I am able to open and read the paper you wrote, but I am not able to download or print it. It is a short paper, so it does not matter much, but I thought you might want to know. Maybe I can download it if I am on my university's network. Will update after I have tested that.
            Last edited by NoobsDeSroobs; 08-08-2016, 09:45 PM.



              #36
              [MENTION=121013]dpenney[/MENTION] Any chance you'd be willing to share how you added 3D texture support to the engine?



                #37
                Holy wow, I am completely blown away (cloud pun intended) by all of this...
                This is so far above my skill level, but I'm bookmarking this thread and will read each update with intent.

                Visit my portfolio: jhgrace.com



                  #38
                  [MENTION=121013]dpenney[/MENTION] These are very impressive results, thanks for sharing!

                  I am curious how you implemented termination of your rays. Usually this is done by rendering the back faces of the bounding geometry into a separate render target in a separate pass, but so far I have not seen a way to define custom render passes in UE unless you use a workaround like a Scene Capture 2D attached to the main camera. From the figures in your paper it looks like you only terminate when the ray opacity is saturated, but at the fringes of your clouds wouldn't those rays march until the end of your scene?

                  Cheers,
                  J



                    #39
                    [MENTION=121013]dpenney[/MENTION]
                    Very good looking stuff.

                    [MENTION=3692]RyanB[/MENTION]
                    You can move it all around and even go inside of it and it still looks like a true volume:
                    Ryan, would you mind shedding some light on handling the camera being inside the volume?



                      #40
                      Great results, dpenney! That is similar to how I did the metaballs for the Protostar demo, by starting the raytrace with proxy geometry that was a bit larger than the actual spheres.


                      Re: Deathrey, basically you just need to use inverted polygons and solve a ray equation for where the box would have intersected the volume. Then you can precalculate the number of steps through the volume and remove all the math and branching that ensures the ray stays within the volume (you don't need to check every iteration when you know the ray will stay within bounds, thanks to precalculating the ray length and step count). Doing that alone saved almost 30% of the total time in my tests.
                      This is simple for things like boxes or cylinders, but not as simple for dpenney's method of arbitrary shapes. You could perhaps try to fit a box tightly around each cloud.
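                      As a rough sketch of the precalculated-step idea (assumed names; t0 and t1 would come from a box intersection like the one discussed further down the thread):

                      [CODE]
// t0/t1 are the entry/exit distances along a unit-length rayDir, found up front,
// so the step count is fixed and no per-iteration bounds check is needed.
float  marchLen = t1 - t0;
int    numSteps = (int)ceil(marchLen / StepSize);
float3 pos      = rayOrigin + rayDir * t0;
float3 stepVec  = rayDir * (marchLen / numSteps);
float  density  = 0.0;
for (int i = 0; i < numSteps; i++)
{
    density += SampleDensity(pos);   // assumed volume lookup helper
    pos += stepVec;                  // guaranteed to stay inside the volume
}
                      [/CODE]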

                      Termination of rays can occur any time accumulated density goes over a certain threshold (which is the same as transmittance going under some threshold).
                      Ryan Brucks
                      Principal Technical Artist, Epic Games



                        #41
                        I'm glad you guys like it!
                        [MENTION=27525]xnihil0zer0[/MENTION]: I added support by following various examples in the engine source of filling 3D textures and exposing them to shaders. The basic idea is that I create a single huge FTexture3DRHIRef and make sure it is exposed to shaders. I'm working on expanding it to multiple (animated) volumes, but Allumette really only needed that one huge grid, which is 900x900x600 or so.

                        The tricky part is filling the buffer. Basically, I have a file format stored as plain text which specifies the voxel data in raster order (and uses run-length encoding to remove empty space; see the paper). Then, for every z slice, I build a 2D array (TArray<uint32>) and load the slice from the file into the array. Then I update just that slice region in the 3D texture with RHIUpdateTexture3D. Sorry, this is very terse, so maybe I'll release the code at some point. The key components to loading volumes as large as I did were compressing empty space and updating the 3D texture one slice at a time. Also, for controlling this, I didn't add support in the editor beyond specifying file paths for my voxel data in a config file.
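                        The shape of the upload loop is roughly the following sketch (UE4-era RHI calls, not the shipping Allumette code; LoadSliceFromFile is a hypothetical stand-in for decoding one slice of the run-length-encoded text format, and the updates need to happen on the rendering thread):

                        [CODE]
// Create the volume texture once, then fill it one z slice at a time.
FRHIResourceCreateInfo CreateInfo;
FTexture3DRHIRef VolumeTex = RHICreateTexture3D(
    SizeX, SizeY, SizeZ, PF_B8G8R8A8, /*NumMips=*/1,
    TexCreate_ShaderResource, CreateInfo);

TArray<uint32> Slice;                       // one z slice of BGRA8 voxels
Slice.SetNumUninitialized(SizeX * SizeY);

for (uint32 z = 0; z < SizeZ; ++z)
{
    LoadSliceFromFile(Reader, z, Slice);    // hypothetical: decode one RLE slice
    FUpdateTextureRegion3D Region(0, 0, z,  // dest x/y/z
                                  0, 0, 0,  // src x/y/z
                                  SizeX, SizeY, 1);
    RHIUpdateTexture3D(VolumeTex, /*MipIndex=*/0, Region,
        /*SourceRowPitch=*/SizeX * sizeof(uint32),
        /*SourceDepthPitch=*/SizeX * SizeY * sizeof(uint32),
        reinterpret_cast<const uint8*>(Slice.GetData()));
}
                        [/CODE]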
                        [MENTION=3692]RyanB[/MENTION], [MENTION=146056]Deathrey[/MENTION]: Right now the rays terminate at the max samples per ray, which is ~25, and in general the rays terminate early once density gets close to 1. This works for my clouds because they are so dense, and most rays terminate early except at grazing angles. Ideally, you'd define ray intervals that bound density by utilizing some depth maps for first hit, second hit, etc. It'd be a balancing act for optimal performance, and it depends on the use case. Doing simple bounding box intersections with cloud elements could work too, and would work well for smaller volumes rather than the huge cloudscape I showed. Note that when you do bounding box intersections for particularly sparse volumes, you aren't ensuring that you hit density, only that you are within loose bounds of the density.



                          #42
                          Nice, that's a cool technique.



                            #43
                            Originally posted by TheHugeManatee
                            [MENTION=121013]dpenney[/MENTION] These are very impressive results, thanks for sharing!

                            I am curious how you implemented termination of your rays. Usually this is done by rendering the back faces of the bounding geometry into a separate render target in a separate pass, but so far I have not seen a way to define custom render passes in UE unless you use a workaround like a Scene Capture 2D attached to the main camera. From the figures in your paper it looks like you only terminate when the ray opacity is saturated, but at the fringes of your clouds wouldn't those rays march until the end of your scene?

                            Cheers,
                            J
                            I found a solution for that in a custom node. In my own code outside Unreal, I've done it that way with two offscreen buffers, but I couldn't find a way to do that in Unreal without going into engine code.
                            The way I did ray termination in the custom node is to do a ray/box intersection in HLSL. It's not that expensive per pixel. You get your start and end positions on the unit cube, so you know exactly the vector you need to step through.
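                            For the unit cube this is just a slab test; a minimal sketch of the custom node body (assuming rayOrigin and rayDir are in the volume's 0-1 local space, with rayDir normalized):

                            [CODE]
// Standard slab test against the unit cube [0,1]^3.
float3 invDir = 1.0 / rayDir;
float3 tBot = (0.0 - rayOrigin) * invDir;        // distances to the three 0 faces
float3 tTop = (1.0 - rayOrigin) * invDir;        // distances to the three 1 faces
float3 tMin3 = min(tBot, tTop);
float3 tMax3 = max(tBot, tTop);
float t0 = max(max(tMin3.x, tMin3.y), tMin3.z);  // entry distance
float t1 = min(min(tMax3.x, tMax3.y), tMax3.z);  // exit distance
t0 = max(t0, 0.0);                               // camera may already be inside
// If t1 <= t0 the ray misses; otherwise march from rayOrigin + rayDir * t0
// through a total distance of (t1 - t0).
                            [/CODE]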


                              #44
                              Speaking of that, I just recently made a Box Intersection material function that should be in for 4.14. There is also a LineBoxIntersection function in common.usf that you can use, but I found that I did not like how it clamps the time result to 0-1, since I prefer to cast a unit-vector ray and return time in world space.

                              Interestingly, even though I based this on the code function mentioned above, doing it as material nodes saved a few instructions somehow (yay, compiler magic).

                              [Image: boxintersection.PNG]

                              It may seem a bit excessive to use the distance between the entry/exit points to return the distance inside (rather than just t1 - t0), but I compared the two and the compiler makes them both the same anyway, so it's just dropping out the extra instructions.

                              [MENTION=121013]dpenney[/MENTION], that is pretty cool that you were able to write to the volume textures in code. For those curious how to do that without any code, using a 2D 'pseudo volume texture', all you need is the 4.13 preview build and the "Draw Material to Render Target" blueprint node. Then you need a material that samples the 3D space like a flipbook, like so:

                              [Image: 3dbake.png]

                              You would then just hook up the result of that snippet as the Position input to the noise node or some other 3D position sampling function.

                              0-1 UVs are used since the "Draw Material to Render Target" node uses 0-1 UVs for the canvas material.

                              "Num Cells Per Side" is the number of frames on one side of the flipbook. Ideally that value will be the cuberoot of your texture dimension.


                              Then to read that texture as a 3D texture, you simply do a "1D to 2D index" conversion using the local Z position as the index. You can also just use the Flipbook material function and use Z as the phase. Note that "1D to 2D index" is the exact opposite of the "2D to 1D index" used to encode. Math!
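                              In shader terms the read side is roughly this sketch (assumed names, mirroring the Flipbook function rather than the exact material nodes; N is "Num Cells Per Side" and Tex is the 2D pseudo volume texture):

                              [CODE]
// Map a 0-1 UVW volume position to a 2D flipbook UV; there are N*N slices.
float  slice = UVW.z * N * N;               // fractional slice index (the "phase")
float  frame = min(floor(slice), N * N - 1); // nearest lower slice, clamped so
                                             // UVW.z = 1 stays in range; sampling
                                             // two slices and lerping filters in Z
float2 cell  = float2(fmod(frame, N),        // 1D index -> 2D cell in the flipbook
                      floor(frame / N));
float2 uv    = (cell + UVW.xy) / N;          // position inside the cell, rescaled
return Tex.SampleLevel(TexSampler, uv, 0);
                              [/CODE]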
                              Last edited by RyanB; 08-09-2016, 05:36 PM.


                                #45
                                So I've got the Blueprint volume slicing code converted from C++. It's slow in the editor; nativized, I get much better performance: 128 slices go from 90 ms to 8 ms of Blueprint time in stat Game. I'm thinking raymarching will be the way to go, but I'm hoping for nativized assets in the editor to become a feature. A 'bake' button which compiles and hot-reloads would be great.
