Your thoughts on and comments about Volume Rendering in Unreal Engine 4.

    #46
    I think you should try to do some experiments with straight ray marching, just for comparison. In my tests, I got nicely convergent renders with a 970 + Vive @ ~1.2ms GPU time, and some overhead for rendering the cloud shell into custom depth (.3-.4ms). There isn't much CPU overhead. This is with 25 samples per ray and my dense clouds with a 3d texture that is 950x950x600. If you weren't running in VR, or didn't have a massive amount of other stuff going on like we did, you could really bump up the sample count, and add a ton more complexity.
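For reference, a straight front-to-back ray march of the kind described above can be sketched like this (illustrative CPU-side Python, not the actual shader; the white emission and the 1% early-exit threshold are assumptions):

```python
import math

def ray_march(sample_density, ray_origin, ray_dir, t_near, t_far, num_samples=25):
    """Front-to-back compositing of a density volume along one ray."""
    step = (t_far - t_near) / num_samples
    color, transmittance = 0.0, 1.0
    for i in range(num_samples):
        t = t_near + (i + 0.5) * step
        p = [ray_origin[k] + t * ray_dir[k] for k in range(3)]
        density = sample_density(p)
        alpha = 1.0 - math.exp(-density * step)  # opacity of this slab
        color += transmittance * alpha           # assumes white emission
        transmittance *= 1.0 - alpha
        if transmittance < 0.01:                 # dense media: early exit
            break
    return color, transmittance
```

With a dense volume the early exit fires after only a few samples, which is part of why a budget of 25 samples per ray can converge.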



      #47
      Originally posted by dpenney View Post
      I think you should try to do some experiments with straight ray marching, just for comparison. In my tests, I got nicely convergent renders with a 970 + Vive @ ~1.2ms GPU time, and some overhead for rendering the cloud shell into custom depth (.3-.4ms). There isn't much CPU overhead. This is with 25 samples per ray and my dense clouds with a 3d texture that is 950x950x600. If you weren't running in VR, or didn't have a massive amount of other stuff going on like we did, you could really bump up the sample count, and add a ton more complexity.
      Yeah, I think you are right. Once I've finished up the slicing BP I'll get to work on the precomputed shadows and add that into my existing raymarcher. Having the shadows baked from Houdini would make it super fast. Also, I'd love to see the 3D texture code. 25 samples is nothing! I suppose that because the volume is dense you can get away with fewer steps, since rays are more likely to exit early due to accumulated density. Now I just need to magic up some spare time.
      Visual Effects Artist, Weta Digital, Wellington New Zealand
      BLOG www.danielelliott.co.uk
      @danielelliott3d https://twitter.com/danielelliott3d
      Unreal Engine and VFX Tutorials https://www.youtube.com/user/DokipenTechTutorials
      2015 Showreel: https://vimeo.com/116917817



        #48
        Finally got the slicing actually finished. I'd had a small bug in my bubble sort for fixing the vertex winding, which was messing up the polygon rendering.

        [Image: UE4Editor_2016-08-12_00-20-26.jpg]

        One thing to note is that it has exactly the same bug as my C++ code, where the translucent shadow volumes rotate as the camera rotates. I'll probably be able to give this BP to @danielW and hope he can debug it.

        Now I've exorcised this demon, I can go to bed and sleep soundly before tackling ray marching tomorrow.

        Take care all. Nighty night from NZ.

        Dan


          #49
          Ok, so using a ray-box intersection for entry/exit point generation only works as long as no other scene geometry intersects the volume. It also breaks down as soon as you want custom bounding geometry (like the clouds; often a bounding octree is used as well). For my application (visualization of medical volumes, which can be quite large) I probably need both things.
          So my outline for achieving that would be:
          - Render the scene geometry as usual
          - Render the back faces of my bounding geometry fully transparent but force a depth write (is this possible? Otherwise the background behind the volume will not show correctly.)
          - Render the front faces into custom depth only
          - In a post-processing pass, read depth and custom depth, unproject into world space and then into texture UVW of the volume. These two points give a ray start and end position which can be raymarched.
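The unprojection in the last step could look something like the following (a minimal sketch in Python for an axis-aligned box; `front_depth`/`back_depth` are taken here as distances along the pixel ray, which glosses over the conversion from scene depth to ray distance):

```python
import numpy as np

def ray_interval_from_depths(cam_pos, pixel_dir, front_depth, back_depth,
                             vol_min, vol_max):
    """Turn front-face (custom depth) and back-face depths into ray
    start/end points in the volume's normalized texture space (UVW)."""
    cam_pos = np.asarray(cam_pos, dtype=float)
    pixel_dir = np.asarray(pixel_dir, dtype=float)  # normalized ray dir
    vol_min = np.asarray(vol_min, dtype=float)
    vol_max = np.asarray(vol_max, dtype=float)
    entry_ws = cam_pos + front_depth * pixel_dir    # world-space entry
    exit_ws = cam_pos + back_depth * pixel_dir      # world-space exit
    # world space -> texture UVW in [0, 1]
    size = vol_max - vol_min
    return (entry_ws - vol_min) / size, (exit_ws - vol_min) / size
```

The two UVW points then define the segment the post-process raymarches.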

          Since I am not very experienced with UE, how do you judge the feasibility of this approach? Or is there another way that I am missing?
          As soon as we get a forward renderer in UE, this of course will be much easier to implement.



            #50
            [MENTION=3692]RyanB[/MENTION], [MENTION=121013]dpenney[/MENTION] Thanks for the info. Indeed, pre-calculating entry and exit points for a box proves considerably faster than having only an entry point and checking every iteration whether the sample is still within the volume. The camera-inside-the-volume thing also works perfectly for me now. If my scene had static lighting, I would probably pre-bake all the shading, but I'd like dynamic lighting from one directional source, and probably ambient light.

            I only got introduced to volumetric rendering a few weeks ago. Do I understand the general concept for directional lighting correctly?
            For every sample, I perform an additional loop, raymarching from the sample point towards the light and accumulating opacity. If opacity reaches 1 or I exit the volume, I break the loop. Should I also pre-calculate where the ray exits the box, like in the main loop, or would checking whether the sample is still within the volume be better here?

            And what do I do with the shadow value to make it look right? Alpha blending the shadow samples makes the shading view-dependent: the further I go into the volume, the higher the shadow density gets.
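One common answer to both questions (the standard single-scattering formulation, not necessarily what anyone in this thread does): accumulate extinction along the light ray, convert it to a transmittance with exp(), and multiply each primary sample's lighting by that transmittance instead of alpha-blending the shadow samples. The result then depends only on the light direction, not the view. A sketch in Python, assuming the volume occupies UVW [0,1]; the step size and ~1% cutoff are arbitrary choices:

```python
import math

def light_transmittance(sample_density, point, light_dir, step=0.1, max_steps=32):
    """Secondary march from a sample point toward the light. Returns the
    fraction of light that survives (1 = unshadowed, 0 = fully shadowed)."""
    optical_depth = 0.0
    p = list(point)
    for _ in range(max_steps):
        p = [p[k] + step * light_dir[k] for k in range(3)]
        if not all(0.0 <= c <= 1.0 for c in p):    # left the unit volume
            break
        optical_depth += sample_density(p) * step  # accumulate extinction
        if optical_depth > 4.6:                    # transmittance < ~1%: stop
            break
    return math.exp(-optical_depth)
```

Each primary sample then contributes something like `density * light_color * light_transmittance(...)`, attenuated by the primary ray's own transmittance as usual.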

            Lastly, is there a viable option to account for ambient lighting without baked data?



              #51
              [MENTION=11545]dokipen[/MENTION]: How are you baking your volumes out now? Given you are at Weta, I'll assume you are pretty good with Houdini. :-P Baking lighting from Houdini isn't too hard, regardless of whether you are embedding your volume in a 2D texture or exporting a custom 3D one. Modify the SHOP to export lighting calculations to a point cloud (.pc) file, then render from multiple cameras and combine the point clouds into one big VDB. After you have your volume with RGB lighting, you can export it either as a texture atlas with COPs or as a 3D texture if you have a custom exporter. For static lighting this is really great, because you can then use Houdini lights! I encoded an environment light, a directional key light, and a scattering pass this way.



                #52
                [MENTION=523181]TheHugeManatee[/MENTION]: You are on the right track for sure. You should check out my previous posts since what you've mentioned is very close to what I do. I used custom depth to encode the bounding geometry, then snap ray start locations to that for the clouds I mentioned before. It worked great! Also, by looking at scene depth, you can account for occluded areas of the volume nicely. As far as the exit point generation, I don't do it because my volumes are very dense and I am ok with a few artifacts to save some computation. That being said, I am planning on implementing roughly what you mentioned for a back face depth map, but it'll require some engine changes. Ideally, for my case, you wouldn't stop with just 1 {front_face_0, back_face_0} pair, but you'd define more intervals along a given ray using depth peeling. It could get expensive.
                [MENTION=146056]Deathrey[/MENTION]: I think you have the general idea of volumetric lighting from directional lights. As far as precomputing ray exit points, that is really up to the implementation details of your specific use; I'd recommend experimenting. As far as integration goes, look at this:
                http://magnuswrenninge.com/content/p...entals2011.pdf
                Section 3.1 talks about lighting, and it gives pseudocode for a ray marching loop.

                What do you mean ambient lighting, exactly? Like bounce lighting from geometry? Multiple scattering inside the volume? Environment lights? Those are tricky problems to solve well for offline renderers, so real time solutions are a bit absent. You could probably come up with some cheap hacks, though keep in mind if you want to hit a framerate, you don't have that many volume samples per frame to play with.



                  #53
                  I am not sure I understand the problem with the shadow volume rotating with the camera. Are you making your own shadow volume, or are you somehow getting your slices to cast shadows using regular translucency as the sheets? If so, I would kind of expect that, since you are slicing the volume based on viewing angle, which changes how the slices align from the light's perspective. I.e., if your light angle is at 90 degrees to the view angle, you may get very thin or disappearing shadows, since they will be invisibly thin from that view. Half-angle slicing also addresses this. You could probably do half-angle slicing just as an angular setting to fix that, without actually tackling the more complex light accumulation method the term usually refers to.
                  Ryan Brucks
                  Principal Technical Artist, Epic Games



                    #54
                    Originally posted by RyanB View Post
                    I am not sure I understand the problem with the shadow volume rotating with the camera. Are you making your own shadow volume, or are you somehow getting your slices to cast shadows using regular translucency as the sheets? If so, I would kind of expect that, since you are slicing the volume based on viewing angle, which changes how the slices align from the light's perspective. I.e., if your light angle is at 90 degrees to the view angle, you may get very thin or disappearing shadows, since they will be invisibly thin from that view. Half-angle slicing also addresses this. You could probably do half-angle slicing just as an angular setting to fix that, without actually tackling the more complex light accumulation method the term usually refers to.
                    It's when using the translucent shadows.

                    Yeah, I get how that works. The issue here is definitely not the angle of the slices to the light, as here the slices are directly aligned to the light. That was actually what I thought would be happening, but it can't be in this case. This behavior is exactly what happened with the C++ version. I tried fixing the light, the geometry, the slices, and all combinations, and ended up thinking I must have been doing something wrong with my bounds or tangents, so I put the C++ on the back burner. Now I've done it in BP, I can be tentatively confident (!?) that I'm not doing anything to cause it (assuming the procedural mesh component handles the tangents and bounds correctly).

                    DanielW said on Twitter a while ago that he thinks it's a bug. Actually, I might try volumetric translucent shadows on a procedural box generated from BP using the utility nodes and see if the same thing happens.

                    Here is the original AnswerHub post:

                    https://answers.unrealengine.com/que...ing-actor.html


                      #55
                      Originally posted by dpenney View Post
                      [MENTION=11545]dokipen[/MENTION]: How are you baking your volumes out now? Given you are at Weta, I'll assume you are pretty good with Houdini. :-P Baking lighting from Houdini isn't too hard, regardless of whether you are embedding your volume in a 2D texture or exporting a custom 3D one. Modify the SHOP to export lighting calculations to a point cloud (.pc) file, then render from multiple cameras and combine the point clouds into one big VDB. After you have your volume with RGB lighting, you can export it either as a texture atlas with COPs or as a 3D texture if you have a custom exporter. For static lighting this is really great, because you can then use Houdini lights! I encoded an environment light, a directional key light, and a scattering pass this way.
                      Heya

                      I originally used Maya to bake out a fake Maya Fluids cloud by rendering a sequence from an orthographic camera with the clipping planes animated. Pretty simple. Then it went through a texture packer to go into a 2D atlas.

                      Regarding Houdini, believe it or not I am still quite 'un-seasoned' with it. It is used here, but because we have in-house tools with Maya as a core application, it's rare that I've used it on a job in the last 5 years. It's something I'm actively trying to rectify though!

                      I'm fully aware that this would be amazingly cool to do in Houdini, and even exporting the 2D atlas could be done there too, with easy control over the number of slices and the resolution.

                      Assuming I will get to grips with exporting volumes from Houdini within the year (it's on my big list of things to do), I will also investigate sparse volumes using a simple octree-in-a-texture kind of thing. Should be able to pack in more resolution that way. The GVDB stuff I saw recently, which reads VDBs on the graphics card, looks amazing.


                        #56
                        Hmmm, I thought you were slicing based on camera angle for the density? Are you only slicing for the light direction, or are you doing both?

                        Without seeing how all the bits work, it's hard to figure out the problem. It could just be a bug with lighting, but I feel like it could be something more basic (purely guessing and going with my gut here).

                        Some of this stuff is tricky. The camera position becomes the light position during the lighting pass, btw. In the past I have leveraged that knowledge to fix material issues similar to this (volume billboard stuff often gets similar issues).



                          #57
                          RE: RyanB

                          First off, I wanted to say that I am using the 3D texture you posted for testing and learning, as I know my result is right when it looks like your post. I hope you don't mind. Also, I am unable to understand one of your snippets; I will explain below.


                          For everyone:

                          I have been trying to understand and create a prototype following the examples and posts in this thread and some of the resources I have found online.
                          What I have so far is this:

                          [Image: 9fcf8f48e7.jpg]

                          Using 10 of these with manually input constants ranging from 0.1 to 1.0 gives me this:

                          [Image: e278c4eaf9.jpg]

                          This is just a texture applied to a cube, so you cannot go inside it. It is also orthographic right now, with no concept of a camera, so all faces are the same. Once I fix that, I might also fix the going-inside problem simply by making it two-sided.

                          Anyways, I have 2 major problems (for now).

                          First, how can I loop through all pixels in a shader? As far as I understand it, in volumetric rendering you calculate a ray for every pixel and blend the interpolated values of the points between the volume bounds. I can send in the camera position relative to the box and the view direction using parameter sets. However, I have no idea how I can loop through all x*y pixels of the camera to calculate the ray direction, which in turn can be used to calculate the entry and exit points of the volume for that ray. I might be misunderstanding something though.
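On that first question: you don't loop over pixels yourself. A pixel (material) shader is invoked once per pixel by the GPU, so the code only ever computes the ray for its own pixel; the x*y loop is implicit. Per pixel, the ray direction falls out of the camera parameters, roughly like this (illustrative Python with assumed pinhole-camera conventions):

```python
import math

def pixel_ray_dir(px, py, width, height, fov_y, aspect):
    """Camera-space ray direction for one pixel. On the GPU this body runs
    implicitly once per pixel; the shader never loops over the screen."""
    ndc_x = (px + 0.5) / width * 2.0 - 1.0   # [-1, 1] across the screen
    ndc_y = 1.0 - (py + 0.5) / height * 2.0  # flip so +y is up
    tan_half = math.tan(fov_y / 2.0)
    d = (ndc_x * tan_half * aspect, ndc_y * tan_half, 1.0)
    length = math.sqrt(sum(c * c for c in d))
    return tuple(c / length for c in d)
```

In a UE4 material you get the equivalent for free from the Camera Position and Camera Vector expressions, so no manual unprojection is needed there.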

                          Second, I am unable to understand the logic of the snippet below (and how to use it because of that):

                          Originally posted by RyanB View Post
                          Speaking of that, I just recently made a Box Intersection material function that should be in for 4.14.

                          [ATTACH=CONFIG]105451[/ATTACH]
                          I calculated one example ray as follows:

                          [Image: 6652666118.jpg]

                          I get a lot of infinite values. The Invert Ray Dir does not make sense to me. Should it not be a multiply by -1 as B? But then both t0 and t1 become 0.

                          Is this because I chose a specific case? Box Min and Box Max are the two vectors defining the volume bounds, right?

                          By now I am just rambling so I will end it here, but my confusion is too **** high. :'^)
                          Last edited by NoobsDeSroobs; 08-11-2016, 10:54 PM.



                            #58
                            Originally posted by RyanB View Post
                            Hmmm, I thought you were slicing based on camera angle for the density? Are you only slicing for the light direction, or are you doing both?

                            Without seeing how all the bits work, it's hard to figure out the problem. It could just be a bug with lighting, but I feel like it could be something more basic (purely guessing and going with my gut here).

                            Some of this stuff is tricky. The camera position becomes the light position during the lighting pass, btw. In the past I have leveraged that knowledge to fix material issues similar to this (volume billboard stuff often gets similar issues).
                            Slicing to camera is the goal, but I have it disabled. In this case I have the slices fixed to the light for testing, as it allows me to pan around and check the mesh.

                            So here the light will be the camera during the lighting pass. This is where I think the bug might be (like an odd matrix transformation somewhere). I did read somewhere on AnswerHub about a bug where translucency shadow volumes don't use the light as the camera during the shadow pass, but I thought it was fixed.


                              #59
                              Here is the thread I was talking about:

                              https://forums.unrealengine.com/show...-shadow-passes

                              Post 7 mentions an InvViewMatrix bug. No idea if it is affecting me here. The fact that it is happening on a mesh that isn't moving is what makes me think something is going on outside my control.

                              Apart from that, I had some ideas for performance in BP with slices: I could update the slices in sections over a few frames. The camera is unlikely to move that quickly, so I can update a few slices at a time. Not sure how that affects rendering order for translucency (I'm hoping the sections maintain the order they were created in).


                                #60
                                Hmm the function should work fine. It is a literal copy of the function "LineBoxIntersection" found in common.usf.

                                [Image: box.PNG]

                                The min/max operations should be removing the 0 and inf. I tested the function by using sphere masks in 3D space, which would show if either intersection point was wrong. I also restricted it to 2D by making sure the ray direction has no Y component, and it still worked fine.


                                Maybe try applying a small offset so the ray origin isn't right on the edge of the box to start with.
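To spell out the part that tripped up NoobsDeSroobs: "Invert Ray Dir" is the componentwise reciprocal 1/dir, not the direction multiplied by -1, which is why substituting a negate collapses t0 and t1. The slab test behind the function can be sketched like this (an illustrative Python version, not the literal common.usf code):

```python
def line_box_intersection(ray_start, inv_ray_dir, box_min, box_max):
    """Slab method. inv_ray_dir must be 1/ray_dir per component (so it may
    be +/-inf for axis-aligned rays), NOT -ray_dir."""
    t0, t1 = 0.0, float("inf")                 # clamp entry to the ray start
    for k in range(3):
        ta = (box_min[k] - ray_start[k]) * inv_ray_dir[k]
        tb = (box_max[k] - ray_start[k]) * inv_ray_dir[k]
        # per-axis near/far; the min/max squeeze out the inf slab distances
        t0 = max(t0, min(ta, tb))
        t1 = min(t1, max(ta, tb))
    return t0, t1                              # the ray hits the box iff t0 <= t1
```

For example, a ray from (0,0,-2) marching along +Z into the box (-1,-1,-1)..(1,1,1) enters at t=1 and leaves at t=3. If the ray origin lies exactly on a slab plane, the product 0 * inf is NaN; GPU min/max handles that differently from plain Python, which is another reason to offset the origin slightly as suggested above.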

