Distance Field Ambient Occlusion (Movable Skylight shadowing)

    #16
    After a day of confusion I found out something: in 4.2, when we set Allow Static Lighting to 0 in Project Settings > Rendering, it disables the Skylight as well.
    All Movable light types work with Allow Static Lighting set to 0. No Stationary or Static ones do. Previously there was only a Stationary version of the SkyLight.

    Will the DFAO have any effect on canceling out reflections on materials? I ask because I had an issue in one setup with a room lit only by a skylight coming through windows, using baked skylight shadows. The wall reflections were only visible in areas that were lit (not in complete shadow).
    No, it only provides shadowing for diffuse lighting. For reflections you need to use local reflection captures to override the sky. The issue where reflections disappear completely in shadowed areas means you need more bounce lighting there. We mix the diffuse GI together with the specular GI (reflections) to get better local shadowing on the reflections. In 4.3 Stationary Skylights will have one bounce of diffuse GI so this should help that case.

    I hope that this feature will have a toggle to make it static so that you can use it as a fast AO, sort of like how Crytek bakes theirs into a volumetric texture surrounding the whole level.
    You mean like a fast baking method, right? It can't be baked into a volume texture because a volume texture would not have enough resolution. It could be baked onto surfaces, but Lightmass does a much higher quality job of that with the Stationary Skylight.

      #17
      Originally posted by DanielW
      Lightmass does a much higher quality job
      But the whole point is to use it in fully dynamic scenes, and then "bake" it for dynamically generated geometry which won't move afterwards.

        #18
        Gotcha. That can be done, although it hasn't been implemented. It would not be too hard; you'd just have to render into the lightmap of the mesh and do the same Distance Field AO shading computations. I don't see us implementing it soon though.

          #19
          For all three yes.
          I'm using Wall_400x400 from the architecture folder (default content) to test it. Is it possible that this mesh is not thick enough?
          I found the problem, someone had broken the project setting for this feature! Thanks so much for finding this, otherwise it probably would have been broken in the 4.3 release. Fixed in cl 2125165.

          I did a test in the Destruction showcase project. Here are the steps I had to do:
          * Load once, enable the AllowMeshDistanceFieldRepresentations setting under Edit->Project Settings->Rendering, then reload the editor
          * Delete directional light
          * Change skylight to Movable
          * Delete all non-static meshes (brushes, destructible meshes), as they aren't supported yet
          * Delete all the meshes that weren't closed and so are 50% grey (mostly arches)
          * Duplicate some of the remaining meshes and build something out of them. Change the scale to be uniform.

          [Attached image: DestructionLevel.jpg]

            #20
            Originally posted by DanielW
            * AllowMeshDistanceFieldRepresentations must be enabled for the project (under Project Settings -> Rendering) for this to work. The editor has to be restarted after changing this value, and it will take some time to load on the next run.
            This doesn't seem to work. If I enable it and close the editor, it will be set to true in DefaultEngine.ini, and if I then start the editor again and open the project settings, it will be disabled.

              #21
              That's the bug I fixed just now in cl 2125165; can you grab the latest again? I'm not sure how often p4 changes are propagated to the latest on GitHub, so there might be some latency.

                #22
                I downloaded the latest 4.3 from GitHub, but it seems like the issue is still there.

                And the checkbox in project settings keeps unchecking itself, even though the proper setting is in the ini file.

                Could this be related? https://answers.unrealengine.com/que...rectional.html

                  #23
                  Nah, different issue. We'll just have to wait for the fix to get propagated to GitHub. You'll know when it has, because the project setting checkbox will stick.

                    #24
                    Don't want to make you sad, but:

                    https://github.com/EpicGames/UnrealE...003ce7622bb724

                    I guess you're talking about this change. I downloaded the code from GitHub when this commit was already in, and the issue was still present ;(.

                      #25
                      Sorry for being obscure with my reply, but is there a way to make it not update every frame? That's all I was looking for; it would be suitable for dynamic scenes, as you could force it to update only when required, saving a bit of performance when nothing is changing.

                        #26
                        Could you give an overview of how it works? I'm very curious about the implementation. You said the static meshes are represented as signed distance fields. I can only imagine those being volume textures, where each texel stores the signed distance to the closest surface in the static mesh; is that correct?

                          #27
                          Don't want to make you sad, but:
                          That does make me sad. I'm looking into it.

                            #28
                            Sorry for being obscure with my reply, but is there a way to make it not update every frame? That's all I was looking for; it would be suitable for dynamic scenes, as you could force it to update only when required, saving a bit of performance when nothing is changing.
                            Only the pixels on your screen are considered for lighting, so if you moved the camera around you would quickly see areas that did not have lighting computed. It doesn't simulate the entire level. The algorithm already reuses shading work on surfaces that do not move between frames.

                              #29
                              Could you give an overview of how it works? I'm very curious about the implementation.
                               Sure thing. The high-level view is:

                              Offline
                               Generate a signed distance field for each UStaticMesh and store it in a volume texture tightly bounding the mesh. The distance field stores the distance to the nearest surface, with the sign indicating inside or outside.
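
                               To make that concrete, here is a toy CPU-side sketch of such a per-mesh field; the resolution, memory layout, and nearest-voxel lookup are illustrative assumptions, not the engine's actual representation:

```cpp
// Toy per-mesh signed distance field stored as a dense volume.
// Illustrative only: a real field would use filtered lookups and
// a tighter (non-cubic) resolution per axis.
#include <algorithm>
#include <vector>

struct Vec3 { float X, Y, Z; };

struct MeshDistanceField {
    int Res = 0;                 // voxels per axis (cubic for brevity)
    Vec3 BoundsMin, BoundsMax;   // tight local-space bounds of the mesh
    std::vector<float> Voxels;   // signed distance; negative means inside

    float Sample(const Vec3& P) const {
        auto ToVoxel = [&](float V, float Lo, float Hi) {
            float T = std::clamp((V - Lo) / (Hi - Lo), 0.0f, 1.0f);
            return std::min(Res - 1, int(T * Res));
        };
        return Voxels[(ToVoxel(P.Z, BoundsMin.Z, BoundsMax.Z) * Res
                     + ToVoxel(P.Y, BoundsMin.Y, BoundsMax.Y)) * Res
                     + ToVoxel(P.X, BoundsMin.X, BoundsMax.X)];
    }
};
```

                               Computing the sign requires an inside/outside test against the mesh, which is why meshes that aren't closed don't get a valid field (the 50% grey meshes mentioned earlier in the thread).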

                              Realtime
                              AO sample points are placed in world space using a Surface Cache, which places more samples in corners where AO is changing quickly. This is basically a GPU implementation of the standard Irradiance Caching algorithm used in offline raytracing. Irradiance caching isn't inherently parallelizable because each sample placement relies on the other samples placed before it. This is solved by doing multiple passes, with a sample grid aligned to the screen. The first pass only checks every 500 pixels of the screen for whether shading is needed, next pass every 250 pixels, etc. Shading is needed if no existing Surface Cache samples cover that position with a positive weight. Surface Cache samples from last frame are fed in, with some percent trimmed out to support dynamic scene changes (AO will converge over multiple frames).
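
                               As a rough illustration of that coarse-to-fine placement (the grid steps and the simple coverage test here are assumptions of mine; in practice the record radius comes from the occlusion distance described in the next paragraph):

```cpp
// Toy coarse-to-fine sample placement over a screen-aligned grid.
// Each pass only depends on samples from earlier passes, which is what
// breaks the serial dependency of classic irradiance caching; on a GPU,
// all grid points within one pass can be tested in parallel.
#include <vector>

struct CacheRecord { float X, Y, Radius; };

// A record covers a pixel if it lies within the record's radius; a real
// irradiance-cache weight would also compare depth and normal.
static bool Covered(const std::vector<CacheRecord>& Cache, float X, float Y) {
    for (const CacheRecord& R : Cache) {
        float DX = X - R.X, DY = Y - R.Y;
        if (DX * DX + DY * DY < R.Radius * R.Radius)
            return true;  // positive interpolation weight -> no new sample
    }
    return false;
}

void PlaceSamples(std::vector<CacheRecord>& Cache, int Width, int Height) {
    for (int Step = 500; Step >= 4; Step /= 2)  // 500, 250, 125, ... pixels
        for (int Y = 0; Y < Height; Y += Step)
            for (int X = 0; X < Width; X += Step)
                if (!Covered(Cache, float(X), float(Y)))
                    Cache.push_back({float(X), float(Y), float(Step)});
}
```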

                               The actual AO computation that happens at these sparse points from the Surface Cache is done by cone-stepping through the per-object distance fields. We trace 9 cones covering the hemisphere and accumulate the min visibility across all objects affecting the sample point. This is a super slow operation, which relies on the sparse sampling from the Surface Cache to make it realtime. Distance fields are nice to cone trace (compared to most other data structures) because you just do a series of sphere-occlusion tests along the cone axis. At each sample point on the cone axis you look up the distance to the nearest surface from that object's distance field and compute overall sphere occlusion using a heuristic. The cone visibility is the min of the sphere visibilities. We also track the min distance to an occluding surface for the shading point, which is needed for the Surface Cache algorithm: it indicates how much area that sample is representative of (and therefore where no other shading is necessary).
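
                               A minimal sketch of stepping one cone through a single object's field, using a common sphere-occlusion heuristic; the step count, start offset, and advance rule are my assumptions, not the engine's actual values:

```cpp
// Toy cone-step through one object's distance field.
#include <algorithm>

// Caller-supplied SDF lookup at a world position (e.g. a volume texture read).
using DistanceFn = float (*)(float X, float Y, float Z);

float ConeVisibility(DistanceFn SDF,
                     float OX, float OY, float OZ,   // AO sample point
                     float DX, float DY, float DZ,   // unit cone axis
                     float TanHalfAngle, float MaxDist) {
    float Visibility = 1.0f;
    float T = 0.1f;  // start slightly off the surface to avoid self-occlusion
    for (int Step = 0; Step < 32 && T < MaxDist; ++Step) {
        float D = SDF(OX + DX * T, OY + DY * T, OZ + DZ * T);
        float ConeRadius = T * TanHalfAngle;
        // Sphere-occlusion heuristic: fraction of the cone cross-section
        // at distance T left unblocked by this object's nearest surface.
        Visibility = std::min(Visibility,
                              std::clamp(D / ConeRadius, 0.0f, 1.0f));
        T += std::max(D, ConeRadius * 0.5f);  // always advance a little
    }
    return Visibility;
}
```

                               Nine such cones cover the hemisphere, and each cone's final visibility is the min of this value over all objects affecting the sample point.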

                               Then we interpolate all the generated AO Surface Cache samples onto the pixels of the screen by splatting the samples and normalizing with the final weight. This happens at half resolution. The tolerances of the Surface Cache interpolation are increased, which effectively smooths the lighting in world space in a way that preserves lighting details in corners.
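
                               A rough illustration of the splat-and-normalize idea (the buffer layout and the linear falloff kernel are assumptions of mine):

```cpp
// Toy splat-and-normalize: accumulate weighted AO per half-res pixel, then
// divide by the total weight. A real implementation would also reject
// neighbors by depth and normal.
#include <algorithm>
#include <cmath>
#include <vector>

struct SplatBuffers {
    int W = 0, H = 0;
    std::vector<float> WeightedAO, Weight;  // both sized W * H
};

void SplatSample(SplatBuffers& B, int CX, int CY, int Radius, float AO) {
    for (int Y = std::max(0, CY - Radius); Y <= std::min(B.H - 1, CY + Radius); ++Y)
        for (int X = std::max(0, CX - Radius); X <= std::min(B.W - 1, CX + Radius); ++X) {
            float Dist = std::sqrt(float((X - CX) * (X - CX) + (Y - CY) * (Y - CY)));
            float Wt = std::max(0.0f, 1.0f - Dist / float(Radius));
            B.WeightedAO[Y * B.W + X] += AO * Wt;
            B.Weight[Y * B.W + X] += Wt;
        }
}

// Normalize pass: the final AO is the weighted average of all contributions.
float ResolvePixel(const SplatBuffers& B, int X, int Y) {
    float Wt = B.Weight[Y * B.W + X];
    return Wt > 1e-4f ? B.WeightedAO[Y * B.W + X] / Wt : 1.0f;  // 1 = unoccluded
}
```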

                               The resulting AO is really unstable at this point, so we apply a temporal filter to stabilize it, a gap-filling pass for any pixels that still weren't covered by a valid AO sample, and finally a bilateral upsample. Then it is applied to the diffuse lighting of the Movable Skylight.
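
                               For the bilateral upsample, a typical depth-aware weight looks something like this (a generic sketch, not our actual filter; Sigma is an assumed tunable):

```cpp
// Toy depth-aware weight: half-res neighbors whose depth differs from the
// full-res pixel get little weight, so AO doesn't bleed across silhouettes.
#include <cmath>

float BilateralWeight(float FullResDepth, float LowResDepth, float Sigma) {
    float Diff = FullResDepth - LowResDepth;
    return std::exp(-(Diff * Diff) / (2.0f * Sigma * Sigma));
}
```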

                               All of this is like 15 unique shaders, about half of which are compute. All of the lighting data structures exist only on the GPU; the CPU doesn't even know how much work there is to do, so Draw/DispatchIndirect is used a lot. There's still a fair amount of optimization potential, as the inner-loop cone-stepping is pretty brute force.

                              The key benefits of this implementation are:
                               * Computes lighting on surfaces, not volumes. This allows much higher quality than something like LPV used for sky occlusion, where you are limited by the resolution of the volume texture that lighting is computed in, and you no longer know the surface normal.
                              * Thin surfaces are represented well. The per-object distance field allows enough resolution to handle thin surfaces like walls which would disappear in a voxel based approach.
                              * Less work is done where less work is needed - static and flat surfaces cost less, while still supporting dynamic scene changes. Even in games that need fully dynamic lighting, not everything is moving all the time so you don't want to pay for that.

                              The weaknesses:
                              * Limited transfer distance, it doesn't scale up to global illumination
                              * Requires rigid objects for the distance field representation, only uniform scaling of mesh instances
                               * Cost can be much higher depending on scene content; a field of grass would be pretty much the worst case (no interpolation).

                                #30
                                 Thanks, that was a nice read. But boy, that's quite a bit more involved than I was expecting! No wonder it takes so much frame time.

                                 There's one part I didn't get: since there is a field per mesh, how are they iterated? Are all fields loaded at once, with the GPU picking the ones that intersect the current sample point, or is it the other way around: iterating over each field and finding out which sample points fall inside it?

                                 I also don't quite understand why they need to be signed. Wouldn't just the distance to the closest surface be enough? Why do you need to know whether a point is inside the object or not? Wouldn't unsigned fields eliminate the need for closed meshes?
