Global Illumination alternatives

    #16
    Here are some more rendering research papers (some of them are real-time work):

    http://people.mpi-inf.mpg.de/~ritschel/

    Making Imperfect Shadow Maps View-Adaptive: High-Quality Global Illumination in Large Dynamic Scenes
    Bent Normals and Cones in Screen-space
    Real-Time Screen-Space Scattering in Homogeneous Environments
    TOUR of DUTY

    Comment


      #17
      Originally posted by Frenetic Pony View Post
      Nvidia's solution doesn't work with large game areas. Even for a relatively small area it eats ram like you wouldn't believe. Besides, this is just a less advanced version of what Epic already tried (hey they already did cascaded voxel level of detail!) and concluded it wasn't really fast and flexible enough to bring in, especially not with the Xbox One.
      From what I gathered, it uses a different way of LODing the voxels in the scene (cascading?), so it actually needs far fewer of them than the original technique, which was also developed at NVIDIA by Cyril Crassin.

      And from the slides you can read that rendering indirect shadows takes only 3 ms. For me that is perfectly acceptable.
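      To see why a cascaded voxel LOD needs so many fewer voxels than one uniform grid over a large scene, here is a rough back-of-envelope sketch in Python. The scene size, cascade count, resolutions, and bytes-per-voxel are illustrative assumptions, not numbers from the slides.

```python
# Back-of-envelope comparison: one dense voxel grid covering a whole
# scene vs. a set of cascaded clipmap levels centred on the camera.
# All numbers below are illustrative assumptions.

BYTES_PER_VOXEL = 4  # e.g. packed RGBA8 radiance

def dense_grid_bytes(scene_size_m, voxel_size_m):
    """Memory for one uniform grid spanning the whole scene."""
    res = int(scene_size_m / voxel_size_m)
    return res ** 3 * BYTES_PER_VOXEL

def cascade_bytes(levels, res_per_level):
    """Clipmap cascades: each level has a fixed resolution but covers
    twice the extent of the previous one, so cost grows linearly with
    the number of levels instead of cubically with scene size."""
    return levels * res_per_level ** 3 * BYTES_PER_VOXEL

# A 1 km scene at 0.5 m voxels, stored densely: 2000^3 voxels.
dense = dense_grid_bytes(1000, 0.5)
# Six cascades of 128^3: fine detail near the camera, coarse far away.
casc = cascade_bytes(6, 128)
print(f"dense: {dense / 2**30:.1f} GiB, cascades: {casc / 2**20:.1f} MiB")
```

The dense grid lands in the tens of gigabytes, while the cascades fit in tens of megabytes, which is the gap the poster is pointing at.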
      https://github.com/iniside/ActionRPGGame - Action RPG Starter kit. Work in Progress. You can use it in whatever way you wish.

      Comment


        #18
        Originally posted by iniside View Post
        As for AO and indirect shadows: these terms are used pretty interchangeably, which is why there is a clear distinction between the various xxxxAO techniques (which are usually screen-space approximations) and plain AO (which is not approximated but calculated from all geometry).
        Right, that's the point I totally missed! I forgot about the various AO techniques and was assuming AO is a 'geometry-only' thing.

        Originally posted by iniside View Post
        And from the slides you can read that rendering indirect shadows takes only 3 ms. For me that is perfectly acceptable.
        Indeed. Now it would be great if Epic found it applicable, too (because, you know, there is a large user community with varied PC hardware, including mid-budget PCs, etc.).

        Comment


          #19
          CRYENGINE now primarily uses cubemaps from environment probes to handle GI. They even suggest you turn off traditional LPV GI.

          Can such a system be implemented into UE4?
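          For what it's worth, the core of such a probe system is simple: at shading time each surface point looks up a pre-filtered cubemap captured at a nearby probe. A minimal sketch of the probe-selection step, with hypothetical names and a flat color standing in for the cubemap (real engines blend several probes and parallax-correct the lookup):

```python
import math

# Each probe stores pre-filtered environment lighting captured at a point.
# Here a single RGB triple stands in for the probe's cubemap.
class Probe:
    def __init__(self, position, ambient_rgb):
        self.position = position          # probe capture location
        self.ambient_rgb = ambient_rgb    # stand-in for the cubemap data

def nearest_probe(point, probes):
    """Return the probe closest to the shaded point."""
    return min(probes, key=lambda p: math.dist(p.position, point))

probes = [
    Probe((0.0, 0.0, 0.0), (0.9, 0.8, 0.7)),   # warm interior probe
    Probe((10.0, 0.0, 0.0), (0.3, 0.4, 0.6)),  # cool exterior probe
]
print(nearest_probe((8.0, 1.0, 0.0), probes).ambient_rgb)  # → (0.3, 0.4, 0.6)
```

Because the probes are captured offline (or infrequently), the runtime cost is just the lookup, which is why this approach is so much cheaper than tracing GI per frame.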

          Comment


            #20
            Intel just released a paper on a very efficient version of SVOGI:

            https://software.intel.com/en-us/art...t-illumination

            Comment


              #21
              Originally posted by StaticTheFox View Post
              Intel just released a paper on a very efficient version of SVOGI:

              https://software.intel.com/en-us/art...t-illumination
              SVO is old news. It was a research project that people jumped on at the time because it meant things were getting near to real-time ray tracing; it's not where we should be aiming. Look at this, for example: http://www.youtube.com/watch?v=1pjup...9FNz4rRSeLvowQ (I should also add he has a downloadable example of his superb work on one of his video threads). That was nearly six months back. It's just not right for GPU designs right now unless you have a nuclear sub powering your machine, and once it is, it's redundant anyway. When GPUs can efficiently do such things, path tracing with downsampling correction filters will be the best way of doing this (except maybe for hair; cone tracing makes sense for hair even in offline rendering).

              One of the best real-time ideas I've seen is Morgan McGuire's real-time GI system, Lighting Deep G-Buffers. After speaking with him 4-5 months back, this should appear very soon. His work on transparent depth solving is also superb (his power is in his beard; the only thing I dislike is the fact that he works for Nvidia, which means if we let them it will just become another licensed attack against the overall game industry).

              I'll start a thread to show the many great works aimed at solving this issue and let people decide what's the best direction forward, short term and long term.
              Last edited by KingBadger3D; 06-06-2014, 07:55 PM.

              Comment


                #22
                Originally posted by StaticTheFox View Post
                Intel just released a paper on a very efficient version of SVOGI:

                https://software.intel.com/en-us/art...t-illumination
                It's a step in the right direction, and something I'm personally looking into for a later project along with other things, but the catch is that it only reduces memory usage a lot. The actual performance still isn't up there; in fact, the numbers they get in the paper are just plain horrible.

                Voxel cone tracing as a general solution may work one day, and may already be working, on the PS4 only, for really tight corridor environments, in Capcom's upcoming "Deep Down". But right now, as a general solution, it's not workable, at least not in a power envelope where you could run the entire rest of a game on a reasonable set of platforms.

                The deep G-buffers thing is another completely useless idea; anything screen-space for "global" illumination, no matter how many hacks you throw at it, is not the way to go. Temporal stability is pretty much the entire point of "global" illumination to begin with.

                Comment


                  #23
                  Originally posted by Frenetic Pony View Post
                  It's a step in the right direction, and something I'm personally looking into for a later project along with other things, but the catch is that it only reduces memory usage a lot. The actual performance still isn't up there; in fact, the numbers they get in the paper are just plain horrible.

                  Voxel cone tracing as a general solution may work one day, and may already be working, on the PS4 only, for really tight corridor environments, in Capcom's upcoming "Deep Down". But right now, as a general solution, it's not workable, at least not in a power envelope where you could run the entire rest of a game on a reasonable set of platforms.

                  The deep G-buffers thing is another completely useless idea; anything screen-space for "global" illumination, no matter how many hacks you throw at it, is not the way to go. Temporal stability is pretty much the entire point of "global" illumination to begin with.
                  Temporal stability, if you read the white paper, is what this technique does well. It's a single-pass system (basic one bounce) that uses the past frame plus predictive evaluation based on per-pixel geometry velocity, so it can reuse that data to represent multi-bounce indirect lighting at no extra cost. Yeah, it's not going to be perfect, because you don't have all the scene data for evaluation (unlike offline renderers, which have all the geometry), but it has enough through temporal remapping and viewport guard-banding (e.g. rendering the view 10-20% larger), based on the fact that human perception can't correctly interpret indirect lighting and reflection beyond our viewpoint per frame. By evaluating this extra data (even though it isn't a 360-degree view, it is enough for a realistic perception of lighting and reflection 90% of the time) you get near-99% acceptance of the result.
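                  The past-frame reuse described above boils down to reprojection by per-pixel velocity. A toy 1-D sketch of just that step (depth/normal rejection tests and the guard band are omitted; all names are illustrative, not from the paper's code):

```python
# Toy 1-D temporal reprojection: each pixel stores a velocity (how far
# its surface moved since last frame), and the current frame fetches
# last frame's lit result at the reprojected position instead of
# recomputing it.

def reproject(prev_frame, velocity):
    """For each pixel i, look up prev_frame at i - velocity[i].
    Out-of-range lookups (disocclusions) fall back to None, which a
    real renderer would fill by recomputing lighting for that pixel."""
    n = len(prev_frame)
    out = []
    for i, v in enumerate(velocity):
        src = i - v
        out.append(prev_frame[src] if 0 <= src < n else None)
    return out

prev = [0.1, 0.5, 0.9, 0.4]    # last frame's indirect lighting per pixel
vel = [0, 1, 1, -5]            # per-pixel motion in pixels
print(reproject(prev, vel))    # → [0.1, 0.1, 0.5, None]
```

Because last frame's result already contains bounced light, reusing it this way is what lets the technique approximate multi-bounce indirect lighting without paying for extra bounces each frame.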

                  Also, you should look into what I posted for devs not long ago. SVOGI, even with Intel's improvements, is still not great, and I've been banging on about SV DAGs for months. Check this, bud: Compact Precomputed Voxelized Shadows. Great papers: http://www.cse.chalmers.se/~d00sint/
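                  The core trick in the sparse voxel DAG work is merging identical subtrees of the voxel octree so repeated structure is stored exactly once. A minimal sketch over a 1-D occupancy array (a real implementation works on an octree over 3-D voxels; the 1-D "bi-tree" here is my simplification for readability):

```python
# Build a tree over an occupancy array and canonicalize identical
# subtrees in a table, turning the tree into a directed acyclic graph.

def build_dag(bits, table):
    """Recursively build nodes; `table` dedupes identical subtrees."""
    if len(bits) == 1:
        node = ('leaf', bits[0])
    else:
        mid = len(bits) // 2
        node = ('node', build_dag(bits[:mid], table), build_dag(bits[mid:], table))
    return table.setdefault(node, node)  # reuse an existing identical subtree

def unique_nodes(bits):
    """Number of distinct subtrees after merging (DAG node count)."""
    table = {}
    build_dag(bits, table)
    return len(table)

bits = [1, 0, 1, 0, 1, 0, 1, 0]     # highly repetitive occupancy pattern
total = 2 * len(bits) - 1           # node count of the uncompressed tree
print(total, unique_nodes(bits))    # → 15 5
```

On real scenes the win is far larger than this toy ratio, since geometry like walls and floors produces huge numbers of identical voxel subtrees.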
                  Last edited by KingBadger3D; 06-06-2014, 10:18 PM.

                  Comment


                    #24
                    Originally posted by KingBadger3D View Post
                    Lighting Deep G-Buffers. After speaking with him 4-5 months back, this should appear very soon. His work on transparent depth solving is also superb (his power is in his beard; the only thing I dislike is the fact that he works for Nvidia, which means if we let them it will just become another licensed attack against the overall game industry).
                    The issue I see here is that this is screen space. Imagine what will happen when you turn your back to the light source: the lighting will change drastically, because there is not enough information on screen.

                    Still, the technique seems interesting for solving issues with translucency in deferred shading.
                    https://github.com/iniside/ActionRPGGame - Action RPG Starter kit. Work in Progress. You can use it in whatever way you wish.

                    Comment


                      #25
                      This is impressive!

                      https://www.youtube.com/watch?v=G9isGEI6Kfc

                      Comment


                        #26
                        Originally posted by KingBadger3D View Post
                        I've been banging on about SV DAGs for months. Great papers: http://www.cse.chalmers.se/~d00sint/
                        It does have a lot of good papers! I'm really impressed by "High Resolution Sparse Voxel DAGs"; the proof is in the pudding, as they say, and the images in that paper look very close to offline rendering. Epic's rendering engineers should probably look at it. It's a pity there are no videos to go along with the paper.


                        Originally posted by gabrielefx View Post
                        It is impressive, but it doesn't help Epic since there are no papers to read :P
                        Last edited by SonKim; 06-07-2014, 10:54 AM.
                        TOUR of DUTY

                        Comment


                          #27
                          I'm sure they already did, because I posted those papers early in the beta.

                          There are just too few graphics engineers at Epic to handle all this stuff. And there are priorities. So if you guys are really good, apply to work at Epic. They can use your help.
                          https://github.com/iniside/ActionRPGGame - Action RPG Starter kit. Work in Progress. You can use it in whatever way you wish.

                          Comment


                            #28
                            Originally posted by SonKim View Post
                            It does have a lot of good papers! I'm really impressed by "High Resolution Sparse Voxel DAGs" the proof is in the pudding as they say and the images in that paper look very close to offline rendering. Epic rendering engineers should probably look at it. It's a pity they don't have any videos to go along with their paper.
                            That's actually what I'm toying around with (vacation aside): encoding lighting information in a layered reflective shadow map (recent paper) for individual lights, then encoding the standard geometry/color information into a set of 3D texture blocks, which, being coherent in memory, are a lot faster than sparse octrees, and then reducing the memory footprint by storing the unlit voxel information in a directed acyclic graph. Combined with some more recent hardware features that allow 3D textures to use null pointers for, in this case, empty voxels, the resulting memory hit should be very low while the structure should be very fast.

                            It's half an idea with pieces of broken code right now, though. I still haven't put out my Pixar ambient hack demo, which is another huge time saver (no need for a double light bounce to still get the information), and that's mostly working. Maybe someone with a lot more time than me at Epic will read this and actually accomplish something, though.
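                            The "null pointers for empty voxels" part can be emulated in software with a brick pool plus an indirection table, where absent entries play the role of the null pointers that hardware sparse 3-D textures provide. A rough sketch with hypothetical names, not the poster's actual code:

```python
# Sparse voxel storage: the volume is split into fixed-size bricks; an
# indirection table maps brick coordinates to a slot in a compact pool.
# Bricks that were never written consume no payload memory at all.

BRICK = 4  # bricks of 4x4x4 voxels

class SparseVolume:
    def __init__(self):
        self.index = {}   # (bx, by, bz) -> pool slot; absent == empty brick
        self.pool = []    # densely packed brick payloads

    def set_voxel(self, x, y, z, value):
        key = (x // BRICK, y // BRICK, z // BRICK)
        slot = self.index.get(key)
        if slot is None:                      # allocate brick on first write
            slot = len(self.pool)
            self.pool.append([0] * BRICK**3)
            self.index[key] = slot
        lx, ly, lz = x % BRICK, y % BRICK, z % BRICK
        self.pool[slot][(lz * BRICK + ly) * BRICK + lx] = value

    def get_voxel(self, x, y, z):
        slot = self.index.get((x // BRICK, y // BRICK, z // BRICK))
        if slot is None:
            return 0                          # empty brick: "null pointer"
        lx, ly, lz = x % BRICK, y % BRICK, z % BRICK
        return self.pool[slot][(lz * BRICK + ly) * BRICK + lx]

v = SparseVolume()
v.set_voxel(100, 5, 7, 9)
print(v.get_voxel(100, 5, 7), v.get_voxel(0, 0, 0), len(v.pool))  # → 9 0 1
```

Since voxelized scenes are mostly empty space, almost all bricks stay unallocated, which is where the low memory hit comes from; the dense brick payloads keep lookups cache-coherent.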

                            Comment


                              #29
                              Originally posted by Frenetic Pony View Post
                              That's actually what I'm toying around with (vacation aside): encoding lighting information in a layered reflective shadow map (recent paper) for individual lights, then encoding the standard geometry/color information into a set of 3D texture blocks, which, being coherent in memory, are a lot faster than sparse octrees, and then reducing the memory footprint by storing the unlit voxel information in a directed acyclic graph. Combined with some more recent hardware features that allow 3D textures to use null pointers for, in this case, empty voxels, the resulting memory hit should be very low while the structure should be very fast.

                              It's half an idea with pieces of broken code right now, though. I still haven't put out my Pixar ambient hack demo, which is another huge time saver (no need for a double light bounce to still get the information), and that's mostly working. Maybe someone with a lot more time than me at Epic will read this and actually accomplish something, though.
                              Is it just me, or is there a little whiff of ******** about Frenetic Pony's statements? You clearly don't know your *** hole from your elbow; don't tell porkies, Pony boy!

                              Comment


                                #30
                                Originally posted by KingBadger3D View Post
                                SVO is old news. It was a research project that people jumped on at the time because it meant things were getting near to real-time ray tracing; it's not where we should be aiming. Look at this, for example: http://www.youtube.com/watch?v=1pjup...9FNz4rRSeLvowQ (I should also add he has a downloadable example of his superb work on one of his video threads). That was nearly six months back. It's just not right for GPU designs right now unless you have a nuclear sub powering your machine, and once it is, it's redundant anyway. When GPUs can efficiently do such things, path tracing with downsampling correction filters will be the best way of doing this (except maybe for hair; cone tracing makes sense for hair even in offline rendering).

                                One of the best real-time ideas I've seen is Morgan McGuire's real-time GI system, Lighting Deep G-Buffers. After speaking with him 4-5 months back, this should appear very soon. His work on transparent depth solving is also superb (his power is in his beard; the only thing I dislike is the fact that he works for Nvidia, which means if we let them it will just become another licensed attack against the overall game industry).

                                I'll start a thread to show the many great works aimed at solving this issue and let people decide what's the best direction forward, short term and long term.
                                Funny story: the source code for Deep G-Buffers has just been released!

                                http://graphics.cs.williams.edu/papers/DeepGBuffer14/

                                Wonder if anyone would like to take a crack at implementing this.

                                Comment
