Global Illumination alternatives


    #31
    Is it the consensus that other techniques (such as path tracing) will outperform voxel cone tracing once hardware has advanced far enough to handle SVOs efficiently? If so, what is the key problem? Is it that voxel data structures are not well suited to GPUs?



      #32
      Originally posted by StaticTheFox View Post
      Funny story, the source code for Deep G-Buffers has just been released!

      http://graphics.cs.williams.edu/papers/DeepGBuffer14/

      Wonder if anyone would like to take a crack at implementing this.
      Yeah, I've had a quick chat with Morgan about the work, and about ideas for speed increases and quality. Morgan's a top boy, he always shares the good ****! If Epic staff don't port this OpenGL code over to UE4, I'll take a look. Check the video though (it's just a tech demo, so it doesn't have PBR or any of the other bells and whistles UE4 provides); mixed with UE4, this could be a very nice screen-space GI system that could get close to photoreal results with the right work.



        #33
        Originally posted by robin.ender View Post
        Is it the consensus that other techniques (such as path tracing) will outperform voxel cone tracing once hardware has advanced far enough to handle SVOs efficiently? If so, what is the key problem? Is it that voxel data structures are not well suited to GPUs?
        Sparse voxel octrees aren't good for GPUs simply because they don't map linearly to memory, so all the nice parallelisation that GPUs are good at goes out the window. That's why many people looking at voxel cone tracing have switched to uniform 3D textures, which is a very nice speed-up but can take up a lot more RAM if you aren't very, very clever.
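To illustrate the memory-layout point above, here is a minimal Python sketch (the names and structures are my own, not from any engine) contrasting the constant-time, coalescing-friendly address computation of a uniform 3D texture with the data-dependent pointer chase an SVO lookup needs:

```python
def flat_index(x, y, z, res):
    # Uniform 3D texture: the address is one multiply-add chain,
    # identical for every thread, so neighbouring GPU threads touch
    # neighbouring memory (coalesced reads).
    return x + res * (y + res * z)

def svo_lookup(root, x, y, z, depth):
    # Sparse voxel octree: one data-dependent pointer chase per level.
    # Each step depends on the node fetched in the previous step, which
    # defeats coalescing and leaves the GPU stalled on memory latency.
    node = root
    for level in range(depth - 1, -1, -1):
        child = (((x >> level) & 1)
                 | (((y >> level) & 1) << 1)
                 | (((z >> level) & 1) << 2))
        node = node["children"][child]
        if node is None:
            return None  # sparse: empty space stores nothing at all
    return node["value"]

# The RAM cost mentioned in the post: a dense 512^3 RGBA8 volume is
# 512**3 * 4 bytes = 512 MiB before mips, which is why you need to be
# "very, very clever" (compression, cascades) with uniform grids.
DENSE_BYTES = 512 ** 3 * 4
```

The trade is exactly the one described: the flat grid wins on access pattern, while the octree wins on storing nothing for empty space.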

        Originally posted by KingBadger3D View Post
        Yeah ive had a quick chat with Morgan about the work, Idea's for speed increases & quality. Morgans a top boy, he always share's the good ****!. If Epic staff don't port this Opengl code over to UE4 Ill take a look. Check the video though (which is just a tech demo, doesnt have PBR of any of the other bells and whistles UE4 provides) mixed with UE4 this could be a very nice screen space GI system that could well reach near to photoreal results with the right work.

        Bleh, just another "look at what we can do in Sponza!" type demo. Something that's highly dependent on view direction and scene complexity isn't a good solution.

        Screen-space tracing was invented because you already had to do all the heavy lifting of rendering everything to screen space to begin with, and re-using it relatively cheaply made sense. This is a wrong-headed approach that takes the "cheap" portion of 2.5D screen-space tracing and extends it by adding yet more of the brute-force portion: rendering even more of the scene to screen space up front.
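The "re-use what you already rendered" idea can be sketched as a toy 2.5D depth-buffer march (the buffer contents, step sizes and coordinates here are made up purely for illustration):

```python
def screen_space_march(depth_buffer, start, step, max_steps=32):
    # depth_buffer[y][x] holds the depth the rasteriser already
    # produced -- the "heavy lifting" has been paid for up front.
    x, y, z = start
    dx, dy, dz = step
    for _ in range(max_steps):
        x, y, z = x + dx, y + dy, z + dz
        xi, yi = int(x), int(y)
        if not (0 <= yi < len(depth_buffer) and 0 <= xi < len(depth_buffer[0])):
            return None  # ray left the screen: no data exists there
        if depth_buffer[yi][xi] <= z:
            return (xi, yi)  # hit: reuse the already-shaded pixel here
    return None
```

The `None` branches are the weakness both posts are arguing about: anything off-screen, or behind the camera, simply has no data to hit.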

        Besides, notice (a) that it's recorded at just 30 fps on a Titan, which is really a nice minimum performance requirement to have, and (b) that during the global illumination demo they never do something an actual player would do, which is just turn around and face away from the light. Because then all the lighting would disappear! Really, I'd not bother with this.



          #34
          Originally posted by Frenetic Pony View Post
          Sparse voxel octrees aren't good for GPUs simply because they don't map linearly to memory, so all the nice parallelisation that GPUs are good at goes out the window. That's why many people looking at voxel cone tracing have switched to uniform 3D textures, which is a very nice speed-up but can take up a lot more RAM if you aren't very, very clever.



          Bleh, just another "look at what we can do in Sponza!" type demo. Something that's highly dependent on view direction and scene complexity isn't a good solution.

          Screen-space tracing was invented because you already had to do all the heavy lifting of rendering everything to screen space to begin with, and re-using it relatively cheaply made sense. This is a wrong-headed approach that takes the "cheap" portion of 2.5D screen-space tracing and extends it by adding yet more of the brute-force portion: rendering even more of the scene to screen space up front.

          Besides, notice (a) that it's recorded at just 30 fps on a Titan, which is really a nice minimum performance requirement to have, and (b) that during the global illumination demo they never do something an actual player would do, which is just turn around and face away from the light. Because then all the lighting would disappear! Really, I'd not bother with this.
          I'm using only a single GTX 480, and it runs near 30 fps on it. I don't see why it wouldn't be faster on, say, a GTX 780.



            #35
            Originally posted by StaticTheFox View Post
            I'm using only a single GTX 480, and it runs near 30 fps on it. I don't see why it wouldn't be faster on, say, a GTX 780.
            Yeah, Pony Boy is full of *****. The video was recorded at 30 fps; it runs at better than 30 fps. Just because it's screen space doesn't mean lighting isn't calculated. It's the same as lighting with no GI: if you have lights within the screen frustum that affect what you see, the GI is still effective. Also consider that this isn't the one-pass version in the paper; there are many ways to speed this up.

            Better than the lies about methods that Pony Boy goes on about, what a ****!

            PS: the atrium scene with tables and chairs is a very high-poly scene; it's not a hack, and nothing is baked. You keep going on about the code you're working on, so show us examples from your engine. If you want a look at mine, just search Kingbadger3d on YouTube and look for NexusGL.
            Last edited by KingBadger3D; 06-19-2014, 05:35 PM.



              #36
              Originally posted by KingBadger3D View Post
              Yeah, Pony Boy is full of *****. The video was recorded at 30 fps; it runs at better than 30 fps. Just because it's screen space doesn't mean lighting isn't calculated. It's the same as lighting with no GI: if you have lights within the screen frustum that affect what you see, the GI is still effective. Also consider that this isn't the one-pass version in the paper; there are many ways to speed this up.

              Better than the lies about methods that Pony Boy goes on about, what a ****!

              PS: the atrium scene with tables and chairs is a very high-poly scene; it's not a hack, and nothing is baked. You keep going on about the code you're working on, so show us examples from your engine. If you want a look at mine, just search Kingbadger3d on YouTube and look for NexusGL.
              You should try implementing it, it'd be the most epic project ever!



                #37
                Originally posted by KingBadger3D View Post
                Yeah, Pony Boy is full of *****. The video was recorded at 30 fps; it runs at better than 30 fps. Just because it's screen space doesn't mean lighting isn't calculated. It's the same as lighting with no GI: if you have lights within the screen frustum that affect what you see, the GI is still effective. Also consider that this isn't the one-pass version in the paper; there are many ways to speed this up.

                Better than the lies about methods that Pony Boy goes on about, what a ****!

                PS: the atrium scene with tables and chairs is a very high-poly scene; it's not a hack, and nothing is baked. You keep going on about the code you're working on, so show us examples from your engine. If you want a look at mine, just search Kingbadger3d on YouTube and look for NexusGL.
                Oh fine, go ahead and implement it, or at least try to understand what the paper is about. You don't have to listen to me, of course; I'm just trying to save someone time and effort. Or you could just download the demo if you are incapable of visualising what this technique actually does, and see what it does for yourself.



                  #38
                  Originally posted by Frenetic Pony View Post
                  Oh fine, go ahead and implement it, or at least try to understand what the paper is about. You don't have to listen to me, of course; I'm just trying to save someone time and effort. Or you could just download the demo if you are incapable of visualising what this technique actually does, and see what it does for yourself.
                  I've been told by the moderators to ignore you. Again, you keep going on about the code you're working on; any videos? In fact, anything worth a squirt of ****? Show us you're right. If not, just get to the back of the bus.



                    #39
                    The Deep G-Buffer GI seems like it could be a nice addition; it retains some properties of SSDO, produces very good AO, and it's relatively cheap.

                    It's still not a real solution by itself, I'm afraid. It's screen space, and even with all the tricks they used it doesn't take into account the actual volume of light scattered into the whole scene. All light behind the camera is not scattering anything. Which is fine, BTW, as a real practical solution could be to use coarse volume sampling to get interactions from the areas outside the camera frustum and blend them over with DGBGI when they come into view.
                    Sort of like the reflection system we have right now, where samples from the reflection probes get blended with SSR.
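The probe/SSR-style blend described above could look something like this (a hand-wavy sketch; the confidence term and all names are mine, not any engine's API):

```python
def blend_gi(probe_color, ss_color, ss_confidence):
    # ss_confidence ~ 1 where screen-space data is reliable (the ray
    # hit something on screen), fading to 0 at screen edges and for
    # directions pointing out of the frustum -- there, the coarse
    # probe/volume result takes over, just as reflection probes sit
    # underneath SSR today.
    w = max(0.0, min(1.0, ss_confidence))
    return tuple(p * (1.0 - w) + s * w for p, s in zip(probe_color, ss_color))
```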

                    Still, I'd love to see strides toward implementing something more elegant, i.e. something where I don't have to carefully massage the placement of proxy probes to integrate a screen-space effect. Think about procedural levels, for instance: you will not be able to art-direct whatever volume you would need to get lighting info from. If you have some corridors, maybe with fire behind the corner, you're not going to see it. Then it suddenly pops in and scatters; that could be a nice effect, but still not really realistic. They somehow solved this using 2 or 3 levels of G-buffers, but the placement is very tight and doesn't allow much. And it still doesn't take into account the fire you already passed that's now at your back.
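For reference, the "2 or 3 levels of G-buffers" trick boils down to depth peeling with a minimum separation; a per-pixel sketch of the idea (my own simplification, not the paper's code):

```python
def two_layer_depths(fragment_depths, min_sep):
    # First layer: the nearest surface, as in a normal G-buffer.
    # Second layer: the nearest surface at least min_sep *behind* it,
    # so the second layer captures genuinely hidden geometry rather
    # than just the back face of the same thin object.
    first = min(fragment_depths)
    behind = [z for z in fragment_depths if z >= first + min_sep]
    second = min(behind) if behind else None
    return first, second
```

With one or two extra layers, GI rays that would slip "between" surfaces in a single depth buffer can still find the occluded geometry, which is where the SSDO-like properties come from.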

                    Originally posted by KingBadger3D View Post
                    I've been told by the moderators to ignore you. Again, you keep going on about the code you're working on; any videos? In fact, anything worth a squirt of ****? Show us you're right. If not, just get to the back of the bus.
                    Please calm down. After all, this is an Epic feedback forum. If you want to fight, take it to another thread.
                    Pony made an intelligent point, one that applies to every company in the world: investment cost/time vs. return. In his eyes this tech is not really something to invest time in, and the comments he made are pretty legit.
                    It's good tech, mind you, so your comments are pretty legit too. It only depends on how much time and how many resources you have to invest.
                    If he's developing something, you should be happy instead of bashing him, since it's going to be distributed at some point and everyone could benefit from it. If he doesn't deliver, well, who cares? You already have the tools to make a beautiful game, and those come from Epic, not him.

                    That being said, is there anyone willing to port the released source code of DGBGI to UE?
                    I'm just an artist with enough technical knowledge; the best I could do is make you a statue in 3D.



                      #40
                      Originally posted by max.pareschi View Post
                      That being said, is there anyone willing to port the released source code of DGBGI to UE?
                      I'm just an artist with enough technical knowledge; the best I could do is make you a statue in 3D.
                      I have downloaded the code from the link and have had a quick look at it. I plan to convert it across at some point, I'm just not sure when yet, and if I am accepted into the NVIDIA HairWorks beta program it may take even longer. So if no one has come up with anything better by that point, then I will convert it.
                      FluidSurface Plugin: https://github.com/Ehamloptiran/UnrealEngine/releases
                      TextureMovie Plugin: https://github.com/Ehamloptiran/TextureMoviePlugin



                        #41
                        Originally posted by max.pareschi View Post
                        The Deep G-Buffer GI seems like it could be a nice addition; it retains some properties of SSDO, produces very good AO, and it's relatively cheap.

                        It's still not a real solution by itself, I'm afraid. It's screen space, and even with all the tricks they used it doesn't take into account the actual volume of light scattered into the whole scene. All light behind the camera is not scattering anything. Which is fine, BTW, as a real practical solution could be to use coarse volume sampling to get interactions from the areas outside the camera frustum and blend them over with DGBGI when they come into view.
                        Sort of like the reflection system we have right now, where samples from the reflection probes get blended with SSR.

                        Still, I'd love to see strides toward implementing something more elegant, i.e. something where I don't have to carefully massage the placement of proxy probes to integrate a screen-space effect. Think about procedural levels, for instance: you will not be able to art-direct whatever volume you would need to get lighting info from. If you have some corridors, maybe with fire behind the corner, you're not going to see it. Then it suddenly pops in and scatters; that could be a nice effect, but still not really realistic. They somehow solved this using 2 or 3 levels of G-buffers, but the placement is very tight and doesn't allow much. And it still doesn't take into account the fire you already passed that's now at your back.



                        Please calm down. After all, this is an Epic feedback forum. If you want to fight, take it to another thread.
                        Pony made an intelligent point, one that applies to every company in the world: investment cost/time vs. return. In his eyes this tech is not really something to invest time in, and the comments he made are pretty legit.
                        It's good tech, mind you, so your comments are pretty legit too. It only depends on how much time and how many resources you have to invest.
                        If he's developing something, you should be happy instead of bashing him, since it's going to be distributed at some point and everyone could benefit from it. If he doesn't deliver, well, who cares? You already have the tools to make a beautiful game, and those come from Epic, not him.

                        That being said, is there anyone willing to port the released source code of DGBGI to UE?
                        I'm just an artist with enough technical knowledge; the best I could do is make you a statue in 3D.
                        Hey. Yeah, the reason I said I'd been asked by the moderators to ignore him was that I'd been given an infraction from the admins after a complaint (wonder who that was). The reason it got me annoyed was that he just wrote off the whole system like he's the foremost realtime engineer in the world, when Morgan, who came up with this and kindly shared the code, really is one of the best realtime researchers in the world. If I just put up with him writing it off, the chances of people asking for this to be included would be reduced.

                        Yes, it is screen space, but it's a far bigger step forward in terms of going beyond what screen space normally is. There's no reason why LPV couldn't be tied to this for coarse off-screen radiance, mixed correctly, like you said, with well-placed reflection probes. You could even include importance-based voxel screen sampling/disregarding like G. Papaioannou does in his screen-space voxel-based GI system, Progressive Screen-space Multi-channel Surface Voxelization, from GPU Pro 4 (which also has OpenGL code).

                        The real big reason it annoyed me was that half of what he was talking about was plundered from posts and research I've been doing into sparse voxel DAGs mixed with 4D visibility field maps for secondary-ray acceleration. I pointed the guy to my code examples (the NexusGL engine) to make the point that if you want to bash other people's research, you should at least have examples that show you know what you're talking about. Such a surprise: there were no examples.

                        Just letting people make stuff up that kind of sounds right if you don't know what you're talking about doesn't help anyone.



                          #42
                          Originally posted by StaticTheFox View Post
                          I'm using only a single GTX 480, and it runs near 30 fps on it. I don't see why it wouldn't be faster on, say, a GTX 780.
                          I'm running the demo on a GTX 680 4GB and getting around 10-15 fps with Deep G-Buffer Radiosity in Performance mode at 2560 x 1538 (8 ms for radiosity, 6 ms for the filter). The quality of the rendering speaks for itself; it's quite good compared to the pre-rendered light probe. It doesn't support PBR, so imagine this thing optimized + PBR!

                          Originally posted by Ehamloptiran View Post
                          I have downloaded the code from the link and have had a quick look at it. I plan to convert it across at some point, I'm just not sure when yet, and if I am accepted into the NVIDIA HairWorks beta program it may take even longer. So if no one has come up with anything better by that point, then I will convert it.

                          HairWorks would be a cool project, but if I had to pick, I'd pick rendering GI any day. Someone implement this!
                          Last edited by SonKim; 06-30-2014, 10:36 AM.
                          TOUR of DUTY



                            #43
                            Lots of interesting stuff in this thread.

                            At a personal level, most of us graphics programmers at Epic would love nothing more than to work on dynamic GI; however, there are a lot of other tasks to be done that affect things that are shipping. It's difficult to implement a good feature for UE4: it has to be much more robust, cross-platform, and performant than what you might do for a tech demo or a single game where you know exactly how it will be used. Now I'm just making excuses =)

                            We did get a chance recently to add a major dynamic lighting feature that will be in 4.3:
                            https://forums.unrealengine.com/show...t-shadowing%29
                            It provides medium-scale ambient occlusion for the skylight, in a way that supports dynamic scene changes like walls being broken down or constructed and doors being opened (all things that happen regularly in Fortnite). It's computed in world space, so no screen-space artifacts!
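To give a very rough flavour of world-space, distance-field-style occlusion (this is a toy in the spirit of classic SDF ambient-occlusion tricks, not Epic's actual implementation): sample the scene's signed distance field at increasing heights along the surface normal, and darken wherever geometry is closer than an unoccluded half-space would allow.

```python
def sphere_sdf(p, center, radius):
    # Signed distance from point p to the surface of a sphere.
    return sum((a - b) ** 2 for a, b in zip(p, center)) ** 0.5 - radius

def distance_field_ao(point, normal, scene_sdf, steps=5, step_size=0.2):
    # Samples nearer the surface are weighted more heavily (1/2^i).
    occlusion = 0.0
    for i in range(1, steps + 1):
        h = step_size * i
        sample = tuple(p + n * h for p, n in zip(point, normal))
        occlusion += max(0.0, h - scene_sdf(sample)) / (2 ** i)
    return max(0.0, 1.0 - 2.0 * occlusion)
```

Because the query runs against a world-space field, knocking down a wall just changes the distance field and the occlusion follows, with no dependence on what happens to be on screen.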

                            I have a lot of ideas for how to go from here to dynamic GI in UE4 but I'll keep them to myself for now.



                              #44
                              Here's a new paper on "Cascaded Voxel Cone Tracing". I'm really impressed with their dynamic GI! (https://www.youtube.com/watch?v=9bnfz3XjUxQ)

                              Link to the paper: http://fumufumu.q-games.com/archives...cing_final.pdf
                              http://fumufumu.q-games.com/archives/2014_09.php#000934
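The "cascaded" part is essentially a clipmap: nested voxel volumes around the camera whose voxel size doubles per ring, sampled like a mip chain as the cone widens. A guess at the selection logic (illustrative only, not the paper's code):

```python
import math

def pick_cascade(cone_width, base_voxel_size, num_cascades):
    # Use the coarsest cascade whose voxels still fit inside the cone
    # footprint at this point along the ray -- the 3D analogue of
    # picking a mip level from a texture gradient. Each cascade's
    # voxels are twice the size of the previous cascade's.
    if cone_width <= base_voxel_size:
        return 0
    level = int(math.log2(cone_width / base_voxel_size))
    return min(level, num_cascades - 1)
```

Spending the fine voxels only near the camera is what makes the memory budget console-friendly compared to one huge uniform volume.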
                              Last edited by SonKim; 09-03-2014, 06:27 AM.



                                #45
                                My god, those characters are creepy. The tech paper is basically like the Elemental reveal tech, but using cascades more efficiently. If that runs on PS4, then it's quite interesting.
                                UDK and UE4 programmer and Unreal Engine 4 beta tester. Currently working on commercial VR games for PSVR.
                                Deep knowledge of C++ and Blueprints. Open to freelance work.
                                Games released, Deathwave(Steam), VRMultigames(Steam), DWVR(Steam,Oculus,PSVR):
                                http://store.steampowered.com/app/463870
                                http://store.steampowered.com/app/500360
                                http://store.steampowered.com/app/520750

