    Is light building CPU only?

    Is it possible to use the GPU to build the lights?
    What about using both GPU and CPU? Or some kind of network rendering, like you can do with V-Ray?

    I really hope it's not limited to just the one CPU the file is on.

    #2
    Lightmass is a CPU renderer. Building lighting requires loading the entire scene into memory, though, and most GPUs don't have much memory; even the most expensive ones might go up to 12 GB, which isn't much for a game level. The vast majority of people wouldn't be able to take advantage of a GPU renderer. It also wouldn't necessarily be faster, since waiting for the noise to clear up enough would still take quite a while. GPU renderers are good if you have multiple GPUs and your scene fits in memory, but otherwise the limitations are going to be an issue.
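The VRAM argument is just a fit check. A rough sketch (all numbers illustrative, not Lightmass internals, and the overhead factor is an assumption):

```python
# A GPU bake needs the whole scene resident in VRAM, so the question
# is simply whether scene data plus working overhead fits the card.

def fits_in_vram(scene_gb, vram_gb, overhead=0.2):
    """Return True if the scene plus working overhead fits in GPU memory."""
    return scene_gb * (1 + overhead) <= vram_gb

scene_gb = 14.0  # made-up mid-sized level: geometry + textures + lightmaps
print(fits_in_vram(scene_gb, vram_gb=12))  # high-end 12 GB card
print(fits_in_vram(scene_gb, vram_gb=32))  # typical system RAM
```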



      #3
      So how exactly do you go about making things faster? Surely the studios making those AAA titles aren't baking the lights with just one computer.

      And what if I have a big level that takes 2 days to build, and then I realize I forgot to add a mesh to the scene? Do I have to rebuild for another 2 days?



        #4
        No, Lightmass has network rendering support--they push the lighting build to their render farm. But even then, past a certain point they might drop static lighting and go with a lower-quality dynamic solution.
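Why a farm helps can be sketched in a few lines (hypothetical job costs, naive round-robin scheduling; Swarm's real scheduler is smarter):

```python
# Split the bake's work items (e.g. per-object lightmap jobs) across N
# agents; wall-clock time is roughly the largest share, not the total.

def split_jobs(jobs, n_agents):
    """Round-robin the job list across agents (greedy would balance better)."""
    return [jobs[i::n_agents] for i in range(n_agents)]

job_costs = [5, 3, 8, 2, 7, 4, 6, 1]  # minutes per job, made up
for agents in (1, 4):
    shares = split_jobs(job_costs, agents)
    wall_clock = max(sum(s) for s in shares)
    print(agents, "agent(s) ->", wall_clock, "min")
```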



          #5
          I have an i7 and it's possible to build large scenes with high quality in under 10 minutes if you just optimize your lightmap resolutions and general lighting situation. For daytime outdoor levels, if you have more than one sunlight and one skylight, something's wrong. Landscape does not need the highest resolution lighting if it's very smooth. While AO baking will definitely require higher res lightmaps, you can typically save a lot of space by lowering the lightmap resolution of objects in shadow. If you have a chandelier casting light, you can make each bulb dynamic and use a single generic static light to calculate bounce. This would be much cheaper than a stationary solution (which would have serious trouble with overlapping) and would look much better than a static solution by itself.

          People should know the limits of lightmass going in, though. While you CAN use it to cast gorgeous stained glass lighting on the environment, you CAN'T just crank up the resolution of a massive project with thousands of meshes, build on one CPU with less than 8 GB of memory, and expect a quick and easy build.
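The savings from trimming resolution on unimportant objects add up fast. A rough budget sketch (assumed 4 bytes per texel and made-up object counts; real Lightmass storage differs, but the scaling is the same):

```python
# Total lightmap memory is the sum over objects of resolution^2 texels.
BYTES_PER_TEXEL = 4

def lightmap_mb(resolutions):
    """Total lightmap memory in MB for a list of per-object resolutions."""
    return sum(r * r * BYTES_PER_TEXEL for r in resolutions) / 1024**2

# 50 hero meshes at 512 plus 200 background meshes at 256:
naive = lightmap_mb([512] * 50 + [256] * 200)
# Drop the background meshes (in shadow, smooth landscape, etc.) to 64:
trimmed = lightmap_mb([512] * 50 + [64] * 200)
print(round(naive), "MB ->", round(trimmed), "MB")
```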



            #6
            [MENTION=641]mariomguy[/MENTION] Thanks. I'm doing interior archviz, and there are many lights in the scene. I think all the shadows are important, so I just set a blanket lightmap resolution of 1024, and even then the shadows still look quite bad.
            [Attached screenshot: HighresScreenshot00002.png]

            Are there any good advanced tutorials on how to bake lighting?



              #7
              [MENTION=62]darthviper107[/MENTION] I just tried googling for network rendering using Swarm Agent, and I can only find unofficial tutorials for UDK.
              Is there any official documentation for UE4, or is the process still the same?



                #8
                In that particular situation, casting shadows on small beams with multiple overlapping lights, you NEED a higher resolution. I wish UE4's dynamic lighting were further along than this, but it's not. You can use a dynamic ambient occlusion post process to cover the shadow transitions in smaller, tighter, lower-resolution areas, but that's only delaying the problem. The only other alternative is to literally change your scene so you don't have small objects casting shadows at all, or make those objects bigger so the transition isn't as obvious. Past 1K, your light bakes become horrendously slow and drain memory absurdly. Take a look at your scene in the lightmap density view mode to see where you can optimize, and increase only what you need.
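The reason bakes past 1K hurt so much is that texel count grows with the square of the resolution. A sketch of the arithmetic (illustrative only):

```python
# Each doubling of lightmap resolution quadruples the texels to bake
# and store, which is why costs explode past 1K.

def texels(res):
    """Texel count for a square lightmap of the given edge resolution."""
    return res * res

for res in (256, 512, 1024, 2048):
    print(res, texels(res), f"x{texels(res) // texels(256)} vs 256")
```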



                  #9
                  Why not use stationary lights?



                    #10
                    If you use area lights or soft lights, you don't need to worry about hard sharp shadows with aliasing.



                      #11
                      GPU rendering is possible, but usability is split into two camps, CUDA and OpenCL, so it has the same problem in Unreal 4 as hardware physics, where NVIDIA supports CUDA and ATI supports OpenCL.

                      Example of CUDA based rendering.



                      As mentioned, video memory would become a problem for any kind of real-time rendering, but environment Lightmass rendering could be just as fast as, if not faster than, network-based rendering, as well as a huge bump in per-frame rendering of 4K images.

                      At the very least it would make for a very useful 3rd-party product as a rendering solution for Unreal 4, like a V-Ray or RenderMan plug-in solution for, say, 3ds Max.
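The CUDA model being described is "run one small kernel per element of a grid, with no dependence between elements." A pure-Python stand-in of that idea (the shading function is a placeholder, not real lighting math):

```python
# On a GPU each grid index would map to a thread running this kernel;
# here we just map over the indices serially to show the shape of it.

def shade_texel(i, width):
    """Placeholder per-texel 'kernel': depends only on its own index."""
    x, y = i % width, i // width
    return (x + y) % 256

width, height = 8, 4
lightmap = [shade_texel(i, width) for i in range(width * height)]
print(len(lightmap), "texels, each computed independently")
```

Because no texel reads another texel's result, the work parallelizes across however many cores exist, which is the whole appeal of GPU baking.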
                      Last edited by FrankieV; 08-18-2016, 06:24 AM.
                      Clarke's third law: Any sufficiently advanced technology is indistinguishable from magic.
                      Custom Map Maker Discord
                      https://discord.gg/t48GHkA
                      Urban Terror https://www.urbanterror.info/home/



                        #12
                        Originally posted by FrankieV View Post
                        GPU rendering is possible, but usability is split into two camps, CUDA and OpenCL, so it has the same problem in Unreal 4 as hardware physics, where NVIDIA supports CUDA and ATI supports OpenCL.

                        Example of CUDA based rendering.



                        As mentioned, video memory would become a problem for any kind of real-time rendering, but environment Lightmass rendering could be just as fast as, if not faster than, network-based rendering, as well as a huge bump in per-frame rendering of 4K images.

                        At the very least it would make for a very useful 3rd-party product as a rendering solution for Unreal 4, like a V-Ray or RenderMan plug-in solution for, say, 3ds Max.
                        UE4 doesn't take advantage of the GPU for PhysX--Epic didn't want a feature that's unbalanced across hardware.

                        For building lighting, the GPU would still have the memory issue.



                          #13
                          For Lightmass, whose purpose is lightmap baking, I think GPU-based rendering can actually be slower. Octane, Cycles, etc. are useful in interactive progressive rendering situations, but they do not provide good baking solutions. V-Ray RT 3.0 has a baking mode, but it is normally slower than V-Ray's CPU mode with irradiance caching, because there are no good GPU noise-reduction algorithms. I don't know much about the internals of the current UE4 Lightmass, but if it can bake scenes in 10 minutes or so, I bet it is faster than current naive (path tracing based) GPU rendering/baking. Still, I would agree there is room for GPU baking solutions in the future.
                          https://jiffycrew.com
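The noise argument above is the standard Monte Carlo result: error falls off only as 1/sqrt(samples), so halving the noise costs 4x the samples. A toy demonstration with a pi estimate standing in for a path-traced texel:

```python
import math
import random

def estimate_pi(samples, seed=1):
    """Monte Carlo pi estimate; noise shrinks like 1/sqrt(samples)."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(samples))
    return 4.0 * hits / samples

# Quadrupling the samples only halves the expected error:
for n in (1_000, 4_000, 16_000):
    print(n, round(abs(estimate_pi(n) - math.pi), 4))
```

This is why raw GPU throughput doesn't automatically translate into fast clean bakes: the last bit of noise is the expensive part.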



                            #14
                            Originally posted by darthviper107 View Post
                            UE4 doesn't take advantage of the GPU for PhysX--Epic didn't want a feature that's unbalanced across hardware.

                            For building lighting, the GPU would still have the memory issue.
                            As I mentioned, the GPU has the same current problem as physics.

                            Memory requirements are not a GPU issue, as the rendering solution uses bucket rendering--into, my guess, the g-buffer--so each rendered bucket could be streamed to the hard drive using very little video memory.

                            NVIDIA is king of CUDA GPU rendering, and this is how it works in theory.

                            http://www.nvidia.ca/object/what-is-gpu-computing.html

                            The interesting question for Epic, I guess, is whether Lightmass uses bucket or scanline rendering for things like lightmaps. If scanline, there is no hope of decreasing render times short of network rendering. If bucket, one could decrease rendering time by buying a CPU with more cores, but that's still not even close to the thousands of cores the GPU can bring to bear through CUDA or OpenCL.
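The bucket idea in a nutshell: carve the frame into tiles so only one tile's worth of data has to be live at a time. A sketch (not Lightmass code; tile size and frame size are arbitrary):

```python
# Tile an image into buckets; peak live pixels = one bucket, which is
# the low-memory behaviour being described above.

def buckets(width, height, size):
    """Yield (x, y, w, h) tiles covering a width x height image."""
    for y in range(0, height, size):
        for x in range(0, width, size):
            yield x, y, min(size, width - x), min(size, height - y)

tiles = list(buckets(1920, 1080, 64))
print(len(tiles), "tiles for a 1080p frame")
peak = max(w * h for _, _, w, h in tiles)
print(peak, "pixels live at once")
```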

                            More about CUDA.

                            https://developer.nvidia.com/cuda-zone

                            This puts the supercomputer within a now-reasonable price range for the average indie developer's budget.



                            So yeah, the tech is there, but until it becomes mainstream and can be bought off the shelf at Best Buy, usability is still an issue.



                              #15
                              I don't think bucket rendering has anything to do with memory--when I render with a bucket renderer, it still uses the full amount of memory required. I don't know of any GPU renderer that uses buckets or scanline, but Redshift has a way of falling back to system memory if the GPU runs out; I'm guessing that lowers performance, though. That's the only GPU renderer I know of that doesn't rely solely on GPU memory.

                              Also--CPU cores and GPU cores are not the same thing; having more GPU cores doesn't automatically mean more performance than fewer CPU cores.

