Is light building CPU only?


    #16
    Originally posted by FrankieV View Post
    As I mentioned, GPU currently has the same problem as physics.

    Memory requirements are not a GPU issue: the rendering solution uses bucket fill to render to (my guess) the G-buffer, so each rendered bucket could be streamed to the hard drive using very little video memory.

    NVIDIA is the king of CUDA GPU rendering, and this is how it works in theory:

    http://www.nvidia.ca/object/what-is-gpu-computing.html

    The interesting question for Epic, I guess, is whether Lightmass uses bucket fill or scanline rendering for things like lightmaps. If scanline, there is no hope of decreasing render times short of network rendering. If bucket fill, you could cut render times by buying a CPU with more cores, but that is still nowhere near the thousands of cores a GPU can bring to bear through CUDA or OpenCL.
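    To make the bucket idea concrete, here is a minimal CUDA sketch of tile-by-tile ("bucket") rendering that keeps only one small tile in video memory and streams each finished tile to disk. The tile size, the placeholder shading function, and the output file are invented for illustration; this is not how Lightmass or any production renderer actually works.

```cuda
// Render a huge image one small tile ("bucket") at a time: only a single tile
// buffer ever lives in video memory, and each finished tile is streamed to disk.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void shadeTile(float* tile, int tileW, int tileH,
                          int offsetX, int offsetY, int imageW, int imageH)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= tileW || y >= tileH) return;

    // Placeholder "shading": a gradient based on the pixel's global position.
    int gx = offsetX + x;
    int gy = offsetY + y;
    tile[y * tileW + x] = float(gx + gy) / float(imageW + imageH);
}

int main()
{
    const int imageW = 4096, imageH = 4096;   // the full image never sits on the GPU
    const int tileW  = 256,  tileH  = 256;    // only one bucket at a time does (~256 KB)

    float* dTile = nullptr;
    cudaMalloc((void**)&dTile, tileW * tileH * sizeof(float));
    std::vector<float> hTile(tileW * tileH);

    FILE* out = fopen("buckets.raw", "wb");
    dim3 block(16, 16);
    dim3 grid((tileW + 15) / 16, (tileH + 15) / 16);

    for (int oy = 0; oy < imageH; oy += tileH) {
        for (int ox = 0; ox < imageW; ox += tileW) {
            shadeTile<<<grid, block>>>(dTile, tileW, tileH, ox, oy, imageW, imageH);
            cudaMemcpy(hTile.data(), dTile, hTile.size() * sizeof(float),
                       cudaMemcpyDeviceToHost);                   // implicit sync
            fwrite(hTile.data(), sizeof(float), hTile.size(), out);  // stream bucket out
        }
    }

    fclose(out);
    cudaFree(dTile);
    return 0;
}
```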

    More about CUDA.

    https://developer.nvidia.com/cuda-zone

    Which puts a supercomputer within a price range that is now reasonable for the average indie developer's budget.



    So yeah, the tech is there, but until it becomes mainstream and can be bought off the shelf at Best Buy, usability is still an issue.
    Memory requirements will remain a big issue until GPU memory becomes scalable across multiple graphics cards, or until a virtual GPU memory system (GPU RAM - CPU RAM - HDD) is built. I actually ran into many rendering failures (Blender Cycles) with 20 million+ triangles on a 980 Ti (6 GB) due to lack of memory.
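    A piece of that "virtual GPU memory" idea exists today as CUDA unified (managed) memory, where an allocation can exceed physical video memory and pages migrate between GPU RAM and system RAM on demand. A minimal sketch follows; note that transparent oversubscription needs a Pascal-or-newer GPU (so not the 980 Ti above), and the sizes and kernel are arbitrary examples, not anything Cycles or Lightmass does.

```cuda
// A managed allocation larger than a 6 GB card: pages migrate automatically
// between system RAM and GPU RAM (oversubscription needs Pascal or newer).
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float* data, size_t n, float s)
{
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= s;
}

int main()
{
    const size_t n = (size_t)2 * 1024 * 1024 * 1024;   // 2G floats = 8 GB
    float* data = nullptr;
    if (cudaMallocManaged((void**)&data, n * sizeof(float)) != cudaSuccess) {
        fprintf(stderr, "managed allocation failed\n");
        return 1;
    }

    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;      // first touched by the CPU

    const int threads = 256;
    const int blocks  = (int)((n + threads - 1) / threads);
    scale<<<blocks, threads>>>(data, n, 2.0f);           // pages fault over to the GPU
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);                   // and fault back on CPU access
    cudaFree(data);
    return 0;
}
```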

    Tesla... I would say please don't overestimate Tesla. The fastest GPUs are actually the GeForce cards. Teslas are targeted at the enterprise market: they have more expensive capacitors and ECC RAM (error correction), but they are in general slower than the best GeForce card (e.g., the 1080). The reason supercomputers use Teslas is that they need to install so many cards (100+); in that situation the error rate matters a lot to both system engineers and customers. But for indie developers, a 4-way GeForce 1080 setup (I bet its performance beats the eight best Teslas) is much more adequate. Tesla also requires a server OS, server motherboard, etc., which means many headaches...
    Last edited by Jiffycrew; 08-18-2016, 05:30 PM.
    https://jiffycrew.com



      #17
      Why can Redshift render giant scenes with its "out of core" technology, then, if memory were such a big problem?
      It's definitely doable, because others are already doing it.
      And who says the engine has to run at the same time? Why not shut the engine down and use all the GPU power for baking?
      Or use a second GPU for baking...
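      At the API level, pointing offline work at a second GPU is simple; here is a minimal CUDA sketch of enumerating devices and selecting a secondary one. Which index is "the second GPU" and whether a given baker honours such a choice are assumptions for illustration only.

```cuda
// Enumerate CUDA devices and direct subsequent work (e.g. a bake) to a second GPU.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("CUDA devices found: %d\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  [%d] %s, %.1f GB\n", i, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }

    if (count > 1) {
        cudaSetDevice(1);   // all later allocations/kernels in this thread go to GPU 1
        printf("Baking work would be submitted to device 1.\n");
    } else {
        printf("Only one GPU; baking would have to share it with the editor.\n");
    }
    return 0;
}
```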

      I have three GTX 980s in my PC and I have been doing heavy scene renderings without any problems for almost three years now.
      It's just a question of will.
      www.c3d.at



        #18
        Originally posted by Talismansa View Post
        @darthviper107 I just tried googling for network rendering using Swarm Agent, and can only find unofficial tutorials for UDK.
        Is there any official documentation for UE4, or is the process still the same?
        Easy guide to setting up Swarm: https://iamsparky.wordpress.com/2010...iple-machines/



          #19
          Redshift is a renderer that does a specific job with its own specific approach; it is a hybrid, but it is still not the end-all, be-all solution, and it still has tons of missing features and other limitations for full-blown productions (as is the nature of GPU renderers). This discussion about GPU renderers has been had countless times before, and there are very valid reasons why GPU rendering is not picking up, at least not in the foreseeable future. If it were that simple, all the big studios and all the best renderers such as V-Ray, Arnold, RenderMan and Hyperion (Disney) would have gone the GPU way, and some of these renderers have recently been rewritten entirely.

          Also, GPU is not "cheaper" than CPU: buying a couple of old Xeons with 32-64 GB of RAM to build a render farm in a small room would easily beat any modern GPU card on price, and it gives you a solid, tested pipeline with no surprises.

          Also, putting your entire rendering pipeline at the mercy of NVIDIA and its "super cards" is a suicidal situation; you would be asking for trouble.

          You are upset about waiting two days for a render? Welcome to the world of rendering. Some of us coming from the VFX world would be happy to have just two days' worth of renders, when we sometimes wait weeks or months for a one-minute scene.

          If there are any improvements to be made to Lightmass, we hope it adopts a more V-Ray-like brute force/light cache hybrid approach for baking GI instead of the old Mental Ray-style final gather approach. That should usually bake nicer, more accurate GI, perhaps even faster; then again, games have low-resolution lightmaps, and at some point that quality-over-resolution trade-off makes it questionable whether it is worth implementing at all. But I think it would only benefit Lightmass. Also, by the time Lightmass is updated, far in the future, we would probably have dynamic GI performant enough to be production ready. But that's just wishful thinking.
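          For reference, the "brute force" half of that idea boils down to Monte Carlo hemisphere sampling per lightmap texel. The CUDA sketch below shows the shape of that loop under heavy simplifying assumptions: every texel faces straight up, the "scene" is just an unoccluded analytic sky, there are no bounces and no light cache, and all sizes and sample counts are made up.

```cuda
// Brute-force irradiance for each lightmap texel: average N cosine-weighted
// hemisphere samples of a toy sky. No geometry, no occlusion, no light cache.
#include <cuda_runtime.h>
#include <curand_kernel.h>
#include <cstdio>

__global__ void bakeTexels(float* irradiance, int width, int height, int samples)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    curandState rng;
    curand_init(1234ULL, (unsigned long long)(y * width + x), 0, &rng);

    // Assume every texel faces straight up (+Z); with a sky that depends only on
    // elevation, the azimuth of each sample does not matter.
    float sum = 0.0f;
    for (int s = 0; s < samples; ++s) {
        float u1 = curand_uniform(&rng);
        float cosTheta = sqrtf(fmaxf(0.0f, 1.0f - u1));   // cosine-weighted elevation
        float radiance = 0.2f + 0.8f * cosTheta;          // toy sky, brighter at the zenith
        sum += radiance;                                   // cos/pdf cancel to pi/N
    }
    // Monte Carlo estimate of irradiance: E ~= (pi / N) * sum(L)
    irradiance[y * width + x] = 3.14159265f * sum / float(samples);
}

int main()
{
    const int w = 64, h = 64, samples = 256;
    float* d = nullptr;
    cudaMalloc((void**)&d, w * h * sizeof(float));

    dim3 block(8, 8), grid((w + 7) / 8, (h + 7) / 8);
    bakeTexels<<<grid, block>>>(d, w, h, samples);

    float center = 0.0f;
    cudaMemcpy(&center, d + (h / 2) * w + w / 2, sizeof(float), cudaMemcpyDeviceToHost);
    printf("center texel irradiance ~ %f\n", center);

    cudaFree(d);
    return 0;
}
```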



            #20
            What? I think you missed some years, dude...

            Redshift is not hybrid, it's GPU only.
            Also, it's cheaper to buy some GPUs than to buy a whole new workstation.
            GPU renderers are rock solid by now.
            Also, V-Ray, Arnold and RenderMan are already going GPU (partly).
            Why is trusting NVIDIA suicidal? I have been using NVIDIA cards forever without a single problem.
            You wait weeks and months for a render? Maybe you should consider switching from your old Xeons to GPU? You know, time is money.

            Before, I had a 15-PC render farm; now I render on two PCs with three NVIDIA cards each, and a lot faster.
            www.c3d.at



              #21
              What kind of sick video cards do you have? I keep reading complaints about the video memory not being enough. Sure it is; have you ever tried Lumion? I have 6 GB, and I want a simple answer to the question raised in the first place (UE4 staff answers only): is there a method to stop using 100% CPU to build/compile lighting and use more GPU? I once read you can write a script for that, but I forgot where. PLEASE ANSWER THE QUESTION.



                #22
                There's now a GPU Lightmass renderer, but you still need enough GPU memory to load all of the assets onto the GPU. You also can't combine the two: you either use the classic Lightmass, which is 100% CPU, or the GPU Lightmass renderer, which is 100% GPU.
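                Whether a scene will fit is at least easy to check up front. A minimal CUDA sketch of querying free video memory follows; the 4 GB "asset size" is a made-up placeholder, and GPU Lightmass does not expose anything like this directly.

```cuda
// Query free/total video memory before attempting to upload a scene's assets.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);

    const double gb = 1024.0 * 1024.0 * 1024.0;
    printf("GPU memory: %.2f GB free of %.2f GB\n", freeBytes / gb, totalBytes / gb);

    const size_t assetBytes = (size_t)4 * 1024 * 1024 * 1024;  // pretend the scene needs 4 GB
    if (assetBytes > freeBytes)
        printf("The scene would not fit; expect the GPU bake to fail.\n");
    else
        printf("The scene should fit in video memory.\n");
    return 0;
}
```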



                  #23
                  Originally posted by darthviper107 View Post
                  There's now a GPU Lightmass renderer, but you still need enough GPU memory to load all of the assets onto the GPU. You also can't combine the two: you either use the classic Lightmass, which is 100% CPU, or the GPU Lightmass renderer, which is 100% GPU.
                  OK, thank you. And how do I use the GPU Lightmass? I would like to use two GPUs.



                    #24
                    https://forums.unrealengine.com/deve...s-gpulightmass



                      #25
                      That memory argument is inaccurate: in CUDA, as in DX12/Vulkan, you can specify host-visible memory buffers and, of course, stream data from RAM. You could also split the levels into chunks or perform progressive light baking, keeping only the parts needed for a frame in memory. How do you think dynamic lighting is calculated?
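                      For the CUDA side of that claim, a minimal sketch of a host-visible (pinned, mapped) buffer follows: the data stays in system RAM and the kernel reads it over the bus, so it never has to fit in video memory. The buffer size and the toy kernel are arbitrary examples, not anything Lightmass does.

```cuda
// A pinned, mapped ("host visible") buffer: the data lives in system RAM and the
// kernel reads it directly over PCIe, so it never occupies video memory.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void sumBuffer(const float* data, size_t n, float* result)
{
    // Deliberately naive single-thread sum, just to show a kernel touching host RAM.
    if (blockIdx.x == 0 && threadIdx.x == 0) {
        float s = 0.0f;
        for (size_t i = 0; i < n; ++i) s += data[i];
        *result = s;
    }
}

int main()
{
    cudaSetDeviceFlags(cudaDeviceMapHost);           // allow mapped pinned allocations

    const size_t n = 1 << 20;                        // 1M floats, kept in host RAM
    float* hData = nullptr;
    cudaHostAlloc((void**)&hData, n * sizeof(float), cudaHostAllocMapped);
    for (size_t i = 0; i < n; ++i) hData[i] = 1.0f;

    float* dData = nullptr;                          // device-side view of the same memory
    cudaHostGetDevicePointer((void**)&dData, hData, 0);

    float* dResult = nullptr;
    cudaMalloc((void**)&dResult, sizeof(float));

    sumBuffer<<<1, 1>>>(dData, n, dResult);
    cudaDeviceSynchronize();

    float result = 0.0f;
    cudaMemcpy(&result, dResult, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %.0f (expected %zu)\n", result, n);

    cudaFree(dResult);
    cudaFreeHost(hData);
    return 0;
}
```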
