Is light building CPU only?

Is it possible to use the GPU to build the lights?
What about using both the GPU and CPU? Or doing some kind of network rendering, like you can with V-Ray?

I really hope it’s not limited to just the one CPU that the file is on.

Lightmass is a CPU renderer. Building lighting requires loading the entire scene into memory, though, and most GPUs don’t have much memory; even the most expensive ones only go up to around 12 GB, which isn’t that much for building a game level. The vast majority of people wouldn’t be able to take advantage of a GPU renderer. It also wouldn’t necessarily be faster, since waiting for the noise to clear up enough would still take quite a while. GPU renderers are good if you have multiple GPUs and your scene fits in memory, but otherwise the limitations are going to be an issue.
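
To put rough numbers on the “waiting for the noise to clear up” point, here is a back-of-the-envelope sketch (hypothetical figures, not anything from Lightmass or any real GPU renderer): Monte Carlo noise falls off roughly as one over the square root of the sample count, so halving the remaining noise costs about four times the samples, whichever processor does the tracing.

```python
# Rough back-of-the-envelope sketch (not from Lightmass or any real renderer):
# Monte Carlo noise (standard error) falls off as 1/sqrt(samples), so halving
# the remaining noise costs roughly 4x the samples, regardless of CPU or GPU.

def samples_needed(base_samples, base_noise, target_noise):
    """Estimate samples required to reach target_noise, assuming error ~ 1/sqrt(N)."""
    return base_samples * (base_noise / target_noise) ** 2

# Hypothetical numbers purely for illustration:
# if 100 samples/texel leaves 10% noise, then...
for target in (0.05, 0.02, 0.01):
    print(f"{target:.0%} noise -> ~{samples_needed(100, 0.10, target):,.0f} samples/texel")
# 5% -> ~400, 2% -> ~2,500, 1% -> ~10,000 samples per texel
```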

So how exactly do you go about making things faster? Surely the studios making those AAA titles aren’t baking the lights with just 1 computer.

And what if I have a big level that takes 2 days to build, then I realise that I forgot to add a mesh into the scene. Do I have to rebuild for 2 days again?

No, Lightmass has network rendering support: studios push the lighting build to their render farm. But even then, past a certain point they might drop static lighting and go with a lower-quality dynamic solution.

I have an i7 and it’s possible to build large scenes with high quality in under 10 minutes if you just optimize your lightmap resolutions and your general lighting setup:

- For daytime outdoor levels, if you have more than one sunlight and one skylight, something’s wrong.
- Landscape does not need the highest resolution lighting if it’s very smooth.
- While AO baking will definitely require higher-res lightmaps, you can typically save a lot of space by lowering the lightmap resolution of objects in shadow.
- If you have a chandelier casting light, you can make each bulb dynamic and use a single generic static light to calculate bounce. That is much cheaper than a stationary solution (which would have serious trouble with overlapping) and looks much better than a static solution by itself.
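
To make the lightmap resolution point concrete, here is a minimal sketch with made-up object names, counts, and resolutions (not a UE4 API): per-object texel counts grow with the square of the lightmap resolution, so a blanket high setting multiplies the bake work enormously compared to a tuned budget.

```python
# A minimal sketch (hypothetical numbers, not a UE4 API) of how lightmap texels
# add up, to show why a blanket high resolution explodes build times:
# texels scale with the square of the per-object lightmap resolution.

scene = {
    # object type: (lightmap_resolution, count)
    "hero walls":         (512, 20),
    "landscape sections": (128, 50),
    "props in shadow":    (64, 300),
}

def total_texels(objects):
    return sum(res * res * count for res, count in objects.values())

tuned = total_texels(scene)
blanket_1024 = sum(1024 * 1024 * count for _, count in scene.values())
print(f"tuned budget: {tuned:,} texels")
print(f"blanket 1024: {blanket_1024:,} texels (~{blanket_1024 / tuned:.0f}x more to bake)")
```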

People should know the limits of Lightmass going in, though. While you CAN use it to cast gorgeous stained glass lighting on the environment, you CAN’T just crank up the resolution of a massive project with thousands of meshes, build on one CPU with less than 8 GB of memory, and expect a quick and easy build.

@mariomguy Thanks. I’m doing interior archviz, and there are many lights in the scene. I think all the shadows are important, so I just set a blanket lightmap resolution of 1024, and even then the shadows still look quite bad.

Are there any good advanced tutorials on how to bake lighting?

@darthviper107 I just tried googling for network rendering using Swarm Agent, and I can only find unofficial tutorials for UDK.
Is there any official documentation for UE4, or is the process still the same?

In that particular situation, with small beams casting shadows under multiple overlapping lights, you NEED a higher resolution. I wish UE4’s dynamic lighting were further along than this, but it’s not. You can use a dynamic ambient occlusion post process to cover the shadow transitions in smaller, tighter, lower-resolution areas, but that’s only delaying the problem. The only alternative besides that is to literally change your scene so you don’t have small objects casting shadows at all, or to make those objects bigger so the transition isn’t as obvious. Above 1k, your light bakes will become horrendously slow and drain memory absurdly. Take a look at your scene in the Lightmap Density view mode to see where you can optimize, and increase only what you need to.

Why not use stationary lights?

If you use area lights or soft lights, you don’t need to worry about hard sharp shadows with aliasing.

GPU rendering is possible, but usability is split into the two camps of CUDA and OpenCL, so it has the same problem in Unreal 4 as hardware physics: NVIDIA supports CUDA and ATI supports OpenCL.

Example of CUDA based rendering.

As mentioned, video memory would become a problem for any kind of real-time rendering, but environment Lightmass rendering could be just as fast as, if not faster than, network-based rendering, as well as giving a huge bump to per-frame rendering of 4K images.

At the very least it would make for a very useful third-party product, a rendering solution for Unreal 4 the way a V-Ray or RenderMan plug-in is for, say, 3ds Max. :wink:

UE4 doesn’t take advantage of the GPU for PhysX; Epic didn’t want an unbalanced feature.

For building lighting, the GPU would still have the memory issue.

For Lightmass, whose purpose is lightmap baking, I think GPU-based rendering can actually be slower. Octane, Cycles, etc. are useful in interactive progressive rendering situations, but they do not provide good baking solutions. V-Ray RT 3.0 has a baking mode, but it is normally slower than V-Ray’s CPU mode with irradiance caching, because there aren’t good GPU noise-reduction algorithms. I don’t know much about the internals of the current UE4 Lightmass, but if it can bake scenes in 10 minutes or so, I bet it is faster than current naive (path-tracing-based) GPU rendering/baking. Still, I would agree there is room for GPU baking solutions in the future.
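
For anyone wondering why irradiance caching gives the CPU such an edge over naive path tracing, here is a heavily simplified sketch of the general idea (all names, radii, and numbers are made up; this is not V-Ray’s or Lightmass’s actual algorithm): expensive GI samples are computed at sparse points and reused for nearby texels, while brute-force path tracing pays the full sampling cost at every single texel.

```python
# A heavily simplified sketch of the general idea behind irradiance caching
# (not V-Ray's or Lightmass's actual algorithm): expensive GI samples are
# computed at sparse points and reused for nearby texels, whereas naive path
# tracing pays the full sampling cost at every single texel.

import math, random

CACHE_RADIUS = 0.5          # reuse records within this distance (made-up value)
cache = []                  # list of (position, irradiance) records

def expensive_gi_sample(pos):
    """Stand-in for tracing hundreds of bounce rays at this point."""
    return random.uniform(0.4, 0.6)

def irradiance_at(pos):
    # Reuse a nearby cached record if one exists...
    for cached_pos, value in cache:
        if math.dist(pos, cached_pos) < CACHE_RADIUS:
            return value
    # ...otherwise pay the full cost once and remember the result.
    value = expensive_gi_sample(pos)
    cache.append((pos, value))
    return value

# 10,000 texels on a 100x100 grid: only a few hundred expensive samples get traced.
texels = [(x * 0.1, y * 0.1) for x in range(100) for y in range(100)]
lightmap = [irradiance_at(p) for p in texels]
print(f"{len(texels)} texels shaded from {len(cache)} expensive GI samples")
```

Real implementations use spatial search structures and smarter error metrics instead of this linear scan, but the sample-reuse idea is the point.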

As I mentioned, the GPU has the same problem right now as hardware physics.

Memory requirements are not necessarily a GPU issue: the rendering solution uses bucket fill to render (to the G-buffer, at my guess), so each rendered bucket could be streamed to the hard drive, using very little video memory.

NVIDIA is the king of CUDA GPU rendering, and this is how it works in theory.

The interesting question for Epic, I guess, is whether Lightmass uses bucket fill or scanline rendering for things like lightmaps. If scanline, there’s no hope of decreasing render times other than network rendering. If bucket fill, you could decrease rendering time by buying a CPU with more cores, but that’s still not even close to the thousands of cores available on a GPU driven by CUDA or OpenCL.
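
For anyone unfamiliar with the term, here is a toy sketch of bucket (tile-based) rendering as described above, with made-up sizes, filename, and a placeholder shading function; it is not how Lightmass actually works. Each finished tile is streamed straight to disk, so only one tile’s worth of output stays in memory at a time (which, note, says nothing about how much of the scene itself must stay resident).

```python
# A toy illustration of "bucket fill" (tile-based) rendering, not how Lightmass
# actually works: the image is rendered one tile at a time and each finished
# tile is streamed to disk, so only one tile's worth of *output* ever sits in
# memory. The scene data itself still has to be accessible to every tile.

import struct

WIDTH, HEIGHT, TILE = 256, 256, 64   # made-up sizes

def shade(x, y):
    """Placeholder shading function standing in for the real light calculation."""
    return (x ^ y) & 0xFF

with open("lightmap_raw.bin", "wb") as out:   # hypothetical output file
    for ty in range(0, HEIGHT, TILE):
        for tx in range(0, WIDTH, TILE):
            # Render one bucket into a small buffer...
            bucket = bytearray(
                shade(x, y)
                for y in range(ty, ty + TILE)
                for x in range(tx, tx + TILE)
            )
            # ...then stream it straight to disk and reuse the buffer.
            out.write(struct.pack("<II", tx, ty))  # tile origin header
            out.write(bucket)
```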

More about CUDA.

That puts a supercomputer within what is now a reasonable price range for the average indie developer’s budget.

So yeah, the tech is there, but until it becomes mainstream and can be bought off the shelf at Best Buy, usability is still an issue.

I don’t think bucket rendering has anything to do with memory: when I render with a bucket renderer it still uses the full amount of memory required. I don’t know of any GPU renderer that uses buckets or scanline, but Redshift has some way of using system memory if the GPU runs out of memory; I’m guessing that lowers performance, though. That’s the only GPU renderer I know of that doesn’t rely solely on GPU memory.

Also, CPU cores and GPU cores are not the same thing; having more GPU cores doesn’t automatically mean more performance than fewer CPU cores.

Memory requirements will remain a big issue until GPU memory becomes scalable across multiple graphics cards, or until a virtual GPU memory system (GPU RAM - CPU RAM - HDD) is built. I have actually had many rendering failures (Blender Cycles) with 20-million-plus triangles on a 980 Ti (6 GB) due to lack of memory.
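
To illustrate what such a virtual GPU memory system would be doing, here is a toy sketch (made-up budget and chunk names, not a real driver or renderer feature): geometry chunks are paged into a small fixed-size GPU budget on demand, evicting the least recently used chunk and falling back to system RAM or disk for the rest.

```python
# A toy sketch of the "virtual GPU memory" idea (GPU RAM -> CPU RAM -> HDD),
# not any real driver or renderer feature: geometry chunks are paged into a
# small fixed-size GPU budget on demand, evicting the least recently used
# chunk when the budget is exceeded.

from collections import OrderedDict

GPU_BUDGET = 3                      # how many chunks fit in "GPU memory" (toy number)
gpu_resident = OrderedDict()        # chunk_id -> data, ordered by last use

def load_from_system_ram_or_disk(chunk_id):
    """Stand-in for the slow path: copy over PCIe, or read from disk."""
    return f"triangles of chunk {chunk_id}"

def get_chunk(chunk_id):
    if chunk_id in gpu_resident:               # fast path: already on the GPU
        gpu_resident.move_to_end(chunk_id)
        return gpu_resident[chunk_id]
    if len(gpu_resident) >= GPU_BUDGET:        # evict the least recently used chunk
        gpu_resident.popitem(last=False)
    gpu_resident[chunk_id] = load_from_system_ram_or_disk(chunk_id)
    return gpu_resident[chunk_id]

# Rays touching chunks in some access pattern; only 3 chunks are ever resident at once.
for chunk in [0, 1, 2, 0, 3, 4, 0, 1]:
    get_chunk(chunk)
print("resident on GPU:", list(gpu_resident))
```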

Tesla… I would say please don’t overestimate Tesla. The fastest GPU rendering is actually driven by GeForce cards. Teslas are targeted at the enterprise market: they have more expensive capacitors and ECC RAM (error correction), but in general they are slower than the best GeForce card (e.g., the 1080). The reason supercomputers use Teslas is that they need to put so many cards (100+) in them, and in that situation the error rate is very important to both system engineers and customers. For indie developers, a 4-way GeForce 1080 setup (I bet the performance is better than the 8 best Teslas) is much more suitable. Tesla also requires a server OS, server motherboard, etc., which means many headaches…

Why can Redshift render giant scenes with its “out of core” technology, then, if memory is such a big problem?
It’s definitely doable, because others are doing it already.
And who says the engine has to run at the same time? Why not shut the engine down and use all the GPU power for baking?
Or use a second GPU for baking…

I have 3 GTX 980s in my PC and I have done heavy scene renders without any problems… for almost 3 years now.
It’s just a question of will.

Easy guide to setting up Swarm: https://iamsparky.wordpress.com/2010/08/24/tutorial-setting-up-swarm-for-multiple-machines/

Redshift is a renderer that does a specific job with its own specific approach. It is a hybrid, but it is still not the be-all and end-all solution, and it has tons of features still missing, plus other limitations for full-blown productions (as is the nature of GPU renderers). This discussion about GPU renderers has been had countless times before, and there are very valid reasons why GPU rendering is not catching on, at least not in the foreseeable future. If it were that simple, all the big studios and all the best renderers such as V-Ray, Arnold, RenderMan, and Hyperion (Disney) would have gone the GPU way, and some of these renderers have recently been rewritten entirely.

Also, GPU is not “cheaper” than CPU: buying a couple of old Xeons with 32-64 GB of RAM to make up a render farm in a small room would easily beat any modern GPU card price-wise, plus give you a solid, tested pipeline without surprises.

Also, putting your entire rendering pipeline at the mercy of NVIDIA and its “super cards” is suicidal; you would be asking for trouble.

You’re upset about waiting 2 days for a render? Welcome to the world of rendering. Some of us coming from the VFX world would be so happy to have just 2 days’ worth of rendering, when we sometimes wait weeks or months for a one-minute scene. :slight_smile:

If there are any improvements to be made to Lightmass, we hope it adopts a more V-Ray-like brute force / light cache hybrid approach for baking GI, versus the old mental ray-style final gather approach. That should usually bake nicer, more accurate GI, perhaps even faster; but then again, games have low-res lightmaps, and at some point that quality-over-resolution trade-off makes it questionable whether it’s worth implementing at all. I think it would only be a benefit, though. Also, by the time Lightmass is updated in the far future, we would probably have dynamic GI performant enough to be production-ready. But that’s just wishful thinking.

What? I think you missed some years, dude…

Redshift is not a hybrid; it’s GPU only.
Also, it’s cheaper to buy some GPUs than to buy a whole new workstation.
GPU renderers are rock solid by now.
Also, V-Ray, Arnold, and RenderMan are already going GPU (partly).
Why is trusting NVIDIA suicidal?? I’ve used NVIDIA cards forever without a single problem.
You wait weeks and months for a render? Maybe you should consider switching from your old Xeons to GPU? You know, time is money.

Before, I had a 15-PC render farm; now I render on 2 PCs with 3 NVIDIA cards each, and it’s a lot faster.