What is better for GI, Lightmass or Light Propagation Volumes? And what should I choose for the best results in an environment with foliage?
Lightmass will give you more realistic results, but you have to spend the time building your lighting, and if you're using a lot of dynamic objects then those can't use Lightmass anyway. And if you have a large number of objects then you need a lot of memory to run Lightmass.
So please correct me if I'm wrong: Lightmass is the best solution for static objects. And what about LPV? Is it for dynamic objects, or is it another calculation method for GI?
Dynamic objects primarily.
LPV is still a work in progress, has its limitations, and lacks some of the precision of the baked Lightmass process.
When it comes to foliage, Lightmass is not used at all. Foliage does not use any form of baked lighting. I am not sure how foliage handles the new LPV system.
I'm thinking about foliage and trees. I'm a little confused about what is best for me. I'm working on shaders and lighting, playing with lights, but some of my work doesn't satisfy me. I have some experience with CryEngine 3 and I'm looking for the best solution to achieve the same result.
This is actually not true. If you read the Lionhead article, they mention that dynamic GI can be more realistic because you can make use of reflections. You can't do that when it's all precomputed.
If you mean reflections like metal, then you're right, precomputed lighting does not take that into account. If they wanted precomputed lighting to account for it, they could add that in; it's not a technical limitation. In any case, LPV GI accuracy and quality is so poor that it hardly matters. LPV is currently the lowest-quality dynamic GI option available, so if you don't absolutely have to use dynamic lighting, avoid using LPV.
As for alternatives, UE4 uses light environments which sample the area that a dynamic object is in (mainly for characters) and uses that along with the dynamic shadows to simulate GI lighting. It's not very accurate, but it's fast and works pretty well. The other option is to place lights in ways that get you similar effects where you need them.
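To make the "light environment" idea concrete, here is a minimal sketch (illustrative Python, not UE4's actual API or data layout): indirect lighting is stored at points of a grid, and a dynamic object samples it at its own position by trilinear interpolation, then lights itself with the interpolated value.

```python
# Hypothetical sketch of a light-environment lookup: sample a
# precomputed indirect-lighting grid at a dynamic object's position.
# The grid layout and names are assumptions for illustration only.

def lerp(a, b, t):
    return a + (b - a) * t

def sample_light_grid(grid, cell_size, pos):
    """Trilinearly interpolate an irradiance value stored at grid corners.

    grid: dict mapping integer (ix, iy, iz) -> scalar irradiance
    cell_size: world-space size of one grid cell
    pos: (x, y, z) world position of the dynamic object
    """
    fx, fy, fz = (p / cell_size for p in pos)
    ix, iy, iz = int(fx), int(fy), int(fz)
    tx, ty, tz = fx - ix, fy - iy, fz - iz

    def g(dx, dy, dz):
        # Missing samples default to black (no indirect light).
        return grid.get((ix + dx, iy + dy, iz + dz), 0.0)

    # Interpolate along x, then y, then z.
    c00 = lerp(g(0, 0, 0), g(1, 0, 0), tx)
    c10 = lerp(g(0, 1, 0), g(1, 1, 0), tx)
    c01 = lerp(g(0, 0, 1), g(1, 0, 1), tx)
    c11 = lerp(g(0, 1, 1), g(1, 1, 1), tx)
    c0 = lerp(c00, c10, ty)
    c1 = lerp(c01, c11, ty)
    return lerp(c0, c1, tz)

# One bright corner; an object halfway across the cell gets half the light:
grid = {(0, 0, 0): 1.0}
print(sample_light_grid(grid, 100.0, (50.0, 0.0, 0.0)))  # 0.5
```

The whole object is lit by one sampled value, which is why the result is fast but not very accurate: there is no self-shadowing or per-surface variation beyond what dynamic shadows add on top.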
LPV is also the only realtime GI solution that actually runs fast enough to use in any decent-sized environment, so let's hope Epic and Lionhead can get it working well enough. With Lightmass requiring bake time and being completely static, those are your two options for UE4 right now: have completely static levels with almost completely static lighting and solid-quality GI using Lightmass, or use LPV and hope its quality becomes acceptable over time.
Not the best options around certainly.
With baked lighting we get reflections through the Reflection Environment feature: the scene is captured at different points and then reprojected onto the reflection rays of the view being rendered. The difference is that precomputed GI through Lightmass doesn't provide the reflections itself, so another method is needed. This doesn't mean that precomputed lighting can't have reflections. Screen-space reflections also layer on top where valid.
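The core of that reprojection is just reflecting the view direction about the surface normal and looking up the captured environment along that direction. A tiny sketch of the reflection step (illustrative Python, not engine or shader code):

```python
# Reflect a view direction about a surface normal: r = d - 2(d.n)n.
# This is the direction used to look up the captured environment
# (the probe-selection and cubemap lookup are omitted here).

def reflect(d, n):
    """Reflect direction d about unit normal n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A view ray hitting a floor (normal straight up) bounces straight up:
r = reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))
print(r)  # (0.0, 0.0, 1.0)
```

The engine then samples whichever nearby capture best covers that ray, which is why reflections work even though the diffuse GI itself is baked.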
It’s true that you need to have mostly static levels with precomputed GI, but it doesn’t have to be completely static. You can mix dynamic objects in with the static objects and their lighting will integrate. You can use World Position Offset on the static objects to animate them without changing the lighting. You can use Cascaded Shadow Maps on Stationary Directional lights (Set Dynamic Shadow Distance to > 0) to get dynamic shadows from the sun around the player, and it fades to precomputed shadows in the distance. This can be very effective at getting the visual benefits of moving foliage without the full performance cost - you are only paying the GPU and CPU cost of dynamic shadows for the nearby foliage. We used this in a bunch of Gears of War 3 levels with dynamic foliage. The same features work in UE4.
Oh certainly you can still produce a game with it, it's just personally disappointing to see all the power of PS4/Xbox One/PC go towards yet another upgrade of static environmental effects, when faster iteration times (no light baking) and more dynamic gameplay (especially with the popularity of open/dynamic worlds for triple-A games) could have been areas of concentration instead.
I was really impressed by the initial voxel cone tracing papers and demos done for UE4 when the Elemental demo was the vision for the engine. High bandwidth to a very limited amount of RAM on the Xbox One and a lot of other concerns, from thin light blockers to transparencies, are understandable reasons why it was abandoned for something more practical. But the goals were really what impressed me the most.
Here are my grass and foliage shaders.
I have three shading variants:
- Edit Normals with SSS
- Only SSS
- Without Tangent Space Normals and SSS
I hear you. The main problem for us is that it’s difficult to make a general purpose, high quality and high performance dynamic GI method. It’s much easier to make something targeted at a certain game where you can take advantage of the game’s constraints (short view distance, or always indoors, or always outdoors, etc). That’s why there are a lot of dynamic GI methods kicking around and being shipped by other games, but they have very specific usages. Our main attempt to make something general purpose (SVOGI) didn’t scale down or achieve quality goals.
Also, Xbox One doesn’t have a lot of GPU performance to go around. PS4 and PC do allow a lot of exciting new methods because they have the teraflops and bandwidth to back it up.
Here’s some new progress on the dynamic lighting direction:
Change 2093556 on 2014/06/03 15:49:52
Distance field AO using a surface cache
* Provides mid-range stable AO for dynamic rigid meshes
* Movable sky lights are now supported, and distance field AO is used for shadowing Movable sky lighting from dynamic scenes
* Static meshes are preprocessed into signed distance field volumes at mesh build time when the r.AllowMeshDistanceFieldRepresentations project setting is enabled
* Non-uniform scaling does not work with this method (mirroring is fine), animating through world position offset also causes artifacts as the two representations diverge
* Occlusion is computed along cones to reduce over-shadowing
* Object distance fields are operated on directly (no hierarchy) to obtain enough resolution to prevent leaking
* Visibility traces are done with cone stepping to allow better parallelization
* Shading is done adaptively in space and time using the surface cache
Now this is very early, it’s barely working, and it only provides AO from moving geometry, not diffuse inter-reflectance. But it’s a step in that direction.
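The cone-tracing step in that changelist can be sketched numerically: march along a cone's axis, query the signed distance field at each step, and wherever the nearest surface comes closer than the cone's radius at that step, accumulate occlusion. This is an illustrative Python sketch against an analytic sphere SDF; the step count, cone angle, and occlusion estimate are assumptions for the example, not the engine's actual algorithm.

```python
# Cone-traced AO sketch against a signed distance field (SDF).
import math

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere surface (negative inside)."""
    return math.dist(p, center) - radius

def cone_occlusion(origin, direction, sdf, cone_half_angle, max_dist, steps=16):
    """March along a cone; return occlusion in [0, 1] (1 = fully blocked)."""
    occlusion = 0.0
    step = max_dist / steps
    t = step
    while t < max_dist:
        p = tuple(o + di * t for o, di in zip(origin, direction))
        dist = sdf(p)
        cone_radius = t * math.tan(cone_half_angle)
        if dist < cone_radius:
            # Fraction of the cone cross-section covered by the occluder.
            occlusion = max(occlusion, min(1.0, 1.0 - dist / cone_radius))
        t += step
    return occlusion

# A sphere sits in front of one cone but not the other:
sdf = lambda p: sphere_sdf(p, (0.0, 0.0, 5.0), 1.0)
blocked = cone_occlusion((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sdf,
                         math.radians(10.0), 20.0)
open_sky = cone_occlusion((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), sdf,
                          math.radians(10.0), 20.0)
print(blocked, open_sky)  # 1.0 0.0
```

Tracing a few wide cones instead of many rays is what keeps this cheap enough for realtime, at the cost of the soft, conservative occlusion the changelist notes ("reduce over-shadowing").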
Oh I know, I’ve worked on GI myself. It’s nice to dream though. Maybe some day soon there’ll be enough clever people battering away at voxel cone tracing to get it to work for a general solution. It’s good to see that AO stuff though, sounds pretty clever and I’m looking forward to seeing how it works!