yea i agree, workflow is clunky, but it’s at least an option to consider.
No, the problem is a workflow one. You cannot specify, without writing code, how and when to update reflection captures, and you have no clear way to choose which illumination condition to sample using a GUI.
This means building things empirically, doing operations in series one step at a time, as there’s no official workflow.
So yes, we have the feature, but it’s not really that usable.
It needs an official panel in the sphere capture actor’s Details with a workflow users can interact with; otherwise it will be strictly limited to scripting/coding, and since it’s a visual feature, that’s not the best fit.
Even if building an automated placement system would be very difficult, exposing the creation of new reflection captures via Blueprints and C++ with parameters such as refresh rate would be the only change necessary to the engine, so it’s a possibility.
Besides, what is the difference between CryEngine’s implementation and Unreal’s that makes the latter’s workflow so clunky, assuming the former’s is not? Because if it’s clunky too, how was it chosen as the primary GI system of a AAA engine?
Also, here is another way of using cubemaps to obtain global illumination: CiteSeerX — Cubemap data structure for interactive global illumination computation in dynamic diffuse environments. While that document is outdated and the technique slower than the one we were talking about (it places cubemaps evenly across the scene, whereas as I understand it the current one captures information around areas with direct illumination), it’s worth reading — unless both are the same and I’m wrong.
UE4 already has pre-convolved cubemaps for GI; that’s part of what static GI is in the first place. And the entire point of other “GI” solutions is to get it to work in realtime. You can do pre-placed cubemaps and re-light them in realtime if you’re quite clever and willing to make tradeoffs, but you can forget about rendering entire cubemaps in realtime unless you’re only doing one.
Besides, cubemaps alone for GI have a ton of problems with lightleak and placement and parallax and etc. etc. that ideally, a better realtime GI solution wouldn’t have.
Both of these ideas would be very useful for reflection mapping, especially the second one, since it would make reflections much more believable. Using them for GI would require very few changes: basically just applying reflection maps with the direct lighting to all objects instead of only the reflective ones, and blurring them or using a lower-resolution mip map for the rough objects.
But I still don’t understand how expensive cubemaps are. I thought they were cheaper, because they are hardware accelerated; after all, the 13-year-old technique from my previous post uses them semi-dynamically.
They can be rendered at a much lower resolution, as stated earlier, only when the scene changes and spread across frames, with a much lower-polygon LOD; they could even do without bilinear filtering and maybe, just maybe, without any texture if the direct lighting is painted onto the polygons.
And again, why can CryEngine use it, even if only for static lighting? Which I have my doubts about, since it still has a dynamic day/night cycle and supports dynamic destruction and geometry caches (I don’t know if both are the same for the renderer; maybe geometry caches can use some sort of precomputation, since a cache is a predefined playback of geometry across a number of frames).
It seems meshes painted with the foliage tool aren’t affected by DFGI (even when the instance option for DF is checked) — anyone else have the same problem? (In-editor everything looks fine, but upon “play” only manually placed meshes retain the correct lighting.) Can’t see why this ought to be a problem & want to avoid manually planting a forest…
Ah, that would explain it… any clue as to what & how to edit the necessaries? I’m currently working on a purely cinematic project, & the effect of DFGI in-editor, even up close, is working visually quite well in a forest scene.
edit: I’m quite new to UE, so very much open for a better recommendation of how to use DFGI/GI in general.
This doesn’t even make sense. DFGI is supposed to be used close up. It’s GI. It makes more of a difference up close than it will at a distance. Also, Epic used it in their Kite demo and it affected the trees. I hope they make it affect foliage (with options to disable it on foliage for those that need it) because it’s kinda pointless for some types of projects otherwise. Not all of us want or have the skill to compile from source and make the edits that are needed to do this.
That is most likely the case.
Here is the code I was referring to. But as stated, it might not be causing the problem described.
//@todo - take the settings from a UFoliageType object. For now, disable distance field lighting on grass so we don’t hitch.
component->bAffectDistanceFieldLighting = false;
Also, if I understand you correctly, would changing that quoted value above to “= true” make a difference? (in the source code I’m guessing? not entirely a stranger to compiling, but no coder either) Also, is it the DFSS that’s missing here, rather than the GI?
Skylight is moveable & changing intensity upwards only makes the “miscolouring” worse. Here’s a screenshot (from in “play”) showing a manually placed tree on the right vs. one placed with the foliage tool on the left. As you can see the DF lighting seems to work just fine as long as the meshes in question aren’t placed with the foliage tool (it could be I’ve missed something essential though).
I didn’t see GI from your image, but it also looks good. I built the master branch today and tested a scene with DFGI, but found some problems. There is GI indeed, but it’s too noisy.
Can anyone else reproduce that DF shadowing breaks when play-previewing meshes placed (with instance setting for DF checked) with the foliage tool? I’m totally bamboozled by this issue now…
edit: it’s a bug, so if anyone else experiences it know that it will be fixed at some point.
I wonder if you could share what you are currently working on in regards to DF AO/GI? Like noise reduction or general performance optimization? Keep up the good work!
GPU management of the distance field objects. Instead of the rendering thread uploading all the distance field objects every frame, the GPU manages adds / removes / updates. This allows many thousands of dynamic objects to exist in the scene and was key to supporting the scale of the GDC kite demo.
Support for instanced static meshes. The above GPU object management was necessary for this to be reasonably performant with trees. There were something like 2 million trees; these are culled down to just the ones near the camera very quickly.
Heightfield occlusion - provides DFAO from Landscapes
Distance Field GI prototype - Surfels (oriented disks) are placed on objects, lit by lights each frame to compute direct lighting (shadowing provided by distance field ray tracing), then compute GI transfer to pixels on your screen from all the nearby disks. Shadowing is provided from the same distance field cone traces as DFAO. Bounce distance is limited to ~10m, there’s some leaking, and it’s about 4x slower than it needs to be to run on mid spec PC / consoles. This was intended to be a general purpose GI method for the GDC open world demo that works on any static meshes but it wasn’t coming together fast enough so I shelved it in favor of heightfield GI.
Heightfield GI - provides GI from Landscapes, with a bounce distance of about 90m. Costs 2.6ms on 980GTX. This was done by creating a GBuffer atlas of the landscape components (diffuse, height, normal) then computing direct lighting for that, then computing the indirect lighting transfer to the pixels on your screen. It integrates with the DFAO pipeline to make use of adaptive shading and DF local occlusion. This is a pretty good feature for open world games and can still be optimized a lot (it was made in 2 weeks).
Then I took a month or so break in there to recover =)
Right now I’m revisiting DFAO, trying to improve the quality and performance so that it stands alone and can be used on consoles. I made 4x performance gains over the worst case by compositing all the per-object distance fields into a global one stored in clipmaps that follow the viewer and are updated incrementally. Then cone tracing only has to go through one volume texture instead of ~150. The quality is basically the same - I use the per-object distance fields for the beginning of a shadow cone trace where the self-shadowing is so crucial and the global DF for further out.
I also switched to computing DFAO at a fixed resolution based on the screen instead of the Irradiance Cache-like algorithm I was using before that did adaptive shading. The problem with adaptive shading is that it only helps in the best case - the worst case like foliage still has to do the cone traces at the high resolution, which kill performance. The result is more stable performance and less noise / splotchiness / shifting.
Once that is wrapped up, I plan to return to DFGI and fix up the major issues with it:
Multi-bounce GI. I want to solve this by computing shadowing in the hemisphere of a Surfel by cone tracing the distance fields, then storing off that directional shadowing representation. That will be used to shadow sky lighting (which will become indirect sky lighting) and direct lighting from other Surfels (which will become second-bounce indirect lighting). This will be much faster now that I have a global distance field to sample.
Doesn’t work indoors due to over-occlusion. I want to solve this by combining the two shadow depth visibility functions - one from the pixel being lit, the other from the Surfel. Each visibility function has good quality near the point it is representing, but low quality elsewhere, so combined it makes the best of both worlds.
Insufficient bounce distance. With the combination of shadowing from both the receiver and the Surfels I can double the bounce distance without introducing a bunch more leaking.
Performance cost too high. Right now the Surfel lighting is very naive, culling only happens on an object granularity. I want to generate the Surfel tree using clustering so it is not object-based (light cuts), then I should get a big speedup.
Bright singularity artifacts. Having faster Surfel lighting via the better Surfel tree should hopefully allow the 4x higher Surfel density that is needed to improve these.
Anyway, that’s the plan. We’ll see how it actually pans out.