Confused about different shadow type settings for lights

So I've been studying light settings for the last two days and still need some things cleared up. In a directional light there seem to be 3 main settings for shadows:

Shadow resolution scale
Located under the main Light section, this appears to fix issues I was having with smaller objects' shadows popping (LOD-style) when zooming in and out. However, I am not sure what type of shadow this affects or how it is calculated. I have found information in the docs for cascaded shadows and distance field shadows but not for this default… shadow shadows?

Distance field shadow distance, trace distance, etc.
So I understand how distance field shadows work and how you need to enable distance field generation first. That all makes sense, however I'm unsure whether this works in addition to the above setting and in addition to cascaded shadows, since you seem to be able to set them all at once. Does this setting override other shadows, or are they all used at once for different LODs/distances?

Cascaded shadow settings
I understand how these work: basically layers of shadow rendering at different resolutions for different distances. As above, however, I am unsure what overlap this has with the other shadow settings. Do distance fields, cascades, and the "shadow shadows" all exist at the same time? Should I manually disable some other settings when using this one?

I'm just working out which is going to be better for quality/performance and would like a better understanding of how it all works so testing goes smoother. Thanks for any help!

For shadow resolution scale: there is no such thing as a "default shadow". This simply multiplies the resolution of dynamic and stationary shadow maps; it has no effect on static (baked) shadows, as that isn't how they work.
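If you ever want to poke it from code rather than the Details panel, it's just a float on the light component. Rough untested sketch ("DirLight" is a placeholder for however you reference your directional light):

// Untested sketch: bumping Shadow Resolution Scale from C++.
#include "Engine/DirectionalLight.h"
#include "Components/LightComponent.h"

void BoostShadowResolution(ADirectionalLight* DirLight)
{
    if (ULightComponent* LightComp = DirLight ? DirLight->GetLightComponent() : nullptr)
    {
        // Multiplies the resolution of dynamic/stationary shadow maps for this light.
        // Has no effect on baked (static) shadows.
        LightComp->ShadowResolutionScale = 2.0f;
        LightComp->MarkRenderStateDirty(); // make the renderer pick up the change
    }
}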

For distance field shadows: not entirely sure what your question means, but if you're asking whether they work only at a long distance, being replaced by normal dynamic shadows up close, the answer to that is: I wish. Also, static and distance field shadows are not affected by shadow resolution scale.

Cascaded shadows: there is no such thing as a "shadow shadow", nor has anyone said anything like that. Cascaded shadows don't magically turn cascades on/off, and combining distance field shadows with cascaded shadows only matters if you can actually see that far.

Tips

Replace your directional light with a stationary one if you don't have a moving sun and enable CSM on it, with a lower distance and possibly a lower cascade count (rough sketch below). You will get both better FPS and better quality/shadow distance.
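Untested sketch of the properties behind those Details-panel settings ("Sun" is a placeholder for the directional light's component; a stationary light still needs a lighting build for its baked contribution):

// Untested sketch: a stationary sun with tighter CSM settings.
#include "Components/DirectionalLightComponent.h"

void ConfigureStationarySun(UDirectionalLightComponent* Sun)
{
    if (!Sun) return;

    Sun->SetMobility(EComponentMobility::Stationary);

    // CSM only covers this range in front of the camera (in cm);
    // a shorter range concentrates the shadow-map resolution where the scene actually is.
    Sun->DynamicShadowDistanceStationaryLight = 3000.0f;
    Sun->DynamicShadowCascades = 2; // fewer cascades = cheaper
    Sun->MarkRenderStateDirty();
}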

Don't even bother with distance fields unless you absolutely NEED them for A: distance-to-nearest-surface material effects, B: distance field AO (only for procedurally generated worlds; otherwise just use static lighting), or C: a LOT of moving shadowed lights, and I mean a LOT (10+ at once).

Don't use dynamic shadows if the light doesn't move; use stationary instead. Stationary doesn't look quite as good and only up to 4 of them can overlap (excluding the skylight, including the directional light), but it runs a LOT faster and works correctly with reflection captures.

Don't use stationary shadows for rect lights, lights with a high light radius, or lights that don't need the detail, like small or far-away lights. Your graphics will look decently better this way and it will also save an absolute TON of FPS.

Use reflection captures. Seriously, use them. Unless your game has zero specular at all (specular manually set to 0 on ALL materials; don't do this unless you're really going for a stylised look without reflections) or you're making a procedurally generated game, there is no reason not to use reflection captures.
In the case of procedural games, look into SSR and update your skylight cubemap every few seconds at a low resolution (rough sketch below).
In the case of stylised games without any reflections, delete all your reflection captures and turn your SSR off.
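By "update your skylight cubemap" I mean something along these lines, give or take (untested sketch; the class name, SkyLightComponent pointer and SkyRefreshHandle are placeholder members you'd declare yourself, a USkyLightComponent* and an FTimerHandle):

// Untested sketch: re-capture the skylight every few seconds for a procedural level.
#include "Components/SkyLightComponent.h"
#include "TimerManager.h"

void AMyProceduralGameMode::StartSkyRefresh()
{
    // Re-capture every 5 seconds, looping.
    GetWorld()->GetTimerManager().SetTimer(
        SkyRefreshHandle, this, &AMyProceduralGameMode::RefreshSky, 5.0f, true);
}

void AMyProceduralGameMode::RefreshSky()
{
    if (SkyLightComponent)
    {
        // Re-renders the skylight cubemap so ambient lighting matches the generated geometry.
        SkyLightComponent->RecaptureSky();
    }
}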

Make your reflection captures fit your room, or place a large one in the center of a body of water, and don't add many per surface; just have a few larger ones. Also, exclude furniture in the center of rooms from the reflection capture or it will look just… wrong.

This is all I know about lighting, boiled down to one wall of text.

Thanks for the detailed information and tips! Yeah, the reason I was saying "shadow shadows" or "default" is that I wasn't sure what to call shadows that weren't either cascaded shadows or distance fields. I guess just dynamic shadows? It's confusing, as the only page I could find in the documentation specifically mentioning dynamic shadows is actually talking about cascaded shadows (Content Examples Sample Project for Unreal Engine | Unreal Engine 5.2 Documentation).

Either way I’ll try some of your tips and mess around more to see how it goes. For context here is what I have been playing with:

I've set up this scene just to test different shadow settings and see their results. Around 1000 jigsaw pieces are created at runtime with a procedural mesh (combined into one mesh). The final play area will be fairly small, just a table in a room. Each piece is currently 3 cm in size, so I'm only focused on lighting a small area with small objects. Having shadows not pop or change with zoom level is a high priority, since zooming in and out of the area is going to be the main play.

I’ve also designed a way to create custom normal or height maps for a beveled edge around the pieces at runtime and will be testing different texture methods for the best effect.

As I said changing “shadow resolution scale” does fix the pop issue but I’m just learning and trying everything to aim for the best performance/quality. I’ll try some of your tips and see how it goes.

Cascaded shadow maps are susceptible to popping (really a non-gradual transition between cascades), depending on how they're set. In the video posted it appears somewhat gradual, but the quality when zoomed out is really blurred and I'm not sure if that's intended. It's probably not a big deal for a puzzle game. However, expanding the CSM distance a bit and extending the transition zones might make it less noticeable or less frequent (see the sketch below). It depends on the size of the room and the camera used for in-game views, though.
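For reference, the settings I mean live on the directional light component; a rough, untested sketch of adjusting them from C++ (property names from memory, "Sun" is a placeholder pointer, so double-check against your engine version):

// Untested sketch: the CSM distance / transition settings mentioned above.
#include "Components/DirectionalLightComponent.h"

void WidenCsmTransitions(UDirectionalLightComponent* Sun)
{
    if (!Sun) return;

    // How far from the camera CSM reaches for a movable light, in cm.
    Sun->DynamicShadowDistanceMovableLight = 5000.0f;

    // Fraction of each cascade used to blend into the next one;
    // larger values make the cascade hand-off more gradual (and a bit softer).
    Sun->CascadeTransitionFraction = 0.2f;

    // How cascade resolution is distributed toward the camera (higher = more detail up close).
    Sun->CascadeDistributionExponent = 3.0f;

    Sun->MarkRenderStateDirty();
}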

Distance fields are intended for close-up areas as well as larger areas where the camera is further from objects / buildings / the target. There are also two-sided distance fields, which I think are for getting more detail with certain mesh designs, but I'm not sure what else. From what I recall reading in the docs, there's a resolution setting for distance fields too, in the mesh editor for any mesh. There seem to be overlap issues between CSM and distance fields depending on how each is set on the directional light and the meshes. The question I have is: which takes priority, and if they're mixed, how are they blended?

Great, these are awesome tips! I love it, thanks!

In that case, yeah, distance field shadows all the way, with a decently high-res distance field. If you have 1000 movable shadow casters you're going to need distance fields, or a LOT of spare FPS.

For example, Fortnite uses a lot of ray-traced distance field shadows. I heard that cascaded shadow maps are rendered at a maximum 60-meter radius around the camera; after that it switches to DF shadows.
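If it helps, both sides of that hand-off are just distances on the directional light component. Untested sketch (property names from memory, "Sun" is a placeholder pointer):

// Untested sketch: the two distances that control the CSM -> distance field hand-off.
#include "Components/DirectionalLightComponent.h"

void ConfigureShadowHandoff(UDirectionalLightComponent* Sun)
{
    if (!Sun) return;

    // Needs "Generate Mesh Distance Fields" enabled in Project Settings > Rendering.
    Sun->bUseRayTracedDistanceFieldShadows = true;

    // CSM covers roughly this range in front of the camera (cm)...
    Sun->DynamicShadowDistanceMovableLight = 6000.0f;

    // ...and distance field shadows take over from there out to this distance.
    Sun->DistanceFieldShadowDistance = 30000.0f;

    Sun->MarkRenderStateDirty();
}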

Thanks. By the way, I'm not sure if it matters for shadows, but all the pieces are one mesh. When you grab one it's separated from the mesh, and when you place it and it's settled it gets put back into the mesh. Drawing 1000 unique separate meshes would be unfeasible.

I'm not sure that's possible, but it does seem pretty useful. Could you tell me more about separating a piece from the main mesh, if you already know how? It seems potentially incredibly useful and I haven't seen others cover it yet.

I was thinking that making them all one mesh would be the most practical and efficient… especially for distance fields. A mesh's distance field can also be replaced with another mesh's distance field, so it could be possible to simplify the puzzle-pieces mesh further in case it's taking up too much memory or something. It could be a fallback in that and other scenarios.

Sure. I'm using RuntimeMeshComponent, which is kind of a modified version of UProceduralMeshComponent, to create the mesh with all the pieces. I think either can be used. Pieces are drawn with points based on generated Bezier curves and then filled in with triangles, in a process that took way too long for someone not good at math :). You have to create UV maps based on the shapes too…

Save the entire array of points in a variable and save the array of points for each piece in a struct.

During creation the polygon face IDs will be created in a certain order, and you can save that to an array (or whatever) that points to a struct array with all the information for the points of one piece. Then, when the user clicks, do a trace with the following to get the face ID of the poly being clicked on:


// Trace from the click's world-space position along its direction to find the clicked polygon.
FCollisionQueryParams TraceParams(TraceTag, true, ActorsToIgnore[0]);
TraceParams.TraceTag = TraceTag;
TraceParams.bTraceComplex = false;
TraceParams.bReturnFaceIndex = true;           // ask the trace to fill in FHitResult::FaceIndex
TraceParams.AddIgnoredActors(ActorsToIgnore);

FHitResult hitObject;                          // receives the hit, including FaceIndex
FVector startLocation = WorldLocationReturned; // world-space position of the click
FVector endLocation = startLocation + (WorldDirectionReturned * traceDistance);
World->LineTraceSingleByChannel(hitObject, startLocation, endLocation, ECollisionChannel::ECC_Camera, TraceParams);

Take the array you saved for the large mesh with all the pieces and create a new one with the indices of the piece you clicked on removed. Send that to RuntimeMesh to redraw the main mesh with that piece missing (stripped-down sketch below). Then draw a single piece under the mouse position and make it draggable, add physics, etc.
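To make that rebuild step concrete, here's roughly what it looks like with a plain UProceduralMeshComponent (my real code uses RuntimeMeshComponent and is messier; all names here are placeholders, and I'm assuming you kept a per-triangle "which piece owns this" array from generation):

// Stripped-down sketch (placeholder names): rebuild the combined mesh's index buffer
// without the triangles that belong to one piece, then recreate the section.
#include "ProceduralMeshComponent.h"

void RebuildWithoutPiece(UProceduralMeshComponent* PuzzleMesh,
                         const TArray<FVector>& AllVertices,
                         const TArray<FVector>& AllNormals,
                         const TArray<FVector2D>& AllUVs,
                         const TArray<int32>& AllTriangles,    // 3 indices per triangle
                         const TArray<int32>& TriangleToPiece, // piece id per triangle, saved during generation
                         int32 PieceToRemove)
{
    TArray<int32> Triangles;
    Triangles.Reserve(AllTriangles.Num());

    // Copy every triangle except the ones owned by the removed piece.
    for (int32 Tri = 0; Tri < TriangleToPiece.Num(); Tri++)
    {
        if (TriangleToPiece[Tri] != PieceToRemove)
        {
            Triangles.Add(AllTriangles[Tri * 3 + 0]);
            Triangles.Add(AllTriangles[Tri * 3 + 1]);
            Triangles.Add(AllTriangles[Tri * 3 + 2]);
        }
    }

    // Topology changed, so recreate the section rather than updating it in place.
    PuzzleMesh->ClearMeshSection(0);
    PuzzleMesh->CreateMeshSection_LinearColor(
        0, AllVertices, Triangles, AllNormals, AllUVs,
        TArray<FLinearColor>(), TArray<FProcMeshTangent>(), /*bCreateCollision=*/true);
}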

When you let go, use some maths on the vertices to align them back into the big mesh:


// Convert the dropped piece's vertices into the combined mesh's local space:
// transform each vertex to world space, then subtract the combined-mesh actor's location.
for (int32 vertCounter = 0; vertCounter < verticesSent.Num(); vertCounter++) {
       const FVector WorldSpaceVertexLocation = transformSent.TransformPosition(verticesSent[vertCounter].Position);
       vertices.Add(WorldSpaceVertexLocation - GetActorLocation());
}

Take the array for the large mesh, add those vertices back, then send it to RuntimeMesh again.

That's the very basic rundown of what I'm doing.

Can someone explain what causes this popping here?

It seems to happen no matter what "shadow resolution scale" is set to (in this example it's 1 million), and it doesn't matter whether ray-traced distance fields are turned on or not. Either way it looks exactly like that.

It could be the CSM transition between cascades. It's tied to the camera frustum and distance from the camera, so as the frustum (the invisible box extending from the camera in its view direction) pivots up in the video, the shadow ends up at a different distance within the frustum, and at a certain point it transitions between cascades according to the CSM settings. The larger the transition zone, the less noticeable the transition from one shadow cascade (or quality level) to the next, up to a point, if I remember correctly. Re-read the CSM doc page and it might help you work out how to set CSM so it's much less noticeable. The other possible cause is that it's changing from a CSM shadow to a distance field shadow; I don't know how to fix that.

What is that shadow casting object? If you type FreezeRendering in the console when the shadow gets cut off and move the camera, does whatever that object is show the same hard edge?

Yes, it’s an edge puzzle piece and it should have a hard edge on the shadow:


The pieces are not moving at all currently, but they are created dynamically at runtime, so they need to use dynamic shadows.

If the shadow is meant to have hard edges and the pieces are created at runtime, it could be an issue with their bounding boxes, with the mesh switching between cascades based on the size of the box. When you spawn them, could you set the bounding box size larger (individually or as a whole) as a test?

Edit: You could also enable the visualizer for bounding boxes in the viewport to verify
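If you're spawning the component from C++, one quick way to run that test is just inflating its render bounds; rough untested sketch ("PuzzleMeshComponent" is a placeholder for your procedural/runtime mesh component):

// Untested sketch: inflate the render bounds of the mesh component as a test,
// so cascade selection and culling use a larger box than the geometry itself.
#include "Components/PrimitiveComponent.h"

void InflateBoundsForTest(UPrimitiveComponent* PuzzleMeshComponent)
{
    if (PuzzleMeshComponent)
    {
        // 2x the computed bounds; purely a render-side scale, collision is unaffected.
        PuzzleMeshComponent->SetBoundsScale(2.0f);
    }
}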

In this case, dynamic shadows (no distance fields) are your best option.

Oh, that sounds like a good idea. I didn't realize the bounding box made a difference for shadows. That's a part I haven't gotten to yet, so I'll look into how it's created. Currently there are either no collision bounds or they're the same shape as the mesh. I'm not sure if you can visualize them for procedural meshes; there seem to be a lot of visualization tools that don't work with them.

The bounding box isn't the collision bounds; it's part of the rendering of the mesh. If the bounding box is smaller than the actual mesh, then pixels of the mesh fall outside the bounding box and are not rendered correctly, or not rendered at all. That extends to other aspects of its rendering, such as materials and shadows, and can result in blinking, jittering, and other artefacts such as shadow problems. That's one problem that can occur with bounding boxes, and there's a visualizer for "Out of Bounds Pixels" under the Show button. It's separate from the bounding box visualizer.