I am “making things harder” because I am trying to improve performance so my game can run on lower-end hardware. Having things cast shadows costs an extra draw call per object, because they are rendered into the shadow depths every frame. If we stop rendering them every frame, we save a lot of draw calls.
Thanks, the per object shadow map thing could potentially be useful.
I already have a scene capture set up to capture scene depth at the appropriate times. My issue is getting Unreal to use those shadow depths properly.
Creating static shadow-maps at runtime. Possible? - #11 by somawheels: this post shows where I am at. My shadows don't look the same as Unreal's, they have bias issues, and the blur method I am using is too expensive for my liking.
You can look through the UE4 papers & presentations to see if they have implementation details in there.
Also, I don’t think you need to 100% match Unreal’s shadows: most/all of the shadows are going to be coming from your method, so there won’t be many Unreal shadows to cause discrepancies.
Dynamic lights do not incur an extra draw call for static and stationary objects, only movable ones. You have literally just made a dynamic light, except slower in every way because it goes through Blueprint and a SceneCapture.
Edit: Corrected myself after testing with draw calls
Like I said several times, that’s not the way to make performance any “better” either.
With RTs and captures you have some chance of achieving something close to a shadow bake without having to shadow bake.
With a material you don’t. You may as well rely on DFAO shadows.
It’s cheaper if you build properly for it.
More tris on screen = less performance. Always.
This whole conversation is ridiculous. Literally all the time and effort put into this could go into the actual game optimization any title using UE4 should be doing, and that would gain far more performance than this ever will.
It might be a neat exercise for understanding how shadows are generated, but that’s the only utility this is going to provide.
All of the objects are movable because they’re procedurally generated at runtime:
What he’s trying to do, put simply (which he already has working):
- Disable shadow casting on all lights (i.e. disable shadow rendering)
- Capture scene depth (single channel, no color) only once per minute
- Sample that texture in the directional light’s material function to test whether to darken a pixel (i.e. in shadow)
This is more performant than rendering dynamic shadows each frame. If only one capture happens for the entire duration of the game, the system would only be reading a texture in the material function and nothing else.
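Roughly, the capture side amounts to something like this in C++ terms (just a sketch; the class and variable names here are illustrative, this isn't his actual Blueprint). The point is that `bCaptureEveryFrame` is off and `CaptureScene()` only runs from a timer:

```cpp
// ShadowDepthCapture.h -- illustrative sketch only, names are hypothetical.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "TimerManager.h"
#include "ShadowDepthCapture.generated.h"

UCLASS()
class AShadowDepthCapture : public AActor
{
    GENERATED_BODY()

public:
    AShadowDepthCapture()
    {
        Capture = CreateDefaultSubobject<USceneCaptureComponent2D>(TEXT("DepthCapture"));
        RootComponent = Capture;

        // Orthographic capture of scene depth only, aimed along the light direction.
        Capture->ProjectionType = ECameraProjectionMode::Orthographic;
        Capture->CaptureSource  = ESceneCaptureSource::SCS_SceneDepth;
        Capture->OrthoWidth     = 10000.f; // area covered by the shadow map (assumption)

        // The whole point: do NOT re-render the depth map every frame.
        Capture->bCaptureEveryFrame  = false;
        Capture->bCaptureOnMovement  = false;
    }

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        // Refresh the depth map once per minute instead of every frame.
        GetWorldTimerManager().SetTimer(CaptureTimer, this,
            &AShadowDepthCapture::RefreshShadowDepth, 60.f, true, 0.f);
    }

    void RefreshShadowDepth()
    {
        // TextureTarget (the depth render target sampled by the material)
        // is assumed to be assigned in the editor or on spawn.
        if (Capture && Capture->TextureTarget)
        {
            Capture->CaptureScene(); // one-off capture on demand
        }
    }

    UPROPERTY(VisibleAnywhere)
    USceneCaptureComponent2D* Capture;

    FTimerHandle CaptureTimer;
};
```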
Shadow rendering also renders the scene, so “more tris = less performance” applies to shadows as well; on top of that, shadow passes have to render offscreen objects too, which means they can end up rendering more triangles than the main view does. For this reason, not rendering shadows is more performant than rendering them (hence why disabling shadows is an option at all).
He’s already built the system, so it’s not like he hasn’t tested it. His only problem now is fixing the bias issues.
@somawheels Have you considered modifying the engine itself to only render shadowmaps when you request it to? That way you can keep UE4’s shadows, but only update them when you need to.
You should be spawning them in as stationary, in that case. Problem solved.
Edit: New forum was being stupid and duplicated my post when I tried to edit it. Ignore the post above.
Already tried that: you can’t. Set Mobility is the only mobility node, it is only callable in the construction script, and it only works on Static Mesh Actors. And even if it did work, where is the static shadowmap going to come from when the object was created at runtime?
HOWEVER: Rama’s Blueprint Library does have a Set Mobility node, and it works. But it suffers from the problem above: where does the static shadowmap come from? Plus, once you change a component to static or stationary it can no longer move (and moving it causes warnings), so in order to move an object you have to set it back to movable first.
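For what it’s worth, if you’re in C++ rather than Blueprint, mobility can be changed at runtime directly on the component (I assume Rama’s node wraps something like this, but that’s a guess). It still doesn’t solve the missing static shadowmap:

```cpp
// Illustrative only: switching a runtime-spawned mesh to Stationary in C++.
// This does not conjure up a baked shadowmap; the object is still lit dynamically.
#include "Components/StaticMeshComponent.h"

void MakeStationary(UStaticMeshComponent* Mesh)
{
    if (Mesh)
    {
        Mesh->SetMobility(EComponentMobility::Stationary);
    }
}
```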
Once again, the main topic has already been solved, so that’s not the problem anymore. It’s the biasing on the shadows; he needs them to match Unreal’s shadowmaps.
Stationary objects, not light. DO NOT USE STATIONARY LIGHT IN A FULLY DYNAMIC ENVIRONMENT.
Bits360, I found this interesting, so I tested it. It was not true for me: my stationary objects still use an extra draw call when they have shadow casting turned on.
Yeah, they are lit like Movable Actors (from Actor Mobility | Unreal Engine Documentation):
- For Static Mesh Actors, this means that they can be changed but not moved. They do not contribute to pre-calculated lightmaps using Lightmass and are lit like Movable Actors when lit by a Static or Stationary Light. However, when lit by a Movable Light, they will use a Cached Shadow Map to reuse for the next frame when the light is not moving, which can improve performance for projects using dynamic lighting.
Is there any way you can share how you set up the light blueprint and render target, assuming you still have access to it?
I’m wondering because I’m trying a similar setup for an unlit material. The issue isn’t necessarily how the material is set up, but rather that I’m unsure about the setup process for the directional light. I have a render target Blueprint and such, but I’m unsure whether it needs any specific positioning or properties.
I see there being a practical use for this kind of setup with a toon shader, the reason being that the engine doesn’t expose lighting information to materials in an accessible manner (the hack that was done for the forward renderer for a while is now gone), and this kind of setup would save me (and probably many others) from having to edit a large source build of the engine to hack in something close.
I’m aware of the performance cost of this; I just feel that the engine’s own lighting system and post-processing have some very noticeable limitations that not even a source build of the engine can fully solve.
I never did get this working well. But I can explain the gist of it.
You need to project the depth map onto the world in your unlit shader. I used the RotateAboutAxis node to rotate the world position to the same angle as the directional light, using the directional light’s position as the pivot point for that rotation.
Then you use the XY components of the rotated world position as the UVs for the depth map.
Then you compare the depth map value with the actual distance between the geometry and the directional light to produce a mask/shadow.
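In plain C++ terms, the maths amounts to something like this (just a sketch of the idea, not the actual material graph; the OrthoWidth, bias and exact UV mapping are assumptions, not pulled from my material):

```cpp
// Sketch of the projection/comparison described above, written as plain C++
// for readability. In the real thing this lives in the material graph
// (RotateAboutAxis etc.).
#include "CoreMinimal.h"
#include "Templates/Function.h"

// Returns 1 when the pixel is lit, 0 when the stored depth says it is in shadow.
float ShadowMask(const FVector& WorldPos,
                 const FVector& LightPos,                     // pivot of the rotation
                 const FRotator& LightRot,                    // directional light rotation
                 float OrthoWidth,                            // world width covered by the capture
                 float DepthBias,                             // fudge factor against shadow acne
                 TFunctionRef<float(FVector2D)> SampleDepth)  // reads the captured depth RT
{
    // Equivalent to rotating the world position about the light position so the
    // light direction becomes the depth axis: project onto the light's basis vectors.
    const FRotationMatrix LightBasis(LightRot);
    const FVector Forward = LightBasis.GetUnitAxis(EAxis::X); // along the light
    const FVector Right   = LightBasis.GetUnitAxis(EAxis::Y);
    const FVector Up      = LightBasis.GetUnitAxis(EAxis::Z);

    const FVector Rel = WorldPos - LightPos;

    // The "XY" of the rotated position become the UVs into the depth map.
    // (One of these usually needs flipping, depending on the capture orientation.)
    const FVector2D UV(
        FVector::DotProduct(Rel, Right) / OrthoWidth + 0.5f,
        FVector::DotProduct(Rel, Up)    / OrthoWidth + 0.5f);

    // Distance along the light direction vs. the depth stored in the capture.
    const float PixelDepth    = FVector::DotProduct(Rel, Forward);
    const float CapturedDepth = SampleDepth(UV);

    return (PixelDepth <= CapturedDepth + DepthBias) ? 1.f : 0.f;
}
```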
Yeah, I could see how you were achieving this through the material editor. It’s just that while trying to recreate the material one-to-one with the example shown, I noticed that the material always looks white unless I move the render target camera to the middle of, or below, the material, so I suspect I don’t have something set up right on the Blueprint side. It seems to be handling the light position fine, but the shadow-casting part doesn’t seem to be working.
Here’s how I currently have the material laid out (I’ll probably approach filtering soon, but I am working towards getting something working first). I noticed that I had to clamp the final emissive output because otherwise it would appear too bright.
I meant to ask whether there are any specific settings or Blueprint-related things you had to do to the component the SceneCaptureComponent is attached to (in this case, a pseudo directional light) before doing anything else. It’s just that there isn’t much to go off of when checking whether I’m doing anything correctly.
I would ask if you still have the files for that around so I could take a look, but I’m unsure if that’s something you can do.
The closest Unreal-specific thing I think you’ll find for scene capture / render target shadows is this video series:
It’s not exactly what you’re looking for, but there is a lot of overlap.
It doesn’t actually do a proper light space to view space transformation of the depth map, but that’s because it’s using decals so it can kind of hack it and get a good enough result.
I don’t have all the files I had back then, I’m afraid, so I can’t easily send you a working example.
You may not have the “Direction” material parameter set up correctly. I believe this is how I did mine:
This is the RotToQuatLinearCol function’s code:
For testing purposes, I would recommend setting the scene capture to output colour; that way you can better see what is going wrong with the UVs/projection.
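If it helps, here’s roughly how the Blueprint side could feed the material, written as C++ for brevity. Only the “Direction” parameter name comes from my setup; everything else here is illustrative, not my original Blueprint:

```cpp
// Illustrative only: feeding the "Direction" parameter and toggling the capture
// to colour output for debugging. Names other than "Direction" are hypothetical.
#include "Components/SceneCaptureComponent2D.h"
#include "Materials/MaterialInstanceDynamic.h"

void UpdateShadowMaterial(UMaterialInstanceDynamic* ShadowMID,
                          USceneCaptureComponent2D* Capture,
                          const FRotator& LightRotation,
                          bool bDebugColour)
{
    if (!ShadowMID || !Capture)
    {
        return;
    }

    // Pass the light's forward vector to the material's "Direction" parameter.
    const FVector LightDir = LightRotation.Vector();
    ShadowMID->SetVectorParameterValue(TEXT("Direction"), FLinearColor(LightDir));

    // While debugging, capture colour instead of depth so projection/UV problems
    // are visible; switch back to depth for the real shadow mask.
    Capture->CaptureSource = bDebugColour
        ? ESceneCaptureSource::SCS_FinalColorLDR
        : ESceneCaptureSource::SCS_SceneDepth;

    Capture->CaptureScene();
}
```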