Whether in shadow or in direct sunlight, it's the same. It wouldn't need to be animated; the objects will be static meshes. I only animated the GIF to illustrate the point.
I saw similar questions here, but either without answers or with answers that are difficult to understand. And I think all of those questions were about a character, which is cheaper to detect. But what about thousands of static objects?
Most people said you need to perform a line trace (raycast). Can this be done for many objects? It could maybe be done once every 10 seconds or so, since the objects are static. Someone also said it will be easier if the light is directional. In that case I only need to “get the forward vector of the light, and trace from the location in the opposite direction of the forward vector.”
But I have no idea how that can be done. I’m guessing I need to do it in Blueprint, where I’d get an output I could use to switch between 2 materials depending on this line trace. Any pointers?
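The quoted advice can be sketched outside the engine. This is only an illustration of the geometry (the helper name and the sphere occluders are my own stand-ins, not Unreal API; in a Blueprint you would use a real line trace such as LineTraceByChannel against actual collision): trace from the object's location in the opposite direction of the light's forward vector and see whether anything blocks the ray.

```python
def in_sun(point, light_forward, occluders, max_dist=100000.0):
    """Return True if `point` can see the sun.

    point         -- (x, y, z) world location of the object
    light_forward -- unit vector the directional light shines along
    occluders     -- list of (center, radius) spheres standing in for the
                     engine's collision geometry (an assumption; Unreal
                     would do a real line trace here instead)
    """
    # Trace direction: the opposite of the light's forward vector, i.e. toward the sun.
    d = tuple(-c for c in light_forward)
    for center, radius in occluders:
        oc = tuple(c - p for c, p in zip(center, point))
        t = sum(a * b for a, b in zip(oc, d))        # projection of center onto the ray
        if t < 0.0 or t > max_dist:
            continue                                  # occluder is behind the point or too far
        closest_sq = sum(c * c for c in oc) - t * t   # squared distance from ray to center
        if closest_sq <= radius * radius:
            return False                              # something blocks the sun
    return True
```

With that boolean in hand, a Blueprint could set the material (or a scalar parameter driving a lerp) once per object, for example on BeginPlay or on a slow timer, rather than every frame.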
That changes how the material is drawn in the depth pass that generates the shadow, not how the material looks when in shadow.
This is impossible with deferred rendering, because the lighting and shadow passes are done after the base pass, so the material cannot use shadow information: it does not exist yet.
You would need to perform a shadow pass before the base pass, but the base pass can alter depth (such as when using PDO or WPO) so we cannot guarantee that the depth at this stage will actually be correct. But if we disregard this, a render target could be generated with a shadow map that could be sampled in the material.
Alternatively this can be done with forward rendering although this tutorial is outdated.
Edit: it's impossible to do in a shader. Blueprints can trace a vector towards a light and test for collision, but this would be per object, not per pixel, and also very expensive at scale.
Per object would be fine. I’m hoping that a line trace can detect (roughly) whether an object is visible to the sun, and if not, I could activate a switch or lerp in the material and switch to a different branch of the material.
But how can I do that?
I only found this tutorial
UE4 - Highlight objects by switching materials using a LineTrace
It deals with raycasting and changing a material, but it does not have the directional light part in it.
And how expensive would it be to detect whether thousands of objects are in the sun or in shadow? Maybe it could be a one-time operation, not a continuous one?
And how can I generate that shadow pass before the base pass? And can that custom shadow pass be fed back into the material with the shadow pass switch?
Shadow pass switch is irrelevant for your use case. It changes the shadow, not things that are in shadow.
The only way I can think of to generate a shadow map before the base pass without changing engine code would be to use an orthographic scene capture component to capture a depth buffer of the scene from the angle of the sun to a render target and make the shadow map yourself from scratch using it. You would need to understand how shadow mapping works fully. But yes it would then be possible to sample it as a texture.
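The depth compare at the heart of that do-it-yourself shadow map can be sketched like this. It's a toy model, not engine code: I'm assuming the light looks straight down -Z, the "render target" is just a dict of grid cell to nearest depth, and the bias value is a placeholder you would have to tune against shadow acne.

```python
def build_shadow_map(surface_points, cell_size=1.0):
    """Orthographic depth capture from the sun: keep the nearest depth per cell."""
    shadow_map = {}
    for x, y, depth in surface_points:  # depth = distance from the light's near plane
        cell = (int(x // cell_size), int(y // cell_size))
        shadow_map[cell] = min(depth, shadow_map.get(cell, float("inf")))
    return shadow_map

def is_lit(shadow_map, x, y, depth, cell_size=1.0, bias=0.05):
    """A point is lit if nothing in the map is nearer to the light than it is."""
    cell = (int(x // cell_size), int(y // cell_size))
    return depth <= shadow_map.get(cell, float("inf")) + bias
```

In the material, `is_lit` corresponds to sampling the captured render target at the pixel's position projected into the light's orthographic space and comparing the stored depth with the pixel's own light-space depth.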
If it’s totally static, you only need to test the ray when the object is added to the scene, or when the light moves. If neither ever moves, there is no runtime cost: just check it with a ray at build time. If the sun moves, every single one would need to re-test, and that could be very expensive, but the only way to say how expensive is to test it. No one is going to be able to give you a number.
Your shadow map solution could be helpful. But maybe I should try some kind of simpler workaround. After all, even in the GIF animation posted above (3ds Max), the objects are not really changing materials based on the shadow; it’s a “hack” with an invisible box which acts as a “gizmo”: where the gizmo moves and overlaps the objects, it changes their materials.
The post process volume is something similar, but as far as I know it can only change things like the brightness and exposure of the display.
There is also the pixel depth node, which can change materials based on the distance from the camera. It would be great if there were a similar solution that could change a material based on the distance between 2 objects. And there is: the Distance node. But I’m not sure it can be used in the following scenario:
I have the objects in the scene. Then, wherever there is shadow, I clone that “gizmo” object and place it on top of the object(s) in the shadow. So in the end I have multiple gizmo objects with different sizes and maybe different shapes in the scene. Those gizmos should be connected to one input of the Distance node, and the visible objects to the other input. But I’m not sure what kind of logic would be needed in the material graph to get this effect. Is it possible?
There are ways you can fake it, sure. Distance probably isn’t ideal. Material Parameter Collections can pass values from a blueprint into the material, such as a point in space to measure distance against. But the number of vectors needs to be pre-defined, you can’t just keep adding more points, and every point you add makes the material more expensive for all of the objects using it, even if they are nowhere near those points. And since distance is measured against a point, it would be a spherical mask, not a box.
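To make the cost and shape argument concrete, here is a sketch of what such a mask does per pixel. The names are illustrative; in the material this would be a fixed-size chain of distance checks against vectors fed in from a Material Parameter Collection, similar in spirit to Unreal's SphereMask node. Every pixel measures its distance to every point, so cost grows linearly with the point count, and the falloff is inherently spherical.

```python
def sphere_mask(pixel_pos, points, radius):
    """1.0 at any point's center, fading linearly to 0.0 at `radius`."""
    best = 0.0
    for p in points:  # every point costs every pixel, near it or not
        dist = sum((a - b) ** 2 for a, b in zip(pixel_pos, p)) ** 0.5
        best = max(best, max(0.0, 1.0 - dist / radius))
    return best
```

The returned 0..1 value is what you would feed into a lerp between the two material looks.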
You could also just stick a decal on them to color them. You could just manually select them and change their material.
If it is totally static and will never change, then you just pre-compute it once when the object is added to the scene.
Someone might be able to offer a better solution if they knew what exactly you’re trying to achieve and under what conditions. Is it visual only, does it need to impact gameplay somehow? Is it 100% static, or will the object or light move at any point during runtime?
There’s endless ways you could achieve something, but there’s probably only one best way and it’ll depend on your use case. But without that information people are just shooting in the dark.
Also, all of this only works if you’re only concerned about the sun. Once you try to factor in a second light, all of the costs and challenges double.
There is no gameplay; this is for a movie. The objects are static, but they have very slight vertex animation (since they are mostly plants; I think it’s called WPO). The PBR paradigm was supposed to simplify material creation, make it more physically based, and lead to a solution which works everywhere, be it in the sun, at sunset in orange light, and so on. But since this is still a game engine, with many “shortcuts” taken to achieve incredible real-time lighting, it seems there are materials and situations which don’t work in every type of lighting with a single material.
I tried hard to create materials which work no matter what lighting there is in the scene, but after a while I realized that for better visual results I could just change the material depending on the lighting, even if it’s the same type of object.
Automatic would be great, but if that’s not possible, manually placing some guide objects that change the materials wherever the guides are would work too. And usually in a cinematic there are lots of adjustments anyway; it’s not like a real-time game.
I could also select the objects in the shadow or in the light and just change the materials, but that would be the third and most time-consuming option.
Having guides over large areas, where you could simply add and change the many small actors/plants during scene building, and they would just adopt the “shadow” or “in the light” materials, would be a much better and much faster workflow.
Since the scene is very detailed (I’m taking advantage of Nanite’s “infinite” polygons), I’m trying to automate and simplify the workflow as much as possible.
For a movie, you can render the scene once using the normal material.
Render the lighting/shadows only.
Then render the scene a second time using the alternative shadow material.
Go into your video editing software, and composite the alternative material onto the original scene using the shadow as a mask. There is no point in fighting the limitations of real time deferred rendering if this is for an offline cinematic render.
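The compositing step described above amounts to a per-pixel lerp, which any editing or compositing package does given a mask input. As a toy stand-in (pixels as RGB tuples, mask values in 0..1; a sketch of the math, not of any particular software):

```python
def composite(original, alternative, shadow_mask):
    """Blend the shadow-material render over the normal render where the mask is white."""
    out = []
    for orig, alt, m in zip(original, alternative, shadow_mask):
        out.append(tuple(o * (1.0 - m) + a * m for o, a in zip(orig, alt)))
    return out
```

Fully lit pixels (mask 0) keep the original render; fully shadowed pixels (mask 1) show the alternative material, and a soft shadow edge blends between the two.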
I used to do things like that all the time, and I’ve researched and learned a lot about Unreal’s passes and how I can render separate elements and so on.
For example the depth of field system is pretty good and for certain shots I will be able to use the integrated DOF system. But for other shots I will have to render separate elements and a depth pass and do it in post. So it’s not like I can escape the render elements and composite stuff.
But I will at least try to do as much as possible “in camera”, if at all possible, and if I don’t have to fight the engine too much, as you said. Plus I enjoy the tinkering part and building custom stuff in engine. In 3ds Max I made lots of tools and custom stuff for myself, and the same for CryEngine: back in 2010 I made a raytrace-like motion blur and depth of field, plus area shadows, with a multi-sampling system.
These days I will not have that much time for custom tools and stuff, but I will still try to get certain things in engine. It’s so great to build worlds with real-time GI.
Each object in my game can only have one unique colour and black. The object material is a standard lit material that takes a texture and converts it into a 1-bit black and white image before applying a colour to the white parts.
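That thresholding step can be sketched per texel like so. The luminance weights and the 0.5 threshold here are assumptions for illustration; the actual material may use a different conversion.

```python
def one_bit_color(texel_rgb, object_color, threshold=0.5):
    """Convert a texel to 1-bit via luminance, then tint the white part."""
    r, g, b = texel_rgb
    # Rec. 601 luminance weights, chosen here as a plausible stand-in.
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    return object_color if luminance > threshold else (0.0, 0.0, 0.0)
```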
I also have a bastardised cel shader that bypasses shading and instead either prints the DiffuseColor scene texture or black. There’s an outline post process shader too but that can be ignored for now.
In the cel-shaded shadows, I’d like to have the inverse happen - the black would be the colour and the colour would be the black. Ideally I’d have this part sent to its own texture, like a second material that I can access when I want, but I’m not sure that’s possible. Even in the forward renderer it was janky, and post processing is a nightmare with the forward renderer.
This is the gist of the cel shader. At the moment I’m desaturating it to black and white to try and get something out of it, but for some reason that doesn’t work either. Here is the regular desaturated diffuse, the inverted one, and the combined result with the skybox (which leaves the shadows completely black, because the normal diffuse overlays on top of the inverted one). I’d then recolour them using the custom stencil, because I’m fortunate enough to be using only one colour per object.
Like you, I would like to do this within the material because that gives me control over the threshold for colour, so there’d be more black in the shadow texture than just the inverse of the light texture.
I also tried passing the secondary texture through other nodes on the material, to no avail, even split into R/G/B and then reassembled in the post process, but the way the different buffers work makes that data impossible to preserve.
Here I’m figuring out how shadowed an area is by comparing the base color vs the final render.
Then I’m using this to lerp between two thresholds for a step, turning the image black and white. In this case I’m using a fixed set of thresholds, but you could set them per object by reading your stencil value. Then I’m colorizing it (in this case just with the original base color, but once again this could be done by stencil).
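If I read those steps right, per pixel they boil down to something like the sketch below. The threshold values and the exact wiring are my assumptions; the real post-process material may differ.

```python
def cel_pixel(base_lum, final_lum, lit_threshold=0.2, shadow_threshold=0.6):
    """Compare base color vs final render, then step with a shading-driven threshold."""
    # How lit the pixel is: ~1.0 in full sun, approaching 0.0 in deep shadow.
    shading = min(final_lum / max(base_lum, 1e-5), 1.0)
    # Lerp between the two thresholds by how shadowed the pixel is.
    threshold = lit_threshold + (shadow_threshold - lit_threshold) * (1.0 - shading)
    # Step: white where the shading clears the threshold, black elsewhere.
    return 1.0 if shading > threshold else 0.0
```

The 0/1 result could then be multiplied by a per-object colour looked up from the custom stencil value, as described above.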
And @Sebastian , this technique may work for your case too. That is to colorize the material as a post process, where you can determine if something is in shadow or not. It has its limitations, but it’s fairly simple to do as shown.
Since I’m a complete beginner, the CubeIsBad issue and your solution go a bit over my head. But I will read it again a few more times and try to figure it out. The part about comparing the base color with the final render sounds good, but in the end the result of the comparison must be accessible in the material editor.
For now I’m happy I managed to solve something which, seemingly, the forums and YouTube don’t know. Reading here and there, I discovered that I supposedly could not use a static switch to dynamically switch a material temporarily to a cheaper version, so I can have more fps during work. As many people said, it’s in the name, “static switch”: it cannot be switched dynamically from a button I created with a Blueprint and widget. But in fact it can be switched. You just have to add an “update material” node at the end.
I also discovered that on the static switch node you can check the “Dynamic Branch” option, and again, it works, but with that option it has errors with certain node combinations, probably because of Substrate.
I learned that I could probably use that dynamic switch with dynamic materials, or with a static mesh node inside the Blueprint, but I wanted to just affect a material, unrelated to any mesh. And I thought that maybe dynamic materials are more expensive and “stranger” to use.
And about the original issue in this thread: I thought I should try to use the Distance node after all. It would be hard (or impossible?) to just select the objects in shadow and change their material, because most objects will be painted with the foliage painter, and they will be in shadow and sun all together.
But I simplified the problem and realized that not all objects in the shadow will need to have their material switched; probably just a few groups/patches of objects here and there, the most visible ones. So again, I think I could move some invisible boxes or spheres close to the targeted objects in the shadow, and in the end merge or boolean all the invisible boxes into a single object and have that object be the target for the Distance node.
Is it possible to have only a certain actor or object “X” interact with the distance field node? Because otherwise all objects with that material will interact with each other. I can’t find a way to reference a specific object in the material graph.