There is no strong motivation to make these into official nodes since all of the inverse trig functions are very slow to compute. By not exposing them outright, it creates a firewall where you need to understand what you are doing in order to use them.
The team should really reconsider this position. I’ve been bumping up against this and other similar decisions where nodes only work in certain contexts in the material editor.
You say these functions are very slow to compute, but Epic’s decision not to implement them as nodes imposes an additional penalty on anyone who wants to use them, by forcing devs into the custom node. The devs you are putting a firewall in front of are the devs who know what they are doing, and knowing what you are doing doesn’t necessarily mean using the tools as intended.
Additionally, the code in the custom node is compiled as is, correct? It’s an edge case, but forcing your devs to deviate like this breaks the portability of the assets they create.
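To illustrate what I mean by devs who know what they are doing: the usual workaround when an exact `acos` is too slow is a cheap polynomial approximation of the kind you’d paste into a custom node. Here is a sketch in Python rather than HLSL, using the standard Abramowitz & Stegun 4.4.45 coefficients (max error around 6.7e-5); the function name is mine, not anything in the engine:

```python
import math

def fast_acos(x):
    """Polynomial approximation of acos(x) on [-1, 1].

    Uses the Abramowitz & Stegun 4.4.45 cubic (valid on [0, 1],
    max error ~6.7e-5), extended to negative inputs via
    acos(-x) = pi - acos(x).
    """
    negate = x < 0.0
    x = abs(x)
    # Cubic in Horner form, then scaled by sqrt(1 - x).
    ret = -0.0187293
    ret = ret * x + 0.0742610
    ret = ret * x - 0.2121144
    ret = ret * x + 1.5707288
    ret = ret * math.sqrt(1.0 - x)
    return math.pi - ret if negate else ret
```

The same handful of multiply-adds translates directly into shader code, which is exactly why exposing the exact intrinsics as optional nodes would not be a performance trap for anyone who reads a tooltip.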
I would caution developers at Epic not to assume they know what devs might do with Unreal, and not to try to save the unwashed masses from their harebrained ideas and ignorant use of the tools. Let us do stupid things. I’ve seen a call from Epic for Unreal-based projects that are not games; Epic could aid that by not putting its tools in game-developer shackles. Someone may use Unreal to make a toy, educational software, or a utility of some sort that doesn’t need to run at 90 FPS in 4K, and they might find these things useful. If anything, the material editor’s functionality should be a superset of any available shading language. Implement all the trig functions; don’t tell me when I can and cannot get something like a light vector because Epic assumes it is of no use in a deferred renderer; consider implementing better gradient tools and procedural pattern generators, the things an artist might find in a 3D package or offline renderer. Please don’t gimp your tools because you think you know better than your customers.
Thanks for your feedback. It is useful and I will pass it along to the rendering team.
Regarding this comment:
"nodes only working in certain contexts "
What exactly are you referring to, if you don’t mind? Many limitations are there because something is simply not possible, such as reading scene color in opaque materials, but perhaps others are similar to the missing nodes?
Keep in mind this feedback is from someone with extensive programming, game and film development, and 3D art experience who is new to Unreal. I say this not to shield the feedback from dismissal but to frame it as the perspective of someone who is not clueless having an initial reaction to Unreal’s toolset.
Specifically, I was referring to the LightVector node. The fact that this is in the toolset leads me to assume there are other instances like it that are bound not by technical constraints but by developer whim. LightVector errors out if you try to use it in any context other than a Light Function or Deferred Decal. I can still get a light vector wherever I want, but I have to use a material parameter collection to do it, which is an entire additional, obtuse layer. That works as long as I want the same light vector(s) for every instance of a material that requires one, but, as idiotic as it may sound, that is often not what I want. I intend to put some materials on the marketplace that use this functionality, and I think it will be a bad user experience that users will need to duplicate the material, crack it open, create a whole new material parameter collection, and set up a blueprint whenever they want unique light vectors in different instances of the material. This should be a simple parameter exposed in the material interface.

I am trying to do something in Unreal that doesn’t obey the laws of photorealism, and it feels like the decisions made in Unreal’s design put up constant small roadblocks engineered to force devs to use the tools the way Epic intended: to create high-performance games with a decidedly photoreal or hyperreal look. It’s frustrating because I can see that everything I need is in the engine, but it has been intentionally engineered, according to the answers I see here and on the Answer Hub, to make it difficult or undesirable to do certain things because someone couldn’t imagine an end user doing anything other than making a photoreal game.
As long as I’m airing grievances, the user facing glossary exposed in the toolset is inconsistent and confusing. Examples:
Everyone knows what a lerp is. Every programming language seems to have a lerp(), your editor node is called lerp, yet it is presented in the menu as LinearInterpolation.
2Vector, 3Vector, 4Vector. No defined type in C/C++, Cg, or HLSL begins with a numeral; for a user-defined type it’s bad form, if not outright forbidden by the compiler, in every language I’ve used. Why is this acceptable in a visual programming interface built on these languages? Vector2, Vector3, Vector4 is the common presentation in other software and interfaces.
A vector can be a lot of things, particularly a Vector3. Why is it always presented with a color picker in the user interface? I think this is confusing for less technical artists who use materials developed by someone else. Although a color is a vector, it is common for other tools and languages to make a type distinction to eliminate the confusion between vectors representing colors and vectors representing positions or directions.
BreakOutFloatNComponents? Why not a more understandable name like VectorNToFloatN?
MakeFloatN. Another head-scratcher. It creates a vector from N floats; the tooltip even says this. Why is it not called MakeVectorN or FloatNToVectorN, to better align with the vector conversions above?
The whole menu organization is pretty haphazard. Nodes exist under multiple headings; not awesome, but accepting that, there are many nodes of fundamentally the same type and use that do not live in the same place(s) in the menu. Some vector and “coordinate” node names have a WS postfix while others that are also in world space do not. Why is there even a WS postfix at all? What other space might be applicable to something like a camera position? It’s as if you can tell just from the naming conventions that Dev A implemented these nodes, Dev B did those, Dev C… etc.
There are a ton of small issues like the above that I think are pretty obvious, and addressing them would go a long way toward making the tools more accessible and user friendly for both artists and programmers. All in all, Unreal is one of the most sensible and cohesive tool sets for game development I’ve seen from a top level, which makes it all the more frustrating to see things get so inconsistent and sideways at the lower levels of the toolset.
I love Unreal. There are a lot of things about it the dev team nailed in terms of design and implementation, but I’m consistently encountering issues that I then read about on this forum or the Answer Hub that just baffle or frustrate me, especially when I see explanations that amount to “the devs don’t want users to do that.”
Yes, the LightVector limitation is there because UE4 is a deferred renderer. Back in the UE3 days it was a forward renderer, so you could actually get the light vector for each light, but for that to work it was actually re-rendering your whole material/mesh again for each light. Now instead it accumulates all of the data into what is called a “GBuffer” and then applies all the lights to the final image. The light vector at that point simply does not exist at the object level. Doing what you did with the Material Parameter Collection is indeed the correct way to do it now, and you shouldn’t think of it as a lesser method. If you are making a content pack out of this stuff, simply include your MPC asset along with a blueprint that sets it based on a selected light, and then tell people to use it.
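To make the “why the light vector doesn’t exist” concrete, here is a toy single-pixel sketch of the deferred flow in Python (not engine code; the buffer layout and Lambert-only lighting are simplifications). The per-light direction is only ever visible inside the lighting loop, which runs after every material has already written its output:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

# Base pass: each material writes its surface data into the G-buffer.
# (One pixel here; a real G-buffer is screen-sized.)
gbuffer = {"albedo": (0.8, 0.2, 0.2), "normal": (0.0, 0.0, 1.0)}

# Lighting pass: loop over the lights, accumulating into scene color.
# The light direction exists only inside this loop, so no material
# node could ever read it -- the materials have already finished.
lights = [
    {"dir": normalize((0.0, 0.0, 1.0)), "color": (1.0, 1.0, 1.0)},
    {"dir": normalize((1.0, 0.0, 1.0)), "color": (0.3, 0.3, 0.3)},
]

scene_color = [0.0, 0.0, 0.0]
for light in lights:
    n_dot_l = max(0.0, dot(gbuffer["normal"], light["dir"]))
    for i in range(3):
        scene_color[i] += gbuffer["albedo"][i] * light["color"][i] * n_dot_l
```

The MPC approach effectively smuggles one chosen light’s direction back into the base pass as an ordinary parameter, which is why it is the right tool here rather than a workaround.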
We would never go out of our way to make something harder to use or more confusing. It is simply a technical limitation of deferred rendering.
You have a good point about the haphazard naming conventions. Many of those are old legacy names from UE3 or even earlier, and the material functions lack a cohesive naming scheme because nobody is really in charge of policing it beyond reporting outright bugs. I think we need somebody to have that job, and I wouldn’t mind helping out there, but so far that is not the case.
The functions showing up in 2 categories is a bug. At some point, code changed and ALL functions got added to “Misc”. Sadly this is saved in the assets so we have to go through them one by one and remove the misc category. This is something we need to do for sure but it is not scheduled just yet.
Honestly, some of the names are subjective, such as “Break Out Float 2”. Personally I think that name is good because it matches how you break vectors inside of blueprints in UE4, and because I can easily get it in the search by typing ‘break’. If we made it VectorNToFloatN, it would hide a bit more among a bunch of similar names: you would type ‘vector’ and see every vector operation listed.
@RyanB I don’t know how deferred rendering is implemented under the hood in UE4, but perhaps you have something like a lighting buffer? Either combined light contribution for each pixel on the screen or maybe even something like a light function stored using spherical harmonics.
To avoid confusion: the first case is a buffer that stores the calculated contribution from the light sources for each pixel on the screen, not combined with albedo. The second case is a buffer that stores the direction/intensity/color of the light arriving at a specific surface point on the screen; some techniques use low-order spherical harmonics for this.
If the engine has something like the second case, we could use it for custom materials. A buffer from the first case is very handy for stylized rendering where you want to shade shadows with different strokes; someone had this issue with non-photorealistic rendering and ended up extending the engine with C++.
Dynamic lights are still actors in the world. The renderer accumulates the base pass of your materials into buffers and then basically does a for loop over all the lights in the scene, adding the actual lighting into the scene color. Once the lights are rendered, there is absolutely no info about them anymore.
Sounds like a neat method you could write, though, but keep in mind you would be adding a whole new GBuffer, so it would be quite expensive to do that. Making the deferred case significantly more expensive in order to support a simple global light vector is probably not the best option when there could be other ways. It would be far easier for somebody to write a C++ option for directional light actors that says “this light overrides the light vector in all materials”. But then you would simply be skipping the MPC and forcing a global parameter for easy access, and it wouldn’t work with point lights, etc.
Ohh I see, so light contribution is completely resolved in a single pass. But how then is SSAO applied only to shadowed areas? Is it taken into account in the same pass?
Yes, the second technique is memory hungry: I had to use a 16-bit-per-channel buffer to get good accuracy, and the lighting buffer was already half to a quarter the size of the full buffer. It was used for a prototype of real-time GI that had to calculate the contribution of 512 secondary lights. The difference between calculating the final light contribution on a smaller light buffer with the “normal” approach, versus accumulating lights into a spherical harmonics buffer and resolving the light contribution in a later step, looked like this:
On the right, lights are resolved in the “normal” way, meaning that for each pixel the diffuse and specular terms are calculated on the low-resolution buffer; this is why normal map details are blurred.
On the left, lights are encoded into low-order spherical harmonics coefficients and added into the light buffer, as they are additive by nature: 4 coefficients per color, 3 target textures in total, 16 bits per channel. Even at a quarter of the resolution it is still memory heavy, but that was research anyway. In the next step, the light contribution is resolved on the full-size G-buffer, so all details are preserved.
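A rough sketch of that encode/resolve split, for one color channel (the constants are the standard first-order real SH basis; the cosine-lobe convolution factors used in a proper irradiance resolve are omitted for brevity, and the function names are mine):

```python
import math

def sh_basis(d):
    """First-order real spherical harmonics basis (4 coefficients)
    evaluated for a unit direction (x, y, z)."""
    x, y, z = d
    return (0.282095,          # Y_0^0
            0.488603 * y,      # Y_1^-1
            0.488603 * z,      # Y_1^0
            0.488603 * x)      # Y_1^1

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def accumulate(lights):
    """Encode: lights are additive in SH space, so each light just adds
    intensity * basis(dir) into the 4 coefficients. (One channel shown;
    the full version keeps 4 coefficients per color channel, hence the
    3 target textures.)"""
    coeffs = [0.0] * 4
    for direction, intensity in lights:
        for i, b in enumerate(sh_basis(normalize(direction))):
            coeffs[i] += intensity * b
    return coeffs

def resolve(coeffs, normal):
    """Resolve: at full G-buffer resolution, dot the stored coefficients
    with the basis evaluated at the per-pixel surface normal, which is
    why normal map detail survives."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(normalize(normal))))
```

The key property is that the cheap per-light work (accumulate) can run on the small buffer while the per-pixel work (resolve) runs once at full resolution.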
So practically, to get the light buffer from the first method we have to customize the pipeline. Do you think something like that could be done as a plugin, or is the only way to achieve it to change the engine itself?
SSAO only applies to the indirect lighting. It happens in many different places in the code because there are so many different code paths: some are in the base pass, some are in the post-processing shadows or dynamic skylight shaders. You could easily modify it to apply SSAO in the base pass if you want, but it requires a shader code change. I helped somebody do it a week or so ago; I just can’t remember off the top of my head where the final place we changed was (and it depends on what lighting setup you use, etc.).