Editing engine source to change mesh draw order?

I’ve been poking around the engine source trying to figure out where the Z-sorting of primitives happens, and the chain of code that leads up to that point. Ultimately, I’d like to be able to flag certain meshes to be drawn in front of all other primitives, without actually changing their world-space locations. As a visual elaboration, here’s a sphere mesh that illustrates the effect I’m trying to achieve:

BeforeAndAfter.png

Where in the engine source should I be looking? This article had the most info I could find on the topic. But I’ve been setting breakpoints all over the place in Runtime/Renderer/Private/StaticMeshDrawList.inl and they never get hit. In MSVC, I’ve tried performing a call hierarchy for “Calls To ‘SortFrontToBack’” and no results turned up. AnswerHub and forum searches for “TStaticMeshDrawList” provide no results. I’d really appreciate anyone throwing me a bone to chase after.

I would love to know if doing that is possible at all.
I had a project that needed it, but graphics engineers from Epic Games said sorting is tied to the depth buffer and other GFX stuff I don’t understand :slight_smile:

Maybe you’d have to go as deep as DirectX land. They didn’t help me solve the problem, so I just gave up on it.

Well, my DirectX skills are a bit rusty, but I definitely need to achieve this effect. My project has ground to a halt for three weeks now while I search for some kind of solution for pixel-perfect, fully interactable meshes rendered on screen that don’t collide with (or get occluded by) surrounding environmental meshes. This is for a card game that uses 3D card meshes, where the player has a 3rd person perspective and is free to roam around a 3D environment. So it makes perfect sense to have these cards rendered on top of everything except the UI…

RTT scene captures introduce perspective projection issues, making line traces look inaccurate.
The UViewport widget doesn’t seem to have a way to render a transparent background. Trying to cheat this via screen capture (to present in UViewport’s world) has a bag full of its own issues, and seriously kills performance.
So the only thing I can think of is to modify engine source to support this…

All I really need to know is which relevant parts of engine/shader code I should be looking into.

Why don’t you use materials with disabled depth test?

The card actors have widget components w/ UI materials applied to them for dynamic text and UI-related effects. Disabling depth test doesn’t work for UI domain materials.

I managed to get UTextBlock widgets to render to a texture, which allowed me to change the card’s material domain to Surface/Translucent. Disabling the depth test on the material introduces several VERY ugly artifacts, making the text basically unreadable when another mesh is occluding it. Also, it would seem that there is no way to have any depth or sorting information of any kind, which is a nightmare when you want certain card meshes to be in front of others.

Are you referring to this post? I just crossed paths with it while searching for more answers.
This sort of feature really does need to be present within the engine. I’ve looked over the code in /Source/Runtime/Renderer, and it doesn’t seem like it requires DirectX knowledge. Maybe some shader programming, but the larger points of interest (in reference to occlusion and sorting) appear to be:

  • SceneOcclusion.cpp > FDeferredShadingSceneRenderer::BeginOcclusionTests (particularly near the end, where there’s a call to SceneContext.BeginRenderingSceneColor)
  • SceneRendering.cpp > FSceneRenderer::DoOcclusionQueries
  • BasePassRendering.cpp > FSortFrontToBackTask and FDeferredShadingSceneRenderer::SortBasePassStaticData

And anything directly relevant to those bits of code. There’s a lot of stuff to sort through, and I keep losing my way, but I really see no other solution.
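To make the idea concrete, here’s a minimal standalone sketch of what a “draw these on top” flag on the sort could look like. This is conceptual C++, not engine code: the `DrawItem` struct and `drawOnTop` flag are hypothetical stand-ins, and note that draw order alone isn’t enough with a depth buffer — flagged items would also need their depth test disabled (or depth cleared) to actually appear in front.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical stand-in for a static-mesh draw-list element: a depth from
// the camera plus a flag marking meshes that should render on top.
struct DrawItem {
    int id;
    float depth;     // distance from the camera
    bool drawOnTop;  // hypothetical "always in front" flag
};

// Unflagged meshes sort first, front to back (as the base pass prefers for
// early-z rejection); flagged meshes sort to the end so they are drawn last.
void SortDrawList(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  if (a.drawOnTop != b.drawOnTop)
                      return !a.drawOnTop;   // flagged items go last
                  return a.depth < b.depth;  // otherwise front to back
              });
}
```

Whether the real `SortFrontToBack` path is the right place to hook this in is exactly the open question here.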
It’d be really nice if some of the graphics gurus or Epic staff could chime in here. For all I know, I could be really far from where I need to be looking.

Yes, this is sooooooo^N easy to set up in Unity or UE3;

But in Unreal 4 I remember everything I tried didn’t work for that dead project.

Well, I’m at a loss. Any changes I make to engine source (or shader) code don’t appear to affect much in terms of sorting draw calls or some related illusion.
The only other thing I can think of is to simply have an actor component attached to the cards which receives overlap events, stores a map of the current materials for any colliding UPrimitiveComponent or AStaticMeshActor, changes them to some other transparent material, and then changes them back when the overlap ends. This is very hacky, and sometimes leads to other undesirable artifacts (like being able to see into “the void”), but at least it’s a solution that works and doesn’t make gameplay impossible via unreadable (or over-distracting) text.
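The bookkeeping for that workaround can be sketched without any engine types — here components are just ids and materials are just names, both assumptions for illustration (the real version would map `UPrimitiveComponent*` to `UMaterialInterface*` and call `SetMaterial`):

```cpp
#include <map>
#include <string>

// Minimal sketch of the overlap workaround's bookkeeping. On begin overlap,
// remember the component's original material and swap in a transparent one;
// on end overlap, restore whatever was there before.
class MaterialSwapper {
public:
    // Returns the material the component should now use.
    std::string OnBeginOverlap(int componentId, const std::string& currentMaterial) {
        if (!original_.count(componentId))
            original_[componentId] = currentMaterial;  // remember once
        return "M_Transparent";  // hypothetical see-through material
    }

    // Returns the restored material, or empty if we never swapped it.
    std::string OnEndOverlap(int componentId) {
        auto it = original_.find(componentId);
        if (it == original_.end()) return "";
        std::string restored = it->second;
        original_.erase(it);
        return restored;
    }

private:
    std::map<int, std::string> original_;  // componentId -> pre-swap material
};
```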

I’m still open for advice, if anybody has some to give.

Have you solved this problem?

Sadly, no. But if you manage to make any progress toward this, please let us know!

I think you should use math and extra meshes to create illusions. In your example above, you can have 2 spheres: one placed to get the outcome in the left figure, and a second to get the outcome in the right figure. The flag can swap the visibility of the meshes. The math comes in because the scale of the second sphere needs to decrease (I think), since it is brought forward by X amount. With a method to calculate the size and scale of the second mesh based on the distance from the blue one, you can get the effect you want. Note that this becomes more complicated when you want complete control over the Z-order of three different meshes, but writing the method for this base case would help you springboard to the more complicated case. Additionally, you can just move the original meshes to the new locations when flagged, if you so choose. Hope this helps.
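The scale math for that trick is simple: with a perspective camera, projected size is proportional to size divided by distance, so pulling a mesh from distance dOld in to distance dNew while scaling it by dNew / dOld keeps its on-screen size unchanged. A one-liner, for the record:

```cpp
// If a mesh is moved toward the camera from distance dOld to distance dNew,
// scaling it by dNew / dOld preserves its apparent (projected) size, since
// projected size ~ worldSize / distanceFromCamera.
float CompensatingScale(float dOld, float dNew) {
    return dNew / dOld;
}
```

So a sphere brought from 10 units away to 5 units away needs to be scaled to half size to look identical.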

@MichaelWion if you are going to use a surface/translucent material on the meshes, each mesh has a property in the details panel (easier to find by typing in the search field) called Translucency Sort Priority, which defaults to zero. To render one mesh on top of another, you just need to give it a higher number than the other one. It only works for translucent materials though.
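The effect of that property can be sketched like this: translucent draws are typically grouped by sort priority, then composited back to front within each group, so a higher priority wins regardless of actual depth. This is a conceptual stand-in, not the engine’s actual sort code:

```cpp
#include <algorithm>
#include <vector>

struct TranslucentItem {
    int id;
    int sortPriority;  // like the mesh's Translucency Sort Priority (default 0)
    float depth;       // distance from the camera
};

// Higher priority draws later (i.e., on top); within one priority the usual
// back-to-front order for translucency applies.
void SortTranslucent(std::vector<TranslucentItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const TranslucentItem& a, const TranslucentItem& b) {
                  if (a.sortPriority != b.sortPriority)
                      return a.sortPriority < b.sortPriority;
                  return a.depth > b.depth;  // back to front
              });
}
```

Note this only reorders blending among translucent surfaces — it doesn’t beat the depth test against opaque geometry.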

@AnotherZach That won’t work for the project that I was working on at the time, because it was a 3rd person card game. You have a character that moves around within an environment, and combat sequences that deal with the interaction of cards (which are 3D meshes with UI-domain materials applied to them for dynamic text and whatnot). The cards appear in front of the camera, but this presents the issue of partially clipping through objects in the environment (since both are 3D meshes, and you have control of a 3rd person camera). I’ve also tried the offscreen rendering approach, but it had some issues with inaccurate raycasting vs projection. I’ve also tried simply “fading out” the meshes that the cards would be colliding with, but the results were terribly ugly – for example, you could often see through the environment “into the void”, or some environmental meshes were simply too large for such an effect to make any sense. Also, performance issues arise when everything in sight is needlessly translucent.

@NilsonLima I can’t remember if the card materials were translucent or not, but it doesn’t matter since the environmental meshes that they would be clipping with can’t all be translucent (again, due to performance reasons). Some maps simply have too many different meshes in plain view of the camera at a time, which is fine for static meshes with opaque surface materials. But as soon as you start adding translucency to everything, it gets messy.

I’m fairly certain that the only reasonable solution here either includes some magical shader code, some changes to render code, or both… And after poking around with the UE4 code base for a while in search for answers, I’ve long since admitted defeat and have been working on other projects. So perhaps these workarounds might work for some people, but personally, I’d be more interested in an actual solution to have more control over the renderer, rather than a workaround that effectively simulates this.

I will give this some thought from time to time and see if a light bulb comes on.

From my understanding (and I’m not an expert at rendering), deferred rendering doesn’t support the concept of depth priority in this way. So when UE4 moved to deferred rendering by default, it lost the ability to manually adjust the depth. I don’t know (as in, I haven’t looked) whether the experimental forward renderer supports it. I’ve heard that some projects build multiple passes, but that has the limitation that lighting can only affect a single pass. So in the case of a weapon (or other in-game mesh) you won’t get accurate in-game lighting. I’m sure there are workarounds, etc. If I remember correctly, for UT we simply made sure the 1st person weapons were scaled to always be inside the collision capsule, and then used panini projection to make them look right.
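For reference, the basic Pannini mapping mentioned here is small enough to write out. This is the standard formulation (azimuth/elevation in radians, compression parameter d, where d = 0 degenerates to rectilinear), shown purely as the math — not the actual UE4 shader code that applies it to weapon meshes:

```cpp
#include <cmath>
#include <utility>

// Basic Pannini projection: for view azimuth az and elevation el (radians)
// and compression parameter d,
//   s = (d + 1) / (d + cos(az)),  x = s * sin(az),  y = s * tan(el)
// Larger d squeezes the horizontal stretching that plain rectilinear
// projection produces at wide fields of view.
std::pair<float, float> Pannini(float az, float el, float d) {
    float s = (d + 1.0f) / (d + std::cos(az));
    return { s * std::sin(az), s * std::tan(el) };
}
```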

If the cards only need to render in front of the camera for the player to pick - just scale the cards down and put them inside the camera collision hull?

Exactly, then use the panini code to fix their appearance.

That idea sounds like it’s worth testing out, thanks! But I wonder if it would limit card animations to being orthogonal to the camera? I.e., could I still rotate along the local up axis of the card (as well as the local forward axis), give the cards depth (“Z-order”) vs other on-screen cards, etc.? How much camera hull space is there to work with? Can it be adjusted?

Scaling cards up/down in realtime could help to provide the illusion of forward/backward movement, but I think that illusion is ruined when a card’s “scaling animation” is occurring in front of or behind another card (it would simply look like the card is growing/shrinking instead). So that’s a concern.

No idea what you mean by this. Care to elaborate? Google just shows me bread recipes, lol.

EDIT: Oh nevermind, found it by googling “panini projection”, thanks!

@MichaelWion As promised, I’ve been giving this problem some thought from time to time, and while making a tutorial on something else, it came to my attention that it has potential to work for your case. My tutorial (published here, the 2nd half of the video) was about how to make a plane with a water material applied stay hidden inside a boat whose mesh intersects that plane; after all, you don’t want to see water inside the boat! The fix relied on Custom Depth with Stencils, which works on translucent and postprocess materials. While messing with that project, I saw that if you change the material to User Interface, the SceneTexture:CustomStencil node does not cause compilation errors. I would test this approach with UI materials, because there is no other alternative built into the engine as it is.