Why can't we have nice things? Character render depth

Seriously…

How hard would it be to just have the engine natively render a mesh on top of everything else?

I’m not 100% positive since it’s been ages, but didn’t MGS3 essentially do that to prevent clipping when crawling around?
What was that, 2004? Almost 20 years and we still have clipping issues to work around in Unreal.

It’s kind of Unreal, no?

Do not suggest “transparency”. Making a character transparent just so it goes through the transparency pass is a ridiculous workaround…

Unless there’s some magic PP stuff I haven’t discovered yet, the engine just doesn’t offer a good way to do this at a reasonable cost.

In deferred.

Though I suppose I should just swap to forward at this point. It’s not like .27 is performing all that much better than .26… and at least the forward path is something I haven’t tested or benchmarked in 2 years…

I have a hard time understanding what your actual problem is.

If the character renders and ignores depth testing, then the character’s triangles will interfere with each other – an arm “behind” the character might suddenly render “in front.”
If the character somehow depth tests itself, but ignores level depth testing, then any tree/pole/box/building in front of the character will suddenly be unable to hide the character.

And, if you say that rendering in the transparent pass does work for you, why wouldn’t you use that? Most characters have a bunch of transparent bits anyway (hair, clothing, etc.). Just move all the materials there, and if it works, then use it.

But I fundamentally don’t understand what it is you want to do. Pretty sure that “natively render a mesh on top of everything else” isn’t actually what you want, unless all your geometry literally goes behind the characters, with no foreground parallax. Which is, shall we say, “not a common choice” for a game using a 3D engine.

But, if that’s what you want, try a setup that renders the character to a texture, and then uses the post processing / compositing pipeline to overlay the character. Or use stencil, similar to how you’d render target outlines in certain approaches.

If you’re a little more specific with what you actually want to see and what artifacts you’re willing to live with as a trade-off then it’s likely you’d get better responses for how to actually accomplish that. Currently, I don’t understand what it is you actually want, because it sounds self-contradictory, like “why can’t I render a green character with red pixels?”

What he means is the weapon mesh clipping through a wall, for example, or part of the player mesh doing the same. This option was available in UE3/UDK. There are many workarounds for this problem, but having a checkbox that says “render this mesh on top of everything” would be nice indeed.


Pretty straightforward. If the actor’s position in the world is before the object, then the actor renders in front.

This is done at camera level.
It prevents clipping.

It’s available to any forward-rendering engine as a base feature.
Even in Unreal, I do believe.

It’s part of how the scene compositing is done.

Think of it at the depth level.

You render the character in a separate depth.
You test the depth levels against the scene depth.
You adjust the image so that the actor’s hand can never clip into geometry and disappear.
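For reference, a minimal OpenGL-style sketch of that classic forward-renderer trick (DrawWorld/DrawCharacter are hypothetical placeholders, not any engine’s actual API): draw the world, clear the depth buffer, then draw the character so it only depth-tests against itself.

```cpp
// Sketch only: assumes a GL context and hypothetical DrawWorld()/DrawCharacter() helpers.
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
DrawWorld();

// Give the character a fresh depth buffer: it still self-occludes correctly,
// but world geometry can no longer hide it or clip into it.
glClear(GL_DEPTH_BUFFER_BIT);
DrawCharacter();
```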

Making stuff transparent is a hack.
Using a render target is a really bad hack.

Trying to use the depth buffer without compiling the engine from source with heavy modifications is a hack.

Having the engine just do it properly, in the render passes it already uses, without hacking it the way almost all the solutions we have for deferred do.
That would be something, wouldn’t it?

As far as how it works when objects are supposed to be “in front”:
99.9% of the time it really doesn’t matter.
This technique is generally used to prevent clipping issues in first person stuff.
You can’t see the pole between the camera and the player when you are in a first person view, can you?

Extending that to work correctly for third person too would also be really nice, but it is indeed more complex to deal with, since it isn’t a situation where you can just render everything on top and be OK.

Also, to answer this:

I exist. In a world where Epic DGAF about quality assurance… :laughing:
@AntiGravity knows.

Are you rendering with or without Z buffer?

And what does “Before” mean? Just an ordered render list?

You need the Z buffer to be “on” to avoid hands that swing behind the body to accidentally render on top of the body.

But if the Z buffer is on, then an object that may sort “behind” may end up drawing “on top” anyway.

The only way to avoid this in a single-pass system is to use stencil: turn on the Z buffer, sort objects front-to-back, and once a pixel is rendered, set the stencil bit and prevent further objects from being rendered there. (Well, that, or rendering an impostor per object to a separate offscreen target.)
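Roughly, that single-pass stencil setup looks something like this (OpenGL-style sketch; the sorted object list and Draw() call are hypothetical):

```cpp
// Assumes objects were sorted front-to-back beforehand (closest / highest priority first).
glEnable(GL_DEPTH_TEST);
glEnable(GL_STENCIL_TEST);
glStencilMask(0xFF);
glClear(GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

glStencilFunc(GL_EQUAL, 0, 0xFF);        // only draw where nothing has been drawn yet
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);  // mark covered pixels so later objects are rejected there

for (const Object& Obj : SortedFrontToBack)
    Obj.Draw();
```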

While this will “work,” it messes up how shadows work, and how ambient occlusion works, and even how anti-aliasing works on some hardware/APIs!

What it sounds like you want can’t be had while also wanting those things to work – it’s mathematically impossible. So of course Unreal won’t support it – they care more about those features than about the particular sorted renderer you’re imagining. They never said they would support it, they don’t target games that look like that, and using Unreal is the wrong tool if that’s what you want – again, assuming I understand the technical details of what you’re suggesting, because you’re not being particularly precise.

All of computer graphics is a hack. (Except possibly ray tracing. But that certainly won’t work with the kind of layer hack you’re talking about!) If you can get what you want by flipping some particular bit, then flip that bit.

There is no “properly” here. What you’re suggesting is logically impossible without giving up something else. You can add depth bias if you want (this is easy in the shader) but then you instead end up with hands poking out “this way” from walls in front of the character.

I see, this is the “separate depth pass for first person character overlay” request. That’s a totally different request than a request for “render X before Y in a general renderer.” Glad you at least agree that that one won’t work!

For that restricted case, there still is no zero-cost way of doing this while supporting the features of the engine, because you need to clear the Z buffer, or allocate a second Z buffer, and I still think ray tracing would break in this mode. Because you won’t get all the features to interact nicely in the general case, I can see why they might not add this as an option: instead you’d get questions like “why doesn’t feature X interact correctly with feature Y when I put in the overlay pass,” which would be just as legitimate.

If you need this particular way of rendering, I can see three ways to do it:

  1. Add another render pass of your own, with a new Z buffer (to prevent breaking all the other features that depend on reading Z.) This likely requires some surgery into the C++ rendering pipeline.
  2. Render the object AGAIN to a separate texture, using a separate scene setup, and composite the image in the postprocessor (a rough sketch of this follows below). This will use more resources, but it will likely work okay, because the “first render” will be used for reflections and shadows and ray tracing and such.
  3. Use the “Custom Depth” map, compare it to scene depth, and pull the output depth value up to just ahead of the scene depth where the Custom Depth equals the pixel depth but the scene depth is closer. This will screw up hierarchical Z, which is a bit of a bummer for something as big on screen as an FPS overlay, but what you want is fundamentally unmathematical, so some compromise is needed.

(Tried option 3, but it turns out only transparent materials can read Custom Depth, and they can’t then output a pixel depth offset.)
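For option 2, something along these lines might work; SetupOverlayCapture and the variable names are illustrative, but the SceneCapture properties are real, as far as I know:

```cpp
// Rough sketch: re-render only the first-person mesh into its own render target,
// then composite that texture over the main image in a post-process material.
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"

void SetupOverlayCapture(USceneCaptureComponent2D* Capture,
                         UPrimitiveComponent* FirstPersonMesh,
                         UTextureRenderTarget2D* OverlayTarget)
{
    Capture->TextureTarget = OverlayTarget;

    // Capture only the listed component, nothing else from the scene.
    Capture->PrimitiveRenderMode = ESceneCapturePrimitiveRenderMode::PRM_UseShowOnlyList;
    Capture->ShowOnlyComponents.Add(FirstPersonMesh);

    Capture->CaptureSource = ESceneCaptureSource::SCS_SceneColorHDR;
}
```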

You’ll still see the actor clip into the wall if you look in a mirror, though – that’s because you want to allow objects to be in a physically-impossible configuration. The better solution is to make collisions good enough that the objects will naturally not clip through geometry, and perhaps make the character detect when it’s pushing up against a wall and pull up/aside the weapon to avoid the penetration. That’s a much better overall solution anyway, IMO.

Actually, you should probably try the Custom Depth / Adjust-pixel-depth approach. It might just work! The comparison between pixel depth and custom depth is important to make Z testing still work for pixels that would otherwise clip through the wall in front of you.
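The comparison itself is trivial; as plain pseudologic (not actual material node API), it’s roughly:

```cpp
// Pseudologic only, not real Unreal material API. Where the wall (scene depth) is
// closer than the character pixel, pull the character's output depth to just in
// front of the wall; otherwise keep the true depth so the character still
// occludes itself. Bias is an arbitrary small offset.
float ResolveOverlayDepth(float CharacterPixelDepth, float SceneDepth, float Bias)
{
    return (SceneDepth < CharacterPixelDepth) ? (SceneDepth - Bias) : CharacterPixelDepth;
}
```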

Hey, just because you don’t understand how to do something, or why, it doesn’t mean there isn’t a proper way to do it.

Again, refer to the OpenGL link.
It’s commonplace and done by a thousand and one engines.

In an ideal situation it’s exactly this:
You render only the objects that belong to the specific depth level they are assigned to.

Anything that doesn’t have an assigned level is rendered first.

Anything assigned level 1 is rendered subsequently, then overlaid onto the prior result.
Shadows are then also mixed in based on the level of the rendering.

When done properly there’s barely any added cost to doing this. It’s not like you actively render everything twice;
you just take on slightly higher computational costs.
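One concrete way a renderer can implement those depth levels is by partitioning the depth range per level. An OpenGL-style sketch (DrawLevel0/DrawLevel1 are hypothetical placeholders):

```cpp
// Sketch only: assumes a GL context and hypothetical DrawLevel0()/DrawLevel1() helpers.
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Level 0 (no assigned level): pushed into the far 90% of the depth range.
glDepthRange(0.1, 1.0);
DrawLevel0();

// Level 1: rendered afterwards into the near 10%, so it always resolves on top
// of level 0 while still depth-testing against itself.
glDepthRange(0.0, 0.1);
DrawLevel1();
```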

Imagine this.

Except you literally separate the character for all the passes…

I asked about this back in 2014… but was told by a senior engine developer to not count on dirty hacks to avoid mesh clipping :upside_down_face:



Saw your original request and the boneheaded response you got.

It’s honestly baffling.

And no, using a render target doesn’t look quite as nice as properly compositing the buffers.
Otherwise everyone would just do that (since you could use sprites too, the cost would be minuscule), if only one could get it to look any good…

Also.
It’s not really a “cheap hack”.
It’s how games were, STILL ARE, and still should be actually made.

Just because one developer considers it a hack - without actually having any idea of what he’s even talking about - doesn’t mean it’s an actual hack…

What is a hack is hacking the rendering pipeline to overlay a render target… though maybe I should pull the source and try that :thinking:
(The reasoning being that you get total granular control of all the layers of both scenes, and modifying the pipeline is probably a faster route to compositing when done properly. Normal with normal, roughness with roughness, etc.)

You can project the vertices onto the camera plane. That way, they’re always in front of everything while still being in the same pass/scene.


This has the advantage that it works like anything else (can receive shadows, screenspace reflections, etc.). It even works with occlusion culling:

The only problem is that the first person mesh doesn’t cast self shadows. Though, if you look at a lot of first person games (COD for example), you’ll notice the first person meshes don’t cast self shadows, either. So this makes me think this is similar to how they do it (or at least suffers from the same problem).
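Roughly, the idea is to compress the mesh into a thin depth slice near the camera while keeping relative depth for self-occlusion. A simplified sketch of that as plain math (not the exact node setup; in Unreal it would go through World Position Offset, and the 0.1 scale is just an example value):

```cpp
#include "Math/Vector.h" // FVector

// Shrinks a world-space vertex toward the camera. Relative depth order inside the
// mesh is preserved (so it still self-occludes), but the whole mesh ends up so
// close to the eye that it can't reach into world geometry.
FVector ProjectTowardCamera(const FVector& WorldPos, const FVector& CameraPos, float Scale = 0.1f)
{
    return CameraPos + (WorldPos - CameraPos) * Scale;
}
```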


came here to suggest this but you beat me to it (and with a video again!) :slight_smile:

COD:MW does seem to have self-shadowing on the hands (Call of Duty : Modern Warfare - All Weapons and Equipment (YEAR 1) - Reloads, Animations and Sounds - YouTube - look at the reload anim with the X16 gun). what it lacks is a shadow from the character’s body onto the hands (i.e. seeing your own head’s shadow cast onto the hands and gun)

like you said this method works like anything else, but it does have other problems.
if you’re fine with having no shadows cast from the character onto the world it’s mostly ok (but then the player feels like a ghost). if you want them (COD:MW has a player->world shadow btw), with this technique the shadow gives away the effect (the shadows don’t fall in the right place and move around a lot as you rotate the camera). you can make the effect not affect the shadow pass, but then the self-shadowing gets broken. at that point, for the player->world shadow you might want an invisible but shadow-casting mesh for the full body, and the trickery needs to escalate further because of self-shadowing.
it also messes with receiving shadows from world objects and with the reflection capture location (but probably at an unnoticeable level). and IIRC SSAO also changes a bit (but not too noticeable either)

mind you, I still think it’s a fair method because no method is a silver bullet for this.
rendering on a separate render pass will have similar shadow issues to work around, and it would have an extra cost I wouldn’t be too comfortable with: the world behind the gun will still be rendered - that’s a lot of pixel shader work behind something that’s potentially blocking 10+% of the screen, unless it’s smart enough to do a depth pre-pass of all the render layers combined first


A team posted a postmortem on Gamasutra some years ago about how they built FPS weapons with animated sprites instead, with custom shaders to apply normal mapping, lighting and shadows to the sprites, which are always rendered on top…

But I fail to find the link to the gamasutra blog post.

Good stuff all.

The vertex solution is essentially similar to Panini projection.

If there’s a problem with shadow casting I would just work around it…
Make a character with 2 meshes.
An invisible mesh for casting shadows, and one for displaying.

The cost of rendering just a shadow shouldn’t affect things too much - then again on this engine even breathing the wrong way can be problematic right now, so take that with a grain of salt.

I don’t think it would cast on the gun though. Would have to try it to figure it out.
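If anyone wants to try the two-mesh setup in C++, it’s roughly this (component names are illustrative; I believe these UPrimitiveComponent setters exist, otherwise toggle the equivalent flags in the editor):

```cpp
#include "Components/SkeletalMeshComponent.h"

// Rough sketch of the two-mesh idea above; ShadowBody/DisplayMesh are illustrative names.
void SetupShadowProxy(USkeletalMeshComponent* ShadowBody, USkeletalMeshComponent* DisplayMesh)
{
    // Invisible full-body mesh: never drawn, but still contributes to the shadow passes.
    ShadowBody->SetHiddenInGame(true);
    ShadowBody->SetCastHiddenShadow(true);

    // Visible mesh: drawn normally, but doesn't cast, to avoid double shadows.
    DisplayMesh->SetCastShadow(false);
}
```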

Regardless I have to say this isn’t “the” solution.

Particularly extending into 2D as well… where rendering something “only on top” is much more commonplace.

And for 3rd person stuff it wouldn’t necessarily help with keeping swords on top of other meshes, would it? Never actually tried Panini-style stuff on 3rd person…

I have implemented the algorithm you’re talking about, in shipping systems, probably 15 years ago. Maybe I forgot some detail, but if you’re trying to make yourself hard to discuss with, you’re doing pretty well…

What I’m saying is that modern rendering won’t interact well with those older techniques. Unreal focuses on the modern rendering; hence, they don’t want the older techniques. From their point of view that makes sense, because those are their goals. If you have other goals, use OGRE or OpenSceneGraph or whatever. Plenty of life still left in those systems if those are your goals.

Except now your triangles need to be ordered in painter’s algorithm, because they all alias in Z.

You went from “what does this even mean” to “I have actually done this before”.
Nice…

Good catch, thanks!

Or just scale the projected vertices back out based on their z value (which is what I did).

But then you extend out into the world, where you may clip?

Maybe you push them back out by some smaller amount than their previous location, and rely on high near-camera Z precision to avoid Z-fighting? As long as you don’t get close enough to the wall/bush/whatever, that works, but foliage especially has no good “guaranteed non-clipping” distance. (In fact, pushing the camera into a bush will also render a bunch of clipping.)

Years back, we got another “solution” in the form of “near Z saturation” (so near pixel fragments would saturate to the near plane rather than get clipped), but that ended up with the painter’s problem, and then anything that happened to get really close to the camera exploded across the entire screen because of the projective singularity.

I prefer to do better camera/pawn management. Your mileage may vary.

What was unclear was your request, not the math.

But, you know what? You’re an amazing human being! Never stop being you!

Just because that’s what you prefer, it doesn’t mean there are no valid applications for choosing what renders where via priority.