Is it possible to create a shader that treats the edges of an object differently?

I’m not talking about a toon shader or Sobel edges, nor Fresnel. When hand-painting textures, we often like to add some wear and tear on the edges that stick out. I'm wondering if there’s a way to accomplish this via the shader system somehow.

Here’s a really exaggerated example of what I mean: figure10.jpg

Hi,

That depends on how you lay out your texture map. If the edges are always in a predetermined place within the UV space, you could create a shader that addresses those parts.
In your posted example I would lerp another texture over the base color and confine the effect to the edges of the texture via UV tweaking. (There is already a function that does that)…
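To make the idea concrete, here is a minimal offline sketch in Python/NumPy. It assumes the edges sit along the borders of the 0-1 UV tile, and the function name and `width` parameter are made up for illustration:

```python
import numpy as np

def edge_wear_lerp(base, wear, uv, width=0.05):
    """Blend a wear texture over a base color near the borders of
    UV space, assuming the mesh edges are laid out along the 0/1
    boundaries of the tile (the 'predetermined place' above).

    base, wear: (H, W, 3) float arrays sampled at the same points
    uv:         (H, W, 2) float array of UV coordinates
    """
    # Distance of each sample to the nearest border of the UV tile.
    d = np.minimum(np.minimum(uv[..., 0], 1.0 - uv[..., 0]),
                   np.minimum(uv[..., 1], 1.0 - uv[..., 1]))
    # 1 at the border, fading to 0 over `width` (the "UV tweaking").
    mask = np.clip(1.0 - d / width, 0.0, 1.0)[..., None]
    return base * (1.0 - mask) + wear * mask  # the lerp
```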

Cheers,
Klaus

Agreed - could be done that way for sure, or even extracted from a normal map or the like. Ideally, though, I’d like to find a solution that works dynamically and doesn’t depend on the UV unwrap being done in a particular way.

Hi,

I just don't see how a material could have any knowledge about the geometry (and with that, about edge locations) without relying on UV data.
In most cases, edges will be located arbitrarily.
You would need to compare the angles of adjacent faces to check whether they are coplanar or constitute an edge.
Since you said your textures are hand-drawn, I would just spend some extra time creating a black/white texture (or maybe put it in the alpha channel if that isn't used otherwise) which masks out the edges with the tear, and then lerp in a tear texture. This way you also keep the flexibility of how the tear looks. By swapping the tear masks you could progress through various stages of decay…
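For what it's worth, the adjacent-face comparison itself is easy to do offline. A rough Python/NumPy sketch for triangle meshes (helper names are made up):

```python
import numpy as np

def face_normal(verts, face):
    # Unit normal of a triangle given vertex positions and an index triple.
    a, b, c = (np.asarray(verts[i], dtype=float) for i in face)
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

def hard_edges(verts, faces, threshold_deg=30.0):
    """Return (edge, face_a, face_b) for every shared edge whose two
    faces diverge by more than the threshold, i.e. are not coplanar."""
    normals = [face_normal(verts, f) for f in faces]
    # Map each undirected edge to the faces that use it.
    edge_faces = {}
    for fi, face in enumerate(faces):
        for k in range(3):
            edge = tuple(sorted((face[k], face[(k + 1) % 3])))
            edge_faces.setdefault(edge, []).append(fi)
    cos_thresh = np.cos(np.radians(threshold_deg))
    edges = []
    for edge, fs in edge_faces.items():
        if len(fs) == 2 and np.dot(normals[fs[0]], normals[fs[1]]) < cos_thresh:
            edges.append((edge, fs[0], fs[1]))
    return edges
```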

Cheers,
Klaus

I wonder how Allegorithmic does it for the edge masks they generate in Substance Designer? I suspect they use local normal space somehow. Hmm… It’s an interesting problem.

They use the ambient occlusion and curvature maps for this kind of effect calculation… the curvature map is usually created from a normal map but can also be created in xNormal…

It depends on the situation; Substance uses the normal map to figure out where edges are. Other programs can figure it out based on an angle threshold, but it gets complicated when you have chamfered edges, since the angle threshold won’t catch them, and if they’re not in the normal map then that can’t catch them either.
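One common offline approximation, not necessarily what Substance does internally, is to treat curvature as how fast the normals in the normal map change between neighboring texels. A hedged Python/NumPy sketch:

```python
import numpy as np

def curvature_from_normal_map(normals):
    """Approximate a curvature mask from a decoded tangent-space
    normal map in the [-1, 1] range, shape (H, W, 3). Flat areas
    come out near 0; convex edges positive, concave negative
    (roughly the divergence of the X/Y slopes of the normals)."""
    nx, ny = normals[..., 0], normals[..., 1]
    # How quickly the normal tilts as we move across the texture.
    dnx_dx = np.gradient(nx, axis=1)
    dny_dy = np.gradient(ny, axis=0)
    return dnx_dx + dny_dy
```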


Hi,

I once wrote a little tool for edge detection where I compared not only adjacent faces but the angle sum over several consecutive faces, so I could catch the chamfering. Practically: walk over the model until the angle threshold is reached, then inspect how the angle divides up among the surfaces in between. With nested intervals this can be set up recursively.
Sure, it's not a “hello world” program, but not rocket science either :smiley:
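Since the original tool isn't public, here is only a rough Python/NumPy sketch of the walking idea: compare a start face against faces a few adjacency steps away, so a chamfer whose individual steps stay under the threshold still gets flagged. The adjacency and normal dictionaries are assumed to be built beforehand:

```python
import numpy as np

def chamfered_edges(adjacency, normals, threshold_deg=30.0, max_steps=4):
    """Flag face pairs whose accumulated normal angle crosses the
    threshold within a few adjacency steps, catching chamfers that
    a single adjacent-face test would miss.

    adjacency: dict face_id -> iterable of neighboring face_ids
    normals:   dict face_id -> unit normal (numpy array)
    """
    found = set()
    for start in adjacency:
        visited = {start}
        frontier = [(start, 0)]
        while frontier:
            face, steps = frontier.pop()
            if steps >= max_steps:
                continue
            for neigh in adjacency[face]:
                if neigh in visited:
                    continue
                visited.add(neigh)
                cos_a = np.clip(np.dot(normals[start], normals[neigh]), -1.0, 1.0)
                if np.degrees(np.arccos(cos_a)) >= threshold_deg:
                    # Angle threshold reached: record the span as an edge.
                    found.add(tuple(sorted((start, neigh))))
                else:
                    # Keep walking; the angle may still add up.
                    frontier.append((neigh, steps + 1))
    return found
```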

Cheers,
Klaus

Ahhh, yep that makes a ton of sense. Thank you for the insight!

That’d be nice to have in 3ds Max

Hi,

My original program used a custom mesh format.
I am now somewhat inclined to rewrite the application and use FBX (the ASCII version) for data exchange…
The output would then be a lerp texture that masks the edges.
If I actually do that, I'll make it freely available here.
I'll have to research the FBX format first…

Cheers,
Klaus

I wanted something like this: for example, grab the normals of a point and its surroundings, and if they are coplanar it returns 0, and the further they are from coplanar the closer the return value is to 1.

Using that number to lerp between two values would essentially allow you to identify edges.

It wouldn't be hugely flexible, but it would be easy in theory. I'd have to learn how to code a shader first, though, I think.
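That maps directly to a dot product between normals. A per-vertex Python/NumPy sketch, assuming the surrounding normals have been gathered beforehand, which, as discussed below, is exactly what a shader can't do on its own:

```python
import numpy as np

def edge_mask(vertex_normals, neighbor_normals):
    """0 where a vertex and its surroundings are coplanar, approaching
    1 the further the surrounding normals diverge from it.

    vertex_normals:   (N, 3) unit normals, one per vertex
    neighbor_normals: list of (K_i, 3) arrays of surrounding unit normals
    """
    mask = np.empty(len(vertex_normals))
    for i, (n, neigh) in enumerate(zip(vertex_normals, neighbor_normals)):
        # Dot product is 1 when coplanar and shrinks as normals diverge.
        mask[i] = np.clip(1.0 - (neigh @ n).min(), 0.0, 1.0)
    return mask  # feed this into a lerp between the two values/textures
```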

The main obstacle is the (relative) absence of mesh information, apart from the UV data. And that is a flat projection without any normal information.
So at the shader level there is insufficient data to detect edges.

I just gave the ASCII FBX format a quick look. It's not overly complicated.
I might finish the import parser today… :slight_smile:

So, in game, or renderer, whatever, the shader has no access to the mesh or normals?

I'm afraid so.
Which is the reason for normal maps, height maps, etc. Those and the UV maps are everything the shader knows (and needs to know).
Edge detection is less trivial than something one would do within a shader…

In my program I will use the vertex/polygon information to see which quads/tris are adjacent (they are stored independently).
I will also maintain a vector from the origin to each surface center (useful later).
From that node network I will compare the normal angles. Then I can get an angular delta between any two nodes in the network.
Then I collapse coplanar nodes together. Any immediate delta between two nodes above the threshold is already identified as an edge.
When I walk over several faces to catch chamfering, etc., I use the previously stored vectors to the centers. The theory behind this: the plane spanned by two vectors to surface centers should indicate the direction of travel over the model, right(?).
In order to catch all edges (above the threshold), this walk over the model needs to be done exhaustively on all the nodes in all adjacent directions (minus the reverse ones).
Then I have to match these found edges to their respective places in UV space.
Finally I can apply a parameterized gradient to both sides of each edge.
Which already reveals one requirement for the UV map: it needs to be non-overlapping. Yup, just like lightmaps, for similar but not identical reasons :slight_smile:
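The final two steps could look roughly like this in Python/NumPy, assuming the found edges have already been matched into UV space as 2D segments (function name and parameters made up):

```python
import numpy as np

def rasterize_edge_gradient(uv_edges, size=1024, width=0.03):
    """Rasterize edges (already matched into a *non-overlapping* UV
    layout as 2D segments) into a mask: 1 on an edge, falling off
    linearly over `width` UV units on both sides.

    uv_edges: list of ((u0, v0), (u1, v1)) segments in 0..1 UV space
    """
    ys, xs = np.mgrid[0:size, 0:size]
    uv = np.stack([(xs + 0.5) / size, (ys + 0.5) / size], axis=-1)
    dist = np.full((size, size), np.inf)
    for a, b in uv_edges:
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        ab = b - a
        denom = ab @ ab
        if denom == 0.0:
            continue  # skip degenerate segments
        # Closest point on the segment for every texel, clamped to it.
        t = np.clip(((uv - a) @ ab) / denom, 0.0, 1.0)
        closest = a + t[..., None] * ab
        dist = np.minimum(dist, np.linalg.norm(uv - closest, axis=-1))
    return np.clip(1.0 - dist / width, 0.0, 1.0)  # the lerp mask
```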

You see, for a shader it would be a bit much :smiley:

Cheers,
Klaus

Holy ****. Way beyond me

:slight_smile:

Sorry for resurrecting this! I am not a super trigonometry pro, nor am I a shader god like @RyanB (who might have an idea for this^^). But you can do slope detection in world space, and we have access to the object's vertex normals via the shader. Somehow this tells me it has to be possible to dynamically detect those “slopes” or “edges” or whatever you want to call them on objects, and use the result as a mask that you maybe project triplanarly onto the object? Then you don't have to mess with UVs etc. I mean, it works from the top for landscapes, so why would it not work for the sides as well, blending the results together?

Just throwing this in here cause I was hoping to find someone who has already solved this problem :smiley:
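To sketch what that might look like: the landscape-style slope mask generalizes to all three world axes. A Python/NumPy sketch of just the math (not a shader):

```python
import numpy as np

def triplanar_slope_masks(world_normals):
    """Extend the landscape-style slope mask to all three world axes:
    per axis, 0 where the surface faces that axis head-on, rising
    toward 1 as it tilts away. The facing strengths double as
    triplanar blend weights.

    world_normals: (N, 3) unit normals in world space
    """
    facing = np.abs(world_normals)   # facing strength per axis
    masks = 1.0 - facing             # 0 head-on, 1 side-on
    weights = facing / facing.sum(axis=-1, keepdims=True)
    return masks, weights
```

The catch, as the replies below point out, is that this flags slopes (surfaces tilted away from an axis), not creases: a perfectly flat side wall masks just as strongly as a worn corner, so on its own it won't isolate edges.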

In case anyone is looking, this might be what you’re looking for. Works well with assets that have a normal map - not sure how well it works for arbitrary geometry:

https://forums.unrealengine.com/community/community-content-tools-and-tutorials/117666-free-curvature-shader?144724-FREE-Curvature-Shader=

A fragment shader operates on a single pixel while a vertex shader operates on a single vertex. They have no way to access neighborhood data from arbitrary meshes.

The landscape stores its height map in a texture, so it can access information about its own topology.

You would need some pre-calculated data, either a texture map or some information encoded on the vertices as color or UV data, that the shader can use to figure out how close a pixel is to an edge.
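For example, one pre-calculation along those lines would be to bake “closeness to a detected edge” into a per-vertex value. A Python/NumPy sketch with made-up names:

```python
import numpy as np

def bake_edge_distance(verts, edge_verts, radius=0.1):
    """Pre-calculate per-vertex 'closeness to a detected edge' for
    storage in vertex color or a spare UV channel, where a shader
    can read it back.

    verts:      (N, 3) vertex positions
    edge_verts: (M, 3) positions of vertices on detected hard edges
    Returns (N,) values: 1 on an edge, fading to 0 at `radius`.
    """
    # Distance from every vertex to its nearest edge vertex.
    d = np.linalg.norm(verts[:, None, :] - edge_verts[None, :, :],
                       axis=-1).min(axis=1)
    return np.clip(1.0 - d / radius, 0.0, 1.0)
```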

In many 3D packages, you can use edges to modulate a texture. There are plenty of examples, from curvature shaders to convex/concave ambient occlusion + edge shaders. They are based on the angle at the edges (duh). Agreed, we'd need something similar in Unreal.