Is it possible to create a shader that treats the edges of an object differently?



    I'm not talking about a toon shader or Sobel edges, nor Fresnel. When hand-painting textures, we often like to add some wear and tear on the edges that stick out. I'm wondering if there's a way to accomplish this via the shader system somehow.

    Here's a really exaggerated example of what I mean:

    [Attached image: figure10.jpg, 16.5 KB]
    Trevor Lee

    #2
    Hi,

    That depends on how you lay out your texture map. If the edges always fall in a predetermined place within the UV space, you could create a shader that addresses those parts.
    In your posted example, I would lerp another texture over the base color and confine the effect to the edges of the texture via UV tweaking. (There is already a function that does that.)
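    A minimal offline sketch of that idea in Python/NumPy (the shader-side version would be the same math; the layout where the model's edges land on the borders of the [0, 1] UV square is an assumption, and the function name is illustrative):

```python
import numpy as np

def edge_wear_lerp(base, wear, uv, border=0.1):
    """Lerp a wear texture over the base color, confined to the borders
    of UV space with a smoothstep falloff.  Assumes a layout where the
    model's edges map to the edges of the [0, 1] UV square."""
    # Distance from each UV sample to the nearest border of UV space.
    d = np.minimum(np.minimum(uv[..., 0], 1.0 - uv[..., 0]),
                   np.minimum(uv[..., 1], 1.0 - uv[..., 1]))
    # Smoothstep: mask is 1 at the border and fades to 0 at `border` distance.
    t = np.clip(d / border, 0.0, 1.0)
    mask = 1.0 - t * t * (3.0 - 2.0 * t)
    return base * (1.0 - mask[..., None]) + wear * mask[..., None]
```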

    Cheers,
    Klaus

    Comment


      #3
      Agreed, it could be done that way for sure, or even extracted from a normal map or the like. Ideally, though, I'd like to find a solution that works dynamically and doesn't depend on the UV unwrap happening in a certain way.
      Trevor Lee

      Comment


        #4
        Hi,

        I just don't see how a material could have any knowledge of the geometry (and with that, of edge locations) without relying on UV data.
        In most cases, edges will be located arbitrarily.
        You would need to compare the angle of adjacent faces to check whether they are coplanar or constitute an edge.
        Since you said your textures are hand-drawn, I would just spend some extra time creating a black/white texture (or put it in the alpha channel, if that isn't used otherwise) that masks out the edges with the wear, and then lerp in a tear texture. This way you also keep the flexibility of how the tear looks: by swapping the tear masks you could progress through various stages of decay.
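        The mask-driven lerp described above is nearly a one-liner offline; here is a sketch (function and parameter names are illustrative), with a `decay` factor to step through stages of wear:

```python
import numpy as np

def apply_wear(base, tear, mask, decay=1.0):
    """Lerp a tear texture over the base color, driven by a hand-painted
    grayscale edge mask (which could live in the alpha channel).
    Scaling or swapping the mask via `decay` steps through wear stages."""
    m = np.clip(mask * decay, 0.0, 1.0)[..., None]  # per-pixel lerp weight
    return base * (1.0 - m) + tear * m
```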

        Cheers,
        Klaus

        Comment


          #5
          I wonder how Allegorithmic does it for the edge masks they generate in Substance Designer? I suspect they use local normal space somehow. Hmm... It's an interesting problem.
          Trevor Lee

          Comment


            #6
            They use the ambient occlusion and curvature maps for this kind of effect. The curvature map is usually created from a normal map, but it can also be created in xNormal.
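            A rough sketch of deriving a curvature map from a tangent-space normal map, assuming the common divergence-of-normals approximation (this is not necessarily Allegorithmic's exact method):

```python
import numpy as np

def curvature_from_normal_map(normals):
    """Approximate a curvature map from a tangent-space normal map
    (H x W x 3, components in [-1, 1]) as the divergence of the
    in-plane normal components: convex edges come out positive,
    concave ones negative."""
    dx = np.gradient(normals[..., 0], axis=1)  # d(nx)/dx across columns
    dy = np.gradient(normals[..., 1], axis=0)  # d(ny)/dy across rows
    return dx + dy
```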

            Comment


              #7
              It depends on the situation. Substance uses the normal map to figure out where edges are. Other programs can figure it out based on an angle threshold, but it gets complicated when you have chamfered edges, since the angle threshold won't catch them, and if they're not in the normal map then that can't catch them either.

              Comment


                #8
                Hi,

                but it gets complicated when you have chamfered edges
                I once wrote a little tool for edge detection where I compared not only adjacent faces but the angle sum over several consecutive faces, so I could catch the chamfering. Practically: walk over the model until the angle threshold is reached, then inspect how the angle divides up among the surfaces in between. With nested intervals this can be set up recursively.
                Sure, it's not a "hello world" program, but not rocket science either.
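                The angle-sum walk above, sketched for the simplified case of a linear strip of consecutive face normals (the nested-interval recursion is omitted, and all names are illustrative):

```python
import numpy as np

def find_chamfered_edges(normals, threshold_deg=45.0, window=4):
    """Compare each face not only with its neighbour but with faces up
    to `window` steps ahead, so a sharp bend spread over several small
    chamfer faces is still caught.  `normals` is a list of unit face
    normals along a strip of consecutive faces."""
    edges = []
    for i in range(len(normals)):
        for j in range(i + 1, min(i + 1 + window, len(normals))):
            cosang = np.clip(np.dot(normals[i], normals[j]), -1.0, 1.0)
            if np.degrees(np.arccos(cosang)) >= threshold_deg:
                edges.append((i, j))  # threshold reached between faces i..j
                break
    return edges
```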

                Cheers,
                Klaus

                Comment


                  #9
                  Originally posted by darthviper107 View Post
                  It depends on the situation. Substance uses the normal map to figure out where edges are. Other programs can figure it out based on an angle threshold, but it gets complicated when you have chamfered edges, since the angle threshold won't catch them, and if they're not in the normal map then that can't catch them either.
                  Ahhh, yep that makes a ton of sense. Thank you for the insight!
                  Trevor Lee

                  Comment


                    #10
                    Originally posted by KVogler View Post
                    Hi,


                    I once wrote a little tool for edge detection where I compared not only adjacent faces but the angle sum over several consecutive faces, so I could catch the chamfering. Practically: walk over the model until the angle threshold is reached, then inspect how the angle divides up among the surfaces in between. With nested intervals this can be set up recursively.
                    Sure, it's not a "hello world" program, but not rocket science either.

                    Cheers,
                    Klaus
                    That'd be nice to have in 3ds Max

                    Comment


                      #11
                      Hi,

                      That'd be nice to have in 3ds Max
                      My original program used a custom mesh format.
                      I am now somewhat inclined to rewrite the application and use FBX (the ASCII version) for data exchange.
                      The output would then be a lerp texture that masks the edges.
                      If I really do that, I will make it freely available here.
                      I'll have to research the FBX format first...

                      Cheers,
                      Klaus

                      Comment


                        #12
                        I wanted something like this: for example, grab the normals of a point and its surroundings; if they are coplanar it returns 0, and the further they are from coplanar, the closer the return value gets to 1.

                        Using that number to lerp between two values would essentially let you identify edges.

                        It wouldn't be very flexible, but it would be easy in theory. I'd have to learn how to code a shader first, though.
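                        That coplanarity idea can be sketched offline (names are illustrative; in a real shader you would sample neighbouring normals instead):

```python
import numpy as np

def edge_factor(n, neighbors):
    """0 where the point normal `n` and its surrounding unit normals
    are parallel (coplanar surface), approaching 1 as they diverge."""
    dots = np.clip([np.dot(n, m) for m in neighbors], -1.0, 1.0)
    return float(np.clip(1.0 - np.mean(dots), 0.0, 1.0))

def lerp(a, b, t):
    """Blend two values by the edge factor."""
    return a * (1.0 - t) + b * t
```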

                        Comment


                          #13
                          but I'd have to learn how to code a shader I think
                          The main obstacle is the (relative) absence of mesh information, apart from the UV data. And that is a flat projection without any normal information.
                          So at the shader level there is insufficient data to detect edges.

                          I just gave the ASCII FBX format a quick look. It's not overly complicated.
                          I might finish the import parser today...

                          Comment


                            #14
                            So, in a game or renderer, whatever, the shader has no access to the mesh or normals?

                            Comment


                              #15
                              So, in a game or renderer, whatever, the shader has no access to the mesh or normals?
                              I'm afraid so.
                              That is the reason for normal maps, height maps, etc. Those and the UV maps are everything the shader knows (or needs to know).
                              It's not something trivial you could do within a shader...

                              In my program I will use the vertex/polygon information to see which quads/tris are adjacent (they are stored independently).
                              I will also maintain a vector from the origin to each surface center (useful later).
                              From that node network I will compare the normal angles. That gives me an angular delta between any two nodes in the network.
                              Then I collapse coplanar nodes together. Any immediate delta between two nodes above the threshold is already identified as an edge.
                              When I walk over several faces to catch chamfering, etc., I use the previously stored vectors to the centers. The theory behind this: the plane constituted by two vectors to surface centers should indicate the direction of travel on the model, right(?).
                              In order to catch all edges (above the threshold), this walk over the model needs to be done exhaustively over all nodes in all adjacent directions (minus the reverse ones).
                              Then I have to match the found edges to their respective places in UV space.
                              Finally I can apply a parameterized gradient to both sides of each edge.
                              Which already reveals one requirement for the UV map: it needs to be non-overlapping. Yup, just like lightmaps, for similar but not identical reasons.

                              You see, for a shader it would be *umpf*....
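                              The adjacency-and-threshold half of the pipeline above can be sketched like this (triangle soup input; the multi-face walk for chamfers and the UV gradient baking are omitted, and all names are illustrative):

```python
import numpy as np

def face_normal(verts, tri):
    """Unit normal of a triangle given by three vertex indices."""
    a, b, c = (np.asarray(verts[i], dtype=float) for i in tri)
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

def hard_edges(verts, tris, threshold_deg=30.0):
    """Rebuild adjacency from shared vertex pairs (faces are stored
    independently), then flag every shared edge whose dihedral angle
    meets the threshold."""
    owners = {}  # sorted vertex pair -> (face index, face normal)
    edges = []
    for fi, tri in enumerate(tris):
        n = face_normal(verts, tri)
        for k in range(3):
            key = tuple(sorted((tri[k], tri[(k + 1) % 3])))
            if key in owners:
                _, other_n = owners[key]
                cosang = np.clip(np.dot(n, other_n), -1.0, 1.0)
                if np.degrees(np.arccos(cosang)) >= threshold_deg:
                    edges.append(key)
            else:
                owners[key] = (fi, n)
    return edges
```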

                              Cheers,
                              Klaus
                              Last edited by KVogler; 11-28-2014, 06:51 PM.

                              Comment
