I'm afraid so.
Which is the reason for normal maps, height maps, etc. Those, plus the UV map, are everything the shader needs to know.
It's less trivial than something one would do within a shader…
In my program I will use the vertex/polygon information to see which quads/tris are adjacent (they are stored independently).
I will also maintain a vector from the origin to the surface center (useful later).
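Roughly like this (a Python/NumPy sketch just to illustrate the idea; the names and the data layout are made up, my actual program looks different):

```python
import numpy as np
from collections import defaultdict

def build_face_graph(vertices, faces):
    """vertices: (N,3) array; faces: list of vertex-index tuples (tris or quads)."""
    centers = []                        # vector from origin to each face center
    normals = []
    edge_to_faces = defaultdict(list)   # (v_lo, v_hi) -> faces sharing that edge

    for fi, face in enumerate(faces):
        pts = vertices[list(face)]
        centers.append(pts.mean(axis=0))
        # face normal from the first two edges (good enough for planar faces)
        n = np.cross(pts[1] - pts[0], pts[2] - pts[0])
        normals.append(n / np.linalg.norm(n))
        for a, b in zip(face, face[1:] + face[:1]):
            edge_to_faces[tuple(sorted((a, b)))].append(fi)

    # adjacency: faces that share a mesh edge become neighbours in the node network
    adjacency = defaultdict(set)
    for shared in edge_to_faces.values():
        for a in shared:
            adjacency[a].update(f for f in shared if f != a)
    return np.array(centers), np.array(normals), adjacency
```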
From that node network I will compare the normal angles. That gives me an angular delta between any two nodes in the network.
Then I collapse coplanar nodes together. Any immediate delta between two adjacent nodes that is above the threshold is already identified as an edge.
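Something along these lines (again simplified; the two thresholds are illustrative values, not what I actually use):

```python
import numpy as np

COPLANAR_EPS   = np.radians(1.0)    # treat faces within ~1 degree as coplanar
EDGE_THRESHOLD = np.radians(30.0)   # anything sharper than this is an edge

def normal_delta(n_a, n_b):
    """Angle between two unit normals, in radians."""
    return np.arccos(np.clip(np.dot(n_a, n_b), -1.0, 1.0))

def classify_neighbours(normals, adjacency):
    coplanar_pairs, edge_pairs = [], []
    for a, neighbours in adjacency.items():
        for b in neighbours:
            if a < b:                       # visit each pair only once
                delta = normal_delta(normals[a], normals[b])
                if delta < COPLANAR_EPS:
                    coplanar_pairs.append((a, b))   # candidates to collapse
                elif delta > EDGE_THRESHOLD:
                    edge_pairs.append((a, b))       # immediate hard edge
    return coplanar_pairs, edge_pairs
```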
When I walk over several faces to catch chamfering etc., I use the previously stored vector to the center. The theory behind this: the plane that is constituted by the two vectors to the surface centers should indicate the direction of travel on the model, right(?)
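The math behind that is just a cross product; sketched here with made-up helper names:

```python
import numpy as np

def travel_plane_normal(center_a, center_b):
    """Normal of the plane spanned by the two origin-to-center vectors."""
    n = np.cross(center_a, center_b)
    return n / np.linalg.norm(n)

def stays_on_course(plane_normal, center_next, tolerance=0.1):
    """True if the next face center lies (nearly) in the travel plane."""
    c = center_next / np.linalg.norm(center_next)
    return abs(np.dot(plane_normal, c)) < tolerance
```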
In order to catch all edges (above the threshold), this walk over the model needs to be done exhaustively from all nodes, in all adjacent directions (minus the reverse ones).
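As a sketch, reusing the helpers from the snippets above (MAX_STEPS and the whole walk are simplified for illustration, not how the final tool will be structured):

```python
MAX_STEPS = 4   # how many faces a chamfer may span before we give up

def walk_for_edges(centers, normals, adjacency, threshold):
    edges = set()
    for start in adjacency:
        for first in adjacency[start]:
            plane_n = travel_plane_normal(centers[start], centers[first])
            accumulated = normal_delta(normals[start], normals[first])
            prev, cur = start, first
            for _ in range(MAX_STEPS):
                if accumulated > threshold:
                    edges.add((start, cur))     # edge spread over a chamfer
                    break
                # keep going in the same direction, skipping the reverse step
                candidates = [n for n in adjacency[cur]
                              if n != prev and stays_on_course(plane_n, centers[n])]
                if not candidates:
                    break
                prev, cur = cur, candidates[0]
                accumulated += normal_delta(normals[prev], normals[cur])
    return edges
```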
Then I have to match the edges I found to their respective places in UV space.
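Roughly like this, assuming one UV per face corner (face_uvs[face][corner]); your UV layout may differ:

```python
def edge_in_uv_space(face_a, face_b, faces, face_uvs):
    """Return the UV segment of the mesh edge shared by two adjacent faces."""
    shared = [v for v in faces[face_a] if v in faces[face_b]]
    if len(shared) != 2:
        return None                      # not actually adjacent along an edge
    # look up the UVs of the shared vertices inside face_a's corner list
    ia = faces[face_a].index(shared[0])
    ib = faces[face_a].index(shared[1])
    return face_uvs[face_a][ia], face_uvs[face_a][ib]
```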
Finally I can apply a parameterized gradient to both sides of that edge.
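A brute-force sketch of what I mean by the gradient (per-texel distance to the UV segment; a real bake would only touch texels near the edge):

```python
import numpy as np

def bake_edge_gradient(mask, uv_a, uv_b, width=0.02):
    """mask: (H,W) float array in [0,1]; uv_a, uv_b: UV endpoints in [0,1]^2."""
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    uv = np.stack([(xs + 0.5) / w, (ys + 0.5) / h], axis=-1)
    a, b = np.asarray(uv_a), np.asarray(uv_b)
    ab = b - a
    # distance from each texel to the segment a-b
    t = np.clip(((uv - a) @ ab) / (ab @ ab), 0.0, 1.0)
    dist = np.linalg.norm(uv - (a + t[..., None] * ab), axis=-1)
    # linear falloff on both sides of the edge, parameterized by 'width'
    np.maximum(mask, np.clip(1.0 - dist / width, 0.0, 1.0), out=mask)
```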
Which already reveals one requirement for the UV map: it needs to be non-overlapping. Yup, just like lightmaps, for similar but not identical reasons.
You see, for a shader it would be umpf…
Cheers,
Klaus