Can't figure out how to use Vertex Color instead of VertexNormalWS to create rim offset and offset outline based on Stylised paint shader breakdown

I’m following this tutorial to create a rim offset. It partially substitutes Vertex Color for VertexNormalWS, for performance reasons, to create a fresnel, and I’ve followed the instructions and baked the normal map.




I tried exporting it as an .fbx file with those settings in Blender. I think that might be the issue, but I’m not sure? Or it could be my import settings, but I thought I’d want to replace the vertex color with the one in the FBX, which seemed to make sense since we’re using the vertex color from that model.

image
I’m trying to create the fresnel but it doesn’t look right. When I import the mesh into the editor and check the vertex color, it doesn’t match?

I’m comparing the Vertex Color results against the cheap fresnel method the author used, and it certainly doesn’t look like a fresnel. Previewing with the Multiply node doesn’t change anything either.


image

What it’s supposed to look like, relatively. Specifically the rim lighting.


image

Thank you to anyone who replies. Not sure if this is the right place to ask but I think it makes sense considering the topic.

So I realised that the video had the Swizzle G set to -Y, so I changed that and re-exported. It still doesn’t look right.
image
image

Ok, I think I might have gotten something closer.
image


Don’t use TransformVector. I’m not sure if this is why, but my theory is that the normal is baked in object space, not tangent space, so the calculation will be incorrect if you try to use that node.


So I ended up not using TransformVector because the vertex normal colors that I baked in Blender were already in world space.


From my understanding, you’re distorting how the texture is sampled based on the camera’s view direction, specifically the horizontal (X) and depth (Z) information. The CameraTextureScale param is meant to control the intensity of this distortion.
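If it helps to see the idea outside the node graph, here is a minimal sketch of that distortion as I understand it, assuming the X and Z components are simply added onto the UV before sampling (the function and parameter names are illustrative, not the actual nodes):

```cpp
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Nudge the sample position by the camera direction's horizontal (X) and depth (Z)
// components, scaled by the CameraTextureScale-style parameter.
Vec2 DistortUV(Vec2 baseUV, Vec3 cameraVectorWS, float cameraTextureScale)
{
    return { baseUV.x + cameraVectorWS.x * cameraTextureScale,
             baseUV.y + cameraVectorWS.z * cameraTextureScale };
}
```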

For the gradient map, although I had an alpha channel, inputting from A didn’t work; I had to use RGB or RGBA as the input from the texture sample. I don’t know why that’s the case though. I’ve tried several texture maps, including a normal map.

Result:
image

For my case though, I think I might do a 1-x to invert the blacks and whites. I don’t entirely get why de Laubier originally had it the way they did, but it might have to do with the material being two-sided and having the outline offset.

You have some sort of massive confusion over what’s what.

World Space - for anything - means in the final world. It is often relative to the camera, particularly for normals.

Baking something in Blender (or any DCC) is not “world” anything. It’s local. Always.
Period.

Translating something from local to world requires some work. Some sort of computation.
Does that cost less than what the engine’s node precomputes? That’s something to be benchmarked. Usually, no. But there could be exceptions.

Second thing of note.
Vertex Paint values go from 0 to 1.

Subtracting .5 and multiplying by 2 means you are transforming the range of the final result into something that goes from -1 to 1:

(1 - .5) * 2 = 1
(0 - .5) * 2 = -1
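As a tiny sketch of that remap (assuming the usual subtract-then-multiply order):

```cpp
// Expand a [0, 1] vertex color channel into the [-1, 1] range described above.
float RemapToSigned(float vertexColorChannel)
{
    return (vertexColorChannel - 0.5f) * 2.0f;  // 0 -> -1, 0.5 -> 0, 1 -> +1
}
```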

This is usually normal/desired/ even required for vector fields and the like.

What that equates to for the vertex paint is anyone’s guess though.

I have utilized vertex color as data for just about anything, including vector displacement calculations that required translating the value out to ±1.
Unless you understand the rest of the math, you can’t really know if this is desired or not.

Adding a TransformVector to this value after adjusting its range is probably incorrect - the value isn’t in tangent space (well, I suppose it mostly depends on what was baked and how).

Third thing.
VertexNormalWS is the value of the normal (range 0 to 1, not -1 afaik) in World Space.
So, relative to the orientation of the object.

Picture this as turning on the face normals view in Blender.
This node provides that exact value, but for each of the vertices that compose the faces (tris only in-engine).

It allows you to inflate and deflate an object while maintaining its exact shape:
Scalar * VertexNormalWS -> World Position Offset
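A rough sketch of that inflate/deflate idea, assuming the scalar is just multiplied onto the world-space normal and fed to World Position Offset (names here are illustrative):

```cpp
struct Vec3 { float x, y, z; };

// Push each vertex along its world-space normal; positive values inflate,
// negative values deflate, and the silhouette keeps the same overall shape.
Vec3 InflateOffset(Vec3 vertexNormalWS, float amount)
{
    return { vertexNormalWS.x * amount,
             vertexNormalWS.y * amount,
             vertexNormalWS.z * amount };
}
```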

The cost is relative to the number of tris in the object. More tris = more cost.
Vertex paint is also per vertex and it would have a more or less identical cost.

So the claim that using vertex paint is somehow cheaper for this is probably false.

If you were to use WorldPositionOffset - which is the value of each rendered pixel - then the cost would indeed be cheaper while limiting it to just the vertices.
But Vertex Paint vs Vertex Normal likely has no difference in cost at all.

Hope that helps.

Of note:
Using vertex paint, you completely lose the rotation of the object in world space from the normals.
This may not matter - really mostly depends on the geometry of an object.
On a sphere, like the example, it will not. Because sphere math is its own thing.
On complex geometry it very likely will.

Thank you for responding and clarifying several things, including clearing up my misconceptions.

I’d modify my reply to say “look below for explanation” but I can’t now.

I do have another question, since you definitely know more about shaders than me. I also tested the cheap fresnel nodes the article used, and the instruction count actually does seem lower than the version used in the final Stylised Paint Shader.

image

Do you get why they did it the way they did? Is it essentially because of the edge detection issue? Or could you bypass the edge detection issue without having to avoid flat surfaces and bake the smooth normals into vertex colors?

This makes me wonder, then, whether I should even use the vertex paint at this point for rim erosion (again, my confusion comes from why it’s used in the first place, even after reading it - I’m an amateur with materials and textures). It’s not cheaper, like you said, and I could possibly use more complex geometry.

Just cost, probably.
Using WPO is expensive, as it would be per pixel.
It will look best on reflecting shaders - glass, water, mirrors, metal, etc. - but the cost of it may not be worthwhile for all objects.

The result is obviously pretty bad, and being per vertex, something has to be done to expand or filter the value into a somewhat cohesive map.

They chose to use power for this - though the result isn’t clamped, so out of range values will likely cause issues.
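As a small sketch of that filtering step, assuming a clamp-then-power order that would avoid the out-of-range issue mentioned:

```cpp
#include <algorithm>
#include <cmath>

// Sharpen a per-vertex value with a power, clamping first so out-of-range
// inputs can't blow up the result.
float FilteredMask(float rawValue, float exponent)
{
    float clamped = std::clamp(rawValue, 0.0f, 1.0f);
    return std::pow(clamped, exponent);
}
```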

My guess is that because they are doing this on a sphere they can reduce the cost with no visual issues…

Thank you for the information.

Instead of using vertex colors, would you basically export a painted normal map and then use that instead of VertexNormalWS or VertexColorNormals? It should give you more control too, right? From my understanding, the point is to convert the VertexNormalWS/VertexColorNormals into a normal map, based on what you wrote.

Am I correct that if I went for screen-aligned instead of view-aligned, there would be rim erosion no matter the angle the camera is facing, like the blue lines I drew surrounding the cube? If not, how exactly do you achieve that effect?

If so, is there any way to have it be screen-aligned instead of view-aligned? I’m assuming that instead of finding the dot product between the CameraVectorWS and the VertexNormalWS, it’s going to have to calculate something different. Or maybe some kind of ‘rotating the texture to always face the camera’ scenario? Because wouldn’t we still need that calculation for the fresnel no matter what?

Also, about the current dot product calculation…wouldn’t the result be one? I inputted it in a calculator.

I’m looking at this and I’m not seeing the range for CameraVectorWS or CameraPositionWS but are they both [0, 1]?

Again, I’m an amateur at shaders and bad at math, so sorry if this is obvious.

Because you’ve said that vertex paint and vertex normal won’t have a difference in cost, I went with the vertex normal for now. I’ll change the nodes for screen alignment if that’s actually what I’m looking for.


image

Can you explain a bit more about the effect you want in the end?

Also your rules for it.

For instance, if I wanted two edges on an object, I’d just create a mesh with an outer shell and add a transparent-like material to the outer shell.

Cost wouldn’t matter too much if that’s the effect I need.

Gameplay over anything else - provided it’s within the FPS range I need on the device with the lowest specs.

Thank you.

So the effect I’m trying to achieve is something like


but with a texture for the rim erosion. I just want it so that wherever the camera is facing, there’s that eroded effect at the edges of the object or character. I hope it makes sense now that you see pictures of it? The problem is that I don’t want the faces to disappear depending on where I’m turning.



The erosion should always be at all the outer edges visible to the camera, no matter what, and I’d like to be able to control the intensity of the erosion (how much it eats into the mesh), so I’m assuming there’d be a scalar parameter?

I already have the texture; it’s just that I don’t know how to get the effect other than trying to make a fresnel, plugging it into the opacity mask, and that’s it. I looked up Sobel edge detection but I’m not sure if I’m supposed to use that here.

If possible I’d like to make an outline offset too, like in the picture below:


Preferably, I want the outline offset to only be visible when the mouse clicks on the object or character or whatever. If possible, I’d also like to change the offset distance as well as the size of the outline.

Really, there is no reason not to use a postprocess effect for at least half of this.

The outer outline is trivial in a PP.
Near impossible in a shader (the shader doesn’t know what is around the object).

As far as the effect itself.

It’s not exactly Edge Detection.
That would be figuring out where the hard edge is, more so than figuring out where the mesh ends.

Fresnel is nothing but the dot of the camera vector vs. the normal of the object’s individual tris (VertexNormalWS).

Basically, you are looking to isolate values of 1, as that is where the object is facing the camera straight on. (Or the camera is facing the tris squarely? Semantics.)

That won’t necessarily highlight edges - you could have a plane facing the camera straight on. The whole plane would be solid colored.

It will work for a sphere 100% of the time, like edge detection, because every point on a sphere is equidistant from its center.
Your normals will always have one that’s more or less 99% facing the camera, and the rest around it fading out.
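For reference, a minimal sketch of that cheap fresnel idea, assuming the usual one-minus-dot construction with an optional power to tighten the rim (the exponent is my addition, not necessarily what the article uses):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// The dot product is ~1 where the surface faces the camera head-on and falls
// toward 0 at grazing angles; inverting it highlights the rim.
float RimMask(Vec3 vertexNormalWS, Vec3 cameraVectorWS, float exponent)
{
    float facing = std::clamp(Dot(vertexNormalWS, cameraVectorWS), 0.0f, 1.0f);
    return std::pow(1.0f - facing, exponent);
}
```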

The character mesh will work somewhat - the chin/neck area may have issues since the normals can become opposite on long chins.

I’ll try and think up a way to make something similar, without using a PP.

However, you may benefit from opening up a dedicated topic out in Rendering; others may have simpler solutions to this already thought out…

I have to say that the best way to achieve what you want is exactly what you have drawn…

A solid model in the center, with a solid shader.
An outer shell that’s got the panning corroded material all over it.

If it’s black over black in an unlit material, the two will be rendered out as a unique merge.

If the material is lit it becomes problematic as the outer shell lighting may differ or the inside may bounce light onto the shell, so that aspect would need further PostProcessing.

Additionally, with two shells you could put different stencils on each, and just use a post process to color both the same, making a cartoon-like effect on top of any render…
Or you could render the inside of it always on top, which may just achieve the effect you want.


Alright, I’ll do that in the future, thank you MostHost.

I’ll probably just do the post-processing material for the outline then. I think the reason the article didn’t do that was fear of the cost, but in my case I can’t see it being a big deal.



So something like these so far? Again, sorry for not catching on right away. Also, I’ll open a new discussion thread instead of continuing this one - tomorrow, though, since I’m almost out of time today and another busy week is coming up. I’ll also look more into stencils and research them; I’m not good with materials/shaders. So I appreciate your explanations and time. Thank you.

You can see it works ok for the sphere.

I think the double shell method can work similarly.

You can shift the inner object’s vertices in a material to inflate/deflate and create the same effect - at a higher cost, of course.

You can do this on a single object, and with a single material using different UVs for the inside and outside shells…


How would you grab the inner object vertices? Via the object position, modifying that, and then plugging it into the world position offset? I just don’t know how to picture this.

The UV mapped to the inner shell gives you the starting area; you then just multiply by VertexNormalWS and a scalar variable.
Plug that into WPO, and you’ve got your inner inflatable/deflatable mesh…


Ok, I’ll try starting to work on that tonight if I can. This week’s been busy. Thanks a lot for your patience. I appreciate it.

Sorry for taking so long to respond. As I’ve said, been a busy week. I’m assuming that’s what it’s supposed to look like?
image
image

How would the outer shell work then? Would you have to use custom data and have two instances of the mesh, one for the inner shell and one for the outer shell? You said you could use stencils for the rim erosion too.

image
Also, I made an outline using PP, although it doesn’t match the outline created in the article, which makes sense. It’s detecting edges; it’s not going to be able to detect pixels if they’re invisible.

Does that mean that it would have to use custom stencils, apply that stencil onto the inner shell (which could possibly be an instance) and then scale the PP outline from there somehow? Like wouldn’t you still need to grab the normal to get the direction the edge is facing? :thinking:

I know you said the rim erosion wouldn’t be dependent on edge detection, but from what I’ve researched you can get edge wear using Sobel or some other edge detection algorithm (can’t remember where I found it, but I saw something similar about edge detection in general here), and I already have an outline PP material. Theoretically, couldn’t I just grab the outline I made and, instead of having a color there, somehow apply the rim erosion gradient to it, then play around with its alpha and scale to give that effect?

Otherwise I’m not sure how you’d make the outer shell the way you described it? Unless I’m supposed to have two instances then: one for the inner shell, one for the outer shell.

Thank you.

Stencils and PP are only a need if you have an issue with how the object lights up.

Avoid them if not required artistically.

Use custom UVs for the mesh.
Then use the appropriate UV channel to select the inner or outer part of the mesh.

You can easily achieve this with a single shader by just creatively using the UV channels.

Think of it as using 2 different materials, one for the inner shell, one for the outer shell.
Instead of that, just UV channels.

Re what VertexNormal * Scalar should look like:
Yes, but the engine cube shouldn’t be falling apart like that when you inflate or deflate. The edge should be a result of something else you are doing.

EDIT:
Actually, the stencil/depth buffer may be needed to make the Inside object appear on top of the Outside shell.
This does depend on how you want the object to look. But if you need to be able to manually sort what’s on top, then stencil values can help.

The non-PP alternative is to subtract UV0 (the inner shell) from UV1 (the outer shell) somehow.
I’d have to get back to you on how exactly, but off the top of my head you can probably vertex sort the object off the UV to isolate the two parts…

The less complicated way would be to make the outer shell only half a shell.
So in the case of the sphere, you have a sphere for the inside and half a sphere for the outside (the rotation of which is kept camera-relative via the shader).
This too would work for most (but not all) objects’ geometry…


Thanks for replying, I appreciate it. So far I’ve used Custom Data as kind of a ‘switch’ (and I definitely don’t think that’s the intended purpose), and I checked with cubes - it does work. It tanks shader complexity though, which makes sense to me. But I think this method is meant to lower draw calls.

I feel like an idiot for asking but is that how shell texturing for fur and stuff like that works? I’ll look more into shell texturing, I understand the logic behind it but not really how to make it (and in my scenario I wouldn’t need a random seed). I’ll try playing around with the different UV channels and looking into vertex sorting.

I thought the inner shell would be the solid part and the outer shell would be the rim erosion. So the rim erosion would be the one that’s inflated, from what I understood with this effect.

Would you just take the base color texture, grab a component mask, split it into RGBA, use one of the channels (B, because that’s the height?), use the gradient as the Alpha/Opacity, and modify the base color texture to have parts be invisible based on that? Wouldn’t I have to use a Lerp node and plug the gradient in as the Alpha? But then what would be A?

Also, if I’m unable to use a single shader, I’ll use two different materials. I can imagine using custom stencil buffers for PP outlines on the inner shell material, then somehow ‘offsetting’ that outline based on it. It’s hard for me to imagine how custom stencil buffers would work if it’s one material and different UV channels, unless that is possible. I’ll try working on it more today. These couple of weeks are/will be busy, and this is a learning experience for me. :slight_smile:

https://aqu.hatenablog.com/entry/2018/08/12/070306

This does look like what I was attempting to go for, just without the offset outline, and you can see the rim erosion on the trousers as well, but that part’s fine I think. I wasn’t familiar with the name of the method, so sorry about that (and also for the fact that it took me this long to piece together how shells work).

After thinking a bit.

The effect you want is easily achievable with 2 shells, IF you set the normals for the outside shell to be inverted - inside only.

I’ll give you a quick Blender-made example if I can…

PP does its thing. UVs are on the object. The two are unrelated.
You can tell a mesh to belong to whatever stencil you want.
Most common use for this - with completely separate meshes in this case - is to hide water from inside a boat.

Anyway, let’s get to the example. Here’s the model’s screenshot:

The forum upload smushes this into a 690px image, so use it as a reference more than anything.

Weird model with complex geometry to the right.
It’s got 2 shells; the outer one has inverted normals.
The inner one’s normals point correctly.

To make the mesh you have to select all the faces, then duplicate, then extrude along face normals. This process is not without issue on complex geometry - and it simulates what the inflate shader will do, so you can deflate fine. Inflate, not so much. Depends on the way the model is made.

On the left you can see the UV (a mess)
and the material.

Jumping into the engine now.
The model as-is, without any material:

That’s already essentially what you want, so let’s throw together a quick shader for it.
Here is the result:

Here are the bits of the shader:
Isolating UVs - this is done based on how the UV was mapped in the DCC.


NOTE: the bottom example uses an IF instead of a lerp, in case you need more control than lerp offers (it might also cost less; you’d have to bench test).

These are plugged right back into custom UV channels 0 and 1.
That’s important for re-using the nodes like I did, but you can probably also just plug directly into the textures. The savings of doing this over a lerp are minimal.

Here you have the part that allows you to isolate the 2 shells:

This plugs back into custom UV 2, which is why 0 is appended. If you don’t use the custom UV, you take the IF output as the alpha of the lerp.
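A rough sketch of what that isolation amounts to, assuming the DCC UVs were laid out so the two shells occupy different halves of UV space (the 0.5 threshold and channel choice are my assumptions, not the exact mapping used here):

```cpp
struct Color { float r, g, b; };

// Acts like the IF node: returns 1 for the outer shell, 0 for the inner shell.
float ShellMask(float uvX)
{
    return uvX > 0.5f ? 1.0f : 0.0f;
}

// Acts like the lerp node: blends the inner and outer shell results by the mask.
Color LerpColor(Color inner, Color outer, float alpha)
{
    return { inner.r + (outer.r - inner.r) * alpha,
             inner.g + (outer.g - inner.g) * alpha,
             inner.b + (outer.b - inner.b) * alpha };
}
```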

Here’s the WPO output to inflate/deflate the outside mesh.

And ofc the texturing, which is likely to be different from inside to outside:

After that, to get the rim erosion effect going, you just apply a texture to it as the opacity. The UV for it is already isolated.

This will give you something that operates like this:

(never you mind the crappy textures used. it’s just what sits in my test project.)

From then on, you can actually refine it.
Say the bits and pieces showing up on top of the mesh are an issue for you (those are normals that are essentially facing you but within the “BOUNDS” of the object)…


A little bit of SphereMask magic and they can just be gone.
(This generates a sphere, with a radius the size of the bounds, on the edge of the mesh that’s facing the camera.
The result is that the inside bit is eaten out. It may not work right on character skins because they aren’t a sphere-like primitive.)

On this, if you implement vector scaling along with it to inflate the shell, then the bounds change. You have to manipulate either the radius or the position of it, or both. A quick edit/change to utilize the same scalar:
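For what it’s worth, here is a rough approximation of what a SphereMask-style falloff does (this is not the engine’s exact formula; the hardness handling in particular is simplified):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float Length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Returns ~1 near the sphere centre and falls off to 0 around the radius;
// higher hardness makes the transition band narrower.
float SphereMaskApprox(Vec3 position, Vec3 centre, float radius, float hardness)
{
    Vec3 d{ position.x - centre.x, position.y - centre.y, position.z - centre.z };
    float normalized = Length(d) / radius;              // 0 at centre, 1 at radius
    float softness = std::max(1.0f - hardness, 1e-4f);  // avoid divide-by-zero
    return std::clamp((1.0f - normalized) / softness, 0.0f, 1.0f);
}
```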

Try it, see how this works.
Anything on top of it should probably be done via Post Process.

Also a side note:
I found that using a custom UV for isolating the inside/outside value mixes up the vector displacement. I recommend plugging it in directly off the IF statement.


I’m really not sure why passing it through the GFX alters a direct 0 to 1 input, but again, the benefit is so small it doesn’t matter…

If it lets me… the blend file:
2shelltest.blend (2.2 MB)

Thank you so much. I’ll try applying your technique. Hopefully soon, schedule’s been a bit rough right now.

Thank you so much, and sorry for the late reply (been busy) and for not getting this right away. I just have one question though: for the opacity, I don’t think I isolated the UVs correctly, because it looks like this - or am I not using the correct shading model? (Sorry for the crappy textures, I’m just testing.)

image

I’ll keep in mind the limitations with complex geometry. It works for the post-processing outline though.

I made another post for the rim erosion/outline like you suggested, and I’ll link your answer from here to there as well, since you found the solution but we talked it through in this thread.