In the GIF it is evident that the normals are not handled properly in camera space,
because the normal transformation does not take the distortion of the camera FOV into account.
How can I calculate and transform the normals into camera space, taking the FOV into account?
You could apply an additional transform to normals that are already in camera space, using screen-aligned UVs. But for the effect you are after, there is a better solution.
I still cannot get adjusted coordinates in camera space that account for the differing per-pixel view directions.
Could you describe the whole transformation? Thanks.
Transform the pixel's normal into camera space, given the FOV, and create camera-aligned UV coordinates for the object
(matcap UVs, like in ShaderFX in 3ds Max or Maya).
While rotating the camera, the "camera-aligned UVs" should not change; the UVs should change only when the camera is moving (translating).
This approximates the desired result, but it is still not quite right: http://i.imgur.com/vg6dyZu.mp4
In the clip you can see a slight rotation of the UVs when the camera rotates.
If you want a screen-aligned texture on your object, but one that stays stationary while the camera is rotating, why don't you just use screen-aligned UVs, with an offset added to compensate for the camera rotation?
Screen-aligned UVs are normalWS * camera direction vector, and that space is flat.
But what I need is to transform normalWS * (cameravectorWS * camera_direction_vectorWS), which is a curved space,
and then convert the RG channels into UVs.
Then, if we consider a sphere, the UV (0.5, 0.5) at the centre of the sphere should always face the camera, and the normals at the sphere's edge should always lie in the tangent space perpendicular to the camera's view.
A reference for the effect that has already been implemented: a video, screenshot, description, or something like that. What you just posted is, as said above, as trivial as taking a dot product between the view direction and the vertex normal.
There is a formula in the link I’ve posted.
Use this for custom node:
float3 R = C - 2*dot(N, C)*N;
float D = 2*sqrt(R.x*R.x + R.y*R.y + (R.z + 1)*(R.z + 1));
float U = R.x/D + 0.5;
float V = R.y/D + 0.5;
return float2(U, V);
The custom node should be set to a float2 output and have the view-space camera vector as input C and the view-space normal as input N.
As for the distortion of the texture at the sides of the viewport: well, your sphere isn't a sphere in the corner of the viewport anymore, so there will inevitably be some distortion, and it increases with larger FOVs. Typically this is ignored; just have proper padding in your matcap texture.
You could play around further and calculate the divisor separately for U and V, introducing a sort of compensation for the pixel's position on screen, but in the long run it is not worth the effort, no matter what kind of effect you are after.