Combining Rotations (RotateAboutAxis) for World Position Offset

I’m trying to do something like a camera-facing displacement, and I cannot use screen coordinates for this. The solution I came up with is to rotate the Absolute World Position so the model faces a fixed forward direction, apply the displacement, then rotate the model back. I’m using RotateAboutAxis, which works fine for one rotation, but when I try to combine rotations things go haywire (skew). Is there a way to combine rotations, or a better solution than what I’ve come up with?

Edit: So it seems using a Transform3x3Matrix works fine for multiple cardinal-axis rotations, but my first rotation axis is arbitrary. Any shortcuts here better than building out a series of matrices to do this?

Are you adding the position of the original RotateAboutAxis input to its output before chaining that into the second rotate? Remember that node only returns an offset, which is why it works in the World Position OFFSET input. But if you want to chain them, you need to re-add the original position.
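To illustrate what that means, here’s a minimal numpy sketch; `rotate_about_axis_offset` is a made-up stand-in for the node, assuming it returns rotated-minus-original the way RotateAboutAxis does:

```python
import numpy as np

def rotate_about_axis_offset(position, axis, angle, pivot=np.zeros(3)):
    """Stand-in for RotateAboutAxis: returns an OFFSET (rotated minus
    original position), computed with Rodrigues' rotation formula."""
    axis = axis / np.linalg.norm(axis)
    p = position - pivot
    rotated = (p * np.cos(angle)
               + np.cross(axis, p) * np.sin(angle)
               + axis * np.dot(axis, p) * (1.0 - np.cos(angle)))
    return (rotated + pivot) - position

pos = np.array([1.0, 0.0, 0.0])
off1 = rotate_about_axis_offset(pos, np.array([0.0, 0.0, 1.0]), np.pi / 2)

# off1 alone is not a position, so feeding it straight into a second rotate
# produces the skew. Re-add the original position before chaining:
pos_after_first = pos + off1
```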

btw, doing all that just to get a screen-space displacement seems a bit excessive, but I’m not exactly sure what you are trying to do.

Ahhhh thank you. I may not have done that.

To try to clarify, and to see if there is a better way:

I’m actually doing two things: a planar projection along the camera vector and a displacement along the camera vector. What I am trying to avoid is the object translating through the projection; rotation and scale are fine. Imagine a person standing in front of a running film projector. I want the projector attached to the person (my object) but always aligned to a third-party viewer (my camera), if that makes sense. My math is very rusty, and rotating the object’s UVs or pixels, building the projection, and rotating back seemed comprehensible to me. I played with screen coordinates and couldn’t get what I was after. What I am doing now sort of works, but I’m having issues with axis flipping as the camera moves around the object, which I am hoping I can fix by flipping the rotation angle when the axis is crossed.

Your suggestion works great, thanks a ton. One question though.

I do a dot product of my rotation axis and the normalized cross product of my forward and aim vectors to determine whether to flip the rotation. This is being fed into an If. It works great except for a slight snap as the rotation switches direction. Not the end of the world, but I would like to fix it if I can. Any ideas?
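For reference, the flip test roughly amounts to this numpy sketch (all vector values are hypothetical); the `atan2` form at the end is a standard alternative that stays continuous, which may be relevant to the snap:

```python
import numpy as np

forward = np.array([1.0, 0.0, 0.0])            # hypothetical forward vector
aim = np.array([0.5, 0.8, 0.0])
aim /= np.linalg.norm(aim)                     # hypothetical aim vector
rotation_axis = np.array([0.0, 0.0, 1.0])      # axis being rotated about

# The If-node test: the sign of axis . cross(forward, aim) decides whether
# to negate the angle. The sign change is discrete, hence the snap.
side = np.dot(rotation_axis, np.cross(forward, aim))
angle = np.arccos(np.clip(np.dot(forward, aim), -1.0, 1.0))
angle = angle if side >= 0.0 else -angle

# A standard continuous alternative: a signed angle straight from atan2.
smooth = np.arctan2(np.dot(np.cross(forward, aim), rotation_axis),
                    np.dot(forward, aim))
```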

Reverse engineering a transform using rotations may work, but it is just going to be way more complicated and probably 100x more expensive on the GPU. The correct operation for what you need is a transform. A transform is only a few multiplies and adds on the GPU, whereas rotations involve many sine, cosine, square root, and other very slow calculations. Transforms can also go from any coordinate space to any other, so there should be no limitation on rotation type like you mentioned, but perhaps I am not understanding the issue you saw.

You said screen space didn’t work, but why?

You can transform into any custom basis using the material function “Invert Transform3x3Matrix”. You need to supply the 3 orthogonal vectors for the new space.
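I can’t vouch for the node’s internals, but a change of basis like this is plain linear algebra; a minimal numpy sketch with hypothetical basis vectors:

```python
import numpy as np

# Three orthonormal basis vectors for the custom space (hypothetical values):
right      = np.array([1.0, 0.0, 0.0])
projection = np.array([0.0, 1.0, 0.0])
up         = np.array([0.0, 0.0, 1.0])

# With the basis vectors as rows, multiplying expresses a world-space vector
# in the custom space: just three dot products, no trig.
to_custom = np.stack([right, projection, up])
v_world = np.array([1.0, 2.0, 3.0])
v_custom = to_custom @ v_world

# Because the basis is orthonormal, the inverse transform is the transpose.
assert np.allclose(to_custom.T @ v_custom, v_world)
```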

If you want to add some sort of FOV or width to the projection, you can divide the two other coordinates by 1 + the distance along the axis of the projection. I.e., if you are projecting along the camera X axis, then you need to divide YZ by the X after it has been scaled.
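Something like this sketch (the scale factor is an arbitrary example):

```python
import numpy as np

def perspective_uv(p, fov_scale=0.001):
    """p is in a camera-aligned space whose X is the projection axis.
    Dividing YZ by 1 + scaled X adds an FOV-like spread (hypothetical helper)."""
    x, y, z = p
    w = 1.0 + x * fov_scale      # scaled distance along the projection axis
    return np.array([y / w, z / w])

uv = perspective_uv(np.array([500.0, 30.0, 40.0]))
```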

I appreciate all the time you’ve spent on this. What I am doing, I understand, is atypical and may not make much sense without complete context. I am trying to do some artistic rendering: I have a few materials that are based on the styles of particular artists, and doing a projection like this is critical to getting the look right and having them be usable in a 3D environment.

Hopefully this will clear things up. Imagine you do a Screen Space Aligned UV projection on a sphere. You get a perfectly flat projection on a 3D object. However, the UVs are bound to the screen (camera). If you dolly the camera, the texture seems to scale on the object. If you truck, tilt, pan, whatever the camera, the texture appears to slide across the surface.

If you take that same sphere and do a planar UV projection, it will not slide or scale on the object. If you are looking straight down the projection vector, what you see is a reasonably flat projection with a little stretching on polygons that have a higher incidence angle with the projection vector. Doing a planar projection along the camera vector gives a reasonably similar result to a screen-aligned UV projection, for my purposes, and eliminates the sliding and scaling texture issues of screen-space projection.
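For concreteness, a planar projection along an axis is just two dot products against the two vectors perpendicular to it; a tiny sketch (vectors and scale are made up):

```python
import numpy as np

def planar_uv(p, right, up, scale=0.01):
    """Drop the coordinate along the projection axis and keep the two
    perpendicular coordinates as UVs (hypothetical helper)."""
    return np.array([np.dot(p, right), np.dot(p, up)]) * scale

# e.g. projecting along world Y uses X as "right" and Z as "up":
uv = planar_uv(np.array([10.0, 99.0, 20.0]),
               right=np.array([1.0, 0.0, 0.0]),
               up=np.array([0.0, 0.0, 1.0]))
```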

So the result I am looking to achieve is a planar UV projection that always faces the camera. The texture I am projecting is a fine pattern, and in the experimenting I’ve been doing over the last few days, it seems to look much better if I can “lock” it to the surface and eliminate the sliding that happens with a screen-space projection.

Here’s where the ceiling of my math knowledge and understanding of the tools comes in. I get the concept of the various spaces, and I get transform matrices insofar as what identity is and how they produce a transform on an object. Going from 2D screen space to 3D world, or from local to 2D UV space, I haven’t been able to wrap my head around, other than rotating the UVs to a cardinal axis in the material so I can eliminate one axis and produce a UV coordinate. InvertTransform3x3Matrix is not documented, and though I guess it should be self-explanatory, I don’t really know what it does. I did mess around with it a couple of days ago trying to figure it out, but I don’t know what to use for a W coordinate for the screen-space UVs, what the basis vectors for my space are supposed to look like (I assume my space isn’t just local; it has to be relative to the camera in some way), or how to go from whatever the result is to a 2D coordinate I can plug into a texture.

Going from 2D screen space to 3D space is not as hard as you might think. The math involved is called a clip space transformation. Basically, in the 2D space you still have scene depth, and 3D positions can be recreated using it. So there are still 3 axes for screen UVs; the 3rd axis just looks straight down the camera and is only meaningful for depth.
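To make the “depth is the third axis” point concrete, here’s a numpy sketch with an OpenGL-style projection matrix; the conventions (handedness, depth range) are illustrative and differ from the engine’s:

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix (illustrative)."""
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

proj = perspective(np.radians(90.0), 16.0 / 9.0, 10.0, 10000.0)

# Forward: view-space point -> clip -> NDC (2D screen position plus depth).
p_view = np.array([100.0, 50.0, -500.0, 1.0])
clip = proj @ p_view
ndc = clip[:3] / clip[3]

# Backward: screen position plus depth is enough to recreate the 3D point,
# because the perspective divide can be undone with the inverse matrix.
back = np.linalg.inv(proj) @ np.append(ndc, 1.0)
assert np.allclose(back[:3] / back[3], p_view[:3])
```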

You can get the camera vectors by doing things like transforming 0,0,1 from View to World. You can do that for all 3 vectors. I forget if the forward vector is X or Z; it’s something I use all the time yet still have to double-check every time I do it. Either way, the side vector is definitely 0,1,0.
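In matrix terms, transforming a unit axis just reads out one column of the view-to-world rotation; a toy sketch with a hypothetical camera yawed 90 degrees (which axis is forward depends on the engine’s convention, as noted above):

```python
import numpy as np

c, s = 0.0, 1.0  # cos/sin of 90 degrees: camera yawed about world Z
view_to_world = np.array([[c,  -s,  0.0],
                          [s,   c,  0.0],
                          [0.0, 0.0, 1.0]])

# "0,0,1 transform View->World" is matrix-times-unit-vector, i.e. column 2:
forward_or_up = view_to_world @ np.array([0.0, 0.0, 1.0])
side = view_to_world @ np.array([0.0, 1.0, 0.0])   # column 1, the side vector
```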

That said, I think this could be done very simply in a number of different ways.

First up, have you tried simply taking the Local Position of the object and transforming it into View Space? That should make the vectors always face the camera, yet the positions will remain fixed on the mesh. Try the material function “BoundingBoxBased_0-1_UVW”. I have also made very similar material functions that do things like this for baking out depth textures; try looking at the function “BoundingSphereLightTransform”. It may not be exposed to the library (so search under Engine content), and it is probably not exactly what you need since it only specifies a single vector, not all 3, but it should get you pretty close, or at least demonstrate the type of nodes you need.
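As a toy illustration of “positions stay fixed on the mesh while the axes follow the camera” (the helper and its up-vector choice are assumptions, not the engine’s actual function):

```python
import numpy as np

def look_at_rotation(camera_forward):
    """Build an orthonormal world->view rotation from a camera forward vector,
    assuming world Z as up (hypothetical helper; degenerate if forward == Z)."""
    f = camera_forward / np.linalg.norm(camera_forward)
    r = np.cross(np.array([0.0, 0.0, 1.0]), f)
    r /= np.linalg.norm(r)
    u = np.cross(f, r)
    return np.stack([r, u, f])   # rows: right, up, forward

world_to_view = look_at_rotation(np.array([1.0, 1.0, 0.0]))
local_pos = np.array([10.0, 20.0, 30.0])   # vertex in object space; assume the
view_pos = world_to_view @ local_pos       # actor sits unrotated at the origin
```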

Holy **** this could be easy. Your explanation really cleared things up.

I had been reading all these posts with crazy things people were doing to create camera-facing textures and whatnot, and it sent me down the hard path. I think part of the problem is I was under the impression that view space and screen space were synonymous.

My solution looks to be: Absolute World Position node transformed into some space. View almost works; I will need to rotate the view-space coordinates to align with the camera direction vector, not the view vector. Anyhow, feed that into my network that builds the projection, and it does what I need.

Thanks a ton :slight_smile:

I have solved this, and I figured I’d post the solution for anyone dumb enough to want to do this who might stumble upon this thread and come to the wrong conclusion based on the discussion.

Restating the Problem.

Create a planar UV projection in object space that is always aligned to the camera. No matter how the object is oriented or where it is in the world, the planar projection generates a set of UVs that are attached to the object and face the camera. With the result, a projected texture will look similar to a screen UV projection; however, the scaling and sliding that happen with screen-space projections will be minimized.

Solution

My initial instinct, I think, is the only way to achieve this: the UVs or pixels need to be rotated and the projection applied. No common space transform can solve this. The projection has to happen in the object’s space, and the vector the projection aligns to is the vector between the object position and the camera position. View space comes close; however, the camera view vector and the desired projection vector become divergent when the camera is rotated away from the object, and the projection will appear to rotate away from the viewpoint. No matter what space you attempt to do this in, you will be three rotations away from lining up the projection coordinates correctly, so it’s easiest and cheapest to not use a transform node and just work in world coordinates. The planar projection must be defined explicitly; using a bounding-box function to define the extents of the projection scales the projection with the object’s animation.

The solution is to rotate the object position->camera position vector to a cardinal axis and apply the projection. It’s important to note you are not actually rotating the object: you can do all the rotations you want, and as long as you don’t feed them into World Position Offset, the model will not physically change shape or orientation.

You will need to define 6 vectors and do three rotations. You are trying to rotate about an arbitrary axis. I found it easiest to use Transform3x3Matrix functions and not have the additional expense and headache of dealing with the offsets generated by RotateAboutAxis.

Some useful links.

[This explains rotations about an arbitrary axis](https://www.siggraph.org/education/materials/HyperGraph/modeling/mod_tran/3drota.htm#Rotation%20about%20an%20Arbitrary%20Axis)
This has a clearly explained solution for calculating the rotations and dealing with axis flipping.

I found it easiest to rotate the object->camera vector to align with World Y and apply the projection, so that is what this explanation will do.

Steps

  1. Define your vectors: three unit vectors representing the world axes, X (1,0,0), Y (0,1,0), and Z (0,0,1), and your Projection vector (object position - camera position). Using the Projection vector and World X, create orthogonal vectors for your projection space with the Create Third Orthogonal Vector node. This will produce normalized orthogonal vectors representing your projection axes. I refer to them as Projection, Up, and Right.

  2. Subtract the actor position from the Absolute World Position at the outset. I’ll call this UV Position.

  3. Create a seventh vector you will use to start aligning your Projection vector with World Y. You can refer to Pythagoras to generate this vector, or cheat. The vector should be the projection of your Projection vector onto the XY plane. To cheat, just take the Projection vector, break out its components, and make a new Float3, substituting 0 for its Z value. I will refer to this as XY Projection.

  4. First rotation: rotate UV Position about World Z using the angle between XY Projection and World Y. This and all subsequent rotations use (0,0,0) as their base position, or pivot.

  5. Apply the same rotation to the Projection and Up vectors. You can additionally rotate the Right vector if needed, but it’s not required for this example.

  6. Using the angle between the Projection vector generated in step 5 and World Y, rotate UV Position about the World X axis.

  7. Apply the same rotation to the Up vector generated in step 5. Again, you can rotate Projection and Right from step 5 as well to ensure all your vectors are where they should be, but it is not required for this example.

  8. Using the angle between World Z and the Up vector generated in step 7, rotate UV Position about the World Y axis (the axis the Projection vector is now aligned to, so this final roll leaves the projection direction in place).

At this point you can build a planar projection in the XZ plane. Use the X and Z coordinates for your U and V; the sketch below runs through the same steps in code.
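As an illustration only, here is a numpy sketch of steps 1-8. Two assumptions: the “third orthogonal” vector is taken as cross(Projection, X), and the signed angles come from atan2 rather than acos plus a flip test, which also sidesteps the axis-flipping snap discussed earlier:

```python
import numpy as np

X, Y, Z = np.eye(3)

def rot(axis, theta):
    """Rotation matrix about a unit axis (Rodrigues form)."""
    k = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * k + (1.0 - np.cos(theta)) * (k @ k)

def signed_angle(a, b, about):
    """Signed angle from a to b around 'about', continuous thanks to atan2."""
    return np.arctan2(np.dot(np.cross(a, b), about), np.dot(a, b))

def projection_rotation(actor_pos, camera_pos):
    """Combined rotation taking the object->camera line onto world Y and the
    projection-space Up vector onto world Z (steps 1 and 3-8). Degenerate
    alignments (camera dead along world X or Z) are not handled."""
    projection = actor_pos - camera_pos
    projection /= np.linalg.norm(projection)             # step 1
    up = np.cross(projection, X)                         # "third orthogonal"
    up /= np.linalg.norm(up)

    xy_proj = projection * np.array([1.0, 1.0, 0.0])     # step 3: zero out Z
    r1 = rot(Z, signed_angle(xy_proj, Y, Z))             # step 4
    projection, up = r1 @ projection, r1 @ up            # step 5
    r2 = rot(X, signed_angle(projection, Y, X))          # step 6
    up = r2 @ up                                         # step 7
    r3 = rot(Y, signed_angle(up, Z, Y))                  # step 8: roll about Y
    return r3 @ r2 @ r1

def camera_facing_planar_uv(world_pos, actor_pos, camera_pos, scale=0.01):
    uv_pos = world_pos - actor_pos                       # step 2: UV Position
    p = projection_rotation(actor_pos, camera_pos) @ uv_pos
    return np.array([p[0], p[2]]) * scale                # XZ plane -> U, V

uv = camera_facing_planar_uv(world_pos=np.array([120.0, 40.0, 60.0]),
                             actor_pos=np.array([100.0, 50.0, 50.0]),
                             camera_pos=np.array([0.0, -300.0, 80.0]))
```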

[Image: PlanarProjection.jpg]

The result here is a sphere with a planar projection of a pattern. No matter how the object is transformed in world space, or where the camera is positioned or aimed, the result looks like the image. The texture will appear to slide when the camera orbits the object, but it is quite stable compared to using screen UVs.

This method is not cheap. The vertex program comes in at around 190 instructions and includes three acos calls just to rotate the UVs and build the projection.

You can also use this method to apply a displacement: do the rotations above, displace on Y, reverse the rotations, and plug the result into World Position Offset. This comes in at around 225 instructions. You will notice in the image that the texture does not appear perfectly flat like a screen projection. By rotating the UVs, scaling on Y, rotating the UVs back, and then feeding the result to World Position Offset, you can flatten the object along its Projection vector and get a perfectly flat appearance. Flattening the model works well but is not suitable for environments with a free perspective camera.
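A sketch of that displacement/flattening variant, reusing the hypothetical helpers from the previous block (flatten=1 leaves the shape alone, flatten=0 fully flattens along the projection axis):

```python
import numpy as np
# Assumes projection_rotation() from the sketch above is in scope.

def camera_facing_wpo(world_pos, actor_pos, camera_pos,
                      displace=0.0, flatten=1.0):
    """Rotate into the projection frame, displace and/or scale along Y,
    rotate back, and return the World Position Offset delta."""
    r = projection_rotation(actor_pos, camera_pos)
    p = r @ (world_pos - actor_pos)
    p[1] = p[1] * flatten + displace   # Y is the projection axis in this frame
    # Reversing the rotations is just the transpose of the combined matrix.
    return (r.T @ p + actor_pos) - world_pos
```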

This is how you combine two RotateAboutAxis nodes into a single WPO:


Thank you a lot. Saved me!

I’m trying to do this now too and seem to be getting the correct rotations, but I’m having trouble with the geometry getting sheared after the second RotateAboutAxis is applied:

Anyone else have this issue? Am I doing something silly?

Thanks!

Yes. You’re missing the final step of combining the two rotations as shown in MarkJG’s picture.

The correct combination of the two rotations is the following:
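The screenshot isn’t reproduced here, but based on the re-add-the-position explanation earlier in the thread, the combination amounts to this sketch (reusing the hypothetical rotate_about_axis_offset helper from the first code block above):

```python
import numpy as np
# Assumes rotate_about_axis_offset() from the sketch near the top is in scope.

pos = np.array([1.0, 0.0, 0.0])                 # absolute world position
axis1 = np.array([0.0, 0.0, 1.0])
axis2 = np.array([1.0, 0.0, 0.0])

off1 = rotate_about_axis_offset(pos, axis1, np.pi / 2)
# The second rotation operates on position + first offset, not the raw offset:
off2 = rotate_about_axis_offset(pos + off1, axis2, np.pi / 2)
# The single World Position Offset is the sum of the two offsets; adding it
# to pos gives exactly the doubly rotated position, with no shearing.
wpo = off1 + off2
```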


Very useful! Do you know how to do the same with the normal nodes? I assumed it would be the same, but it’s not.