# How does Unreal Engine figure out rotation, velocity, and transformation through its coordinate system?

Hi Community, I am trying to figure out how Unreal Engine calculates "rotation", "velocity", and "transformation" through its coordinate system, but I don't have any idea about the calculation logic.
I want to understand this fundamentally, so that I can use these features without any doubt in my mind.

If you meant the point that your object treats as its own origin (its pivot), that depends on the mesh object or the settings inside it, and it can be changed. What I mean is that your object can have its pivot at its center or at any other edge, for example.

Don't be scared, but this is how all 3D engines calculate rotations:
https://www.youtube.com/watch?v=zjMuIxRvygQ

This approach solves the pole problem, i.e. gimbal lock, and it is easier to code once you know how to build transformation matrices.

However, if you want to know how to use Blueprint rotation and vector math, well, that depends on what you want to do.

I think it will be easier to understand if you imagine that nothing practically exists. Everything is data, as if it were written on paper, unusable on its own. Then you have a camera; this camera has a mathematical formula describing its field of view, depending on where it is placed in the coordinate system and where it is pointing. The engine reads the coordinates of the objects in the level, and everything inside the camera's sight (and closer objects too) gets read in more detail - that is, their polygons start to be calculated.

All objects are made from triangle polygons: three coordinates forming a plane in between, and this plane is what you see on the screen. So if your camera is located at XYZ 0, 0, 0 and is directed toward +X, you will see everything in +X within a sort of pyramid spanning +Z, -Z, +Y and -Y, and eventually it stops somewhere along X too, which is the "field distance". If an object is located at close range inside the camera view, for example X100, Y2, Z-5, it will be displayed at the corresponding place on your screen; again, everything depends on the camera position.

Rotation depends on frames. If your object at X100, Y2, Z-5 has only one polygon (THREE POINTS), for example from X100, Y3, Z-6 to X100, Y6, Z-6 to X100, Y2, Z-4, you will see one triangle this frame; but next frame your object rotates and its polygon is now located at X100, Y3, Z-6 to X100, Y6, Z-6 to X101, Y2, Z-5 (the top corner leans back a little bit), so you will see a different triangle. Frame by frame, you see a movement. How the movement is calculated is just a mathematical solution - points rotating around a point. If you need to calculate such a thing, I advise you to use an online triangle calculator; it's nothing complicated.


https://www.youtube.com/watch?v=3BR8tK-LuB0

Yes, YouTube sometimes gives good suggestions.

It's just your standard hierarchical transform system that practically every 3D tool (games, CAD, simulators, etc.) has been using since the 1960s.

The specific conventions used by Unreal:

• If interpreting the coordinates in the real world: X forward, Y right, Z up, left-handed.
• Because it uses the left-handed interpretation, rotations go clockwise around the axis of rotation (like a compass, or a clock.)
• Direct3D style row-vector-on-left math convention (as opposed to Math/GL-style column-vector-on-right) with row major element ordering (translation lives in elements 12, 13 and 14.)
• Simulation-style `XYZW` quaternion component order (as opposed to some math libraries that use `WXYZ`)

Because it uses the row-vector-on-left convention, the order of transforms is left-to-right, so when you have `local vertex * Component Transform * Actor Transform * Inverse Camera Transform`, the transformations really happen in that order, as opposed to column-vector systems where you write `Inverse Camera Transform * Actor Transform * Component Transform * local vertex`.
Except, because computer graphics and 3D math have never settled on a standard, you can't write it out using the `*` operator; you have to write it out as `FMatrix.TransformPosition(FVector)`.

And that’s pretty much it. Very straightforward.

This is the best example as to the root of coordinate systems with in 3d space.

https://www.youtube.com/watch?v=v9j0IpoXXkk&t=134s

A bit silly, but it's typical of most if not all 3D applications that there is no universally accepted, established coordinate system referenced by X, Y, Z. In most cases the idea of direction is based on the requirements of a given project, as determined by an individual or project leader, by first deciding what the point of reference will be. In a lot of cases the application of choice also helps you out by establishing a "default" system that works with the tools it provides, relative to its established world space, using a world origin and units as the basis for functions such as rotation, velocity and transformation.

Units, for example, are not a true measurement but a means to establish a form of measurement that holds across applications, where once again the end user can decide what the value represents. In UE4, for example, world space uses 1 unit = 1 cm, so accepting the world origin as 0,0,0 and 1 unit = 1 cm, more complex math can be worked out without having to resort to what is being provided. Kind of reminds me of the math mumbo jumbo used in the movie "Cube" :D.

However

UE4 helps you out in some areas of common use by providing tools that already have a practical coordinate system built in. The most typical is the character blueprint, which by default provides a forward vector along with a way to determine things like velocity, transformation and rotation. Just keep in mind that just because it's called a character blueprint does not mean you can only use it with characters.

This of course only applies to the use of world space.