Am I understanding this correctly?

I’ve been doing tutorials to get used to using Unreal again. The first major thing I made was a door with a switch. You walk to the door and it doesn’t open, so you step on the switch, walk up to it again, and it opens. Pretty simple. I understood that pretty well, so I decided to make a bridge in the same way.

It took some time, but I figured it out. What made it take so long was figuring out the movement of the bridge. In order to make the door go down, the value has to be positive on the Z axis, but the bridge has to be negative on the Z axis to go up. Yet if I’m moving something up on the Z axis, the values are positive. Why is this? I know Unreal is a Z-up engine; does that have anything to do with it?


This is part of the blueprint for the bridge.

It depends on where the pivot point is in the mesh, and what its starting position is.

Plus, you’re changing relative location, not world location. You can rotate the bridge however you want in the world, but the cube will move relative to the BP’s root component, and that relative axis might not be vertical in the world.

The other answers are good! The transform hierarchy can rotate your frame of reference. It’s good to dive into that part of the system and understand it, because it affects many things.

Separately: I would recommend adding a timeline for this animation, and moving the object by doing “play forward” and “play backward” on that timeline. This gives you a little more in-editor control of where things show up, rather than some hard-coded math. There’s nothing wrong with math, per se, but the more editor-driven workflow tends to be better once you’re working with artists and are deep into “make this level work well as a cohesive whole” territory.

I have two timelines for the bridge currently: one for when it goes up and the other for when it goes down. Are you suggesting I put them into one instead?

I was thinking you’d use a location-based timeline (a real animation), or perhaps markers for “down” vs “up” and interpolate between them using a float.
Whether you use one or two timelines is less important. You have more flexibility with two.
It’s more the “where in the world is this” part that is hard to control when you’re just jamming a Z value in there. Ideally, you use a full transform rather than just a location, so you can reuse the same setup for a drawbridge that swings open.

Oh ok.

I’m not sure if I know how to do that just yet (the full transform), but I think I understand the logic of it.

You can make a position Vector parameter in the blueprint and check the “use 3D widget” checkbox on it, and it will be editable in the editor by dragging it around. However, that only gives you a position, not an orientation:


You can ALSO do this for a full Transform variable:

Note that they will be actor-relative positions in the editor:

An alternative is to put two copies of the same static mesh component into the blueprint, so you can see what they look like in “open” and “closed” position. Then, in Event Begin Play, hide one of them, and just use its position/orientation as the transform to blend to in the animation. Or set the “hidden in game” flag on it, to make that happen automatically.