[Feature Request] Convert World/Screen space coords by CameraComponent

Hi, guys!

The title says it. I had a problem: I placed a simple CameraActor in the world to view my scene (it will be a static viewpoint), but requesting Convert World Location To Screen Location gave me absurd values. I found out that Unreal was not using my active camera. Instead, it was using the default Pawn's camera (the Pawn that is created automatically if we delete the PlayerStart and set Default Pawn Class to None on the GameMode).

I believe it would be much more versatile to have this function on the camera instead of the PlayerController; after all, it depends on the CameraComponent's properties. Then we could request an actor's screen location from any point of view for specific effects. It works this way in Unity3D, and I think it makes perfect sense, since you want to know the screen position of an actor from the POV you're currently using.
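For illustration, here is a minimal, self-contained sketch (plain C++, not Unreal API) of the math such a camera-level world-to-screen function would perform: a perspective projection driven by the camera's FOV and the viewport size. The axis convention (X forward, Y right, Z up) matches Unreal's; everything else (the function name, the camera fixed at the origin) is an assumption made for the example.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Project a camera-space point to pixel coordinates for a camera at the
// origin looking down +X, given a horizontal FOV in degrees and the
// viewport size. Returns false if the point is at or behind the camera.
bool WorldToScreen(const Vec3& p, double fovDeg, int width, int height,
                   double& sx, double& sy) {
    if (p.x <= 0.0) return false;  // behind (or on) the camera plane
    const double PI = 3.14159265358979323846;
    double tanHalf = std::tan(fovDeg * 0.5 * PI / 180.0);
    // Normalized device coordinates in [-1, 1]: world Y maps to screen X,
    // world Z to screen Y; the vertical extent is scaled by the aspect ratio.
    double ndcX = (p.y / p.x) / tanHalf;
    double ndcY = (p.z / p.x) / (tanHalf * height / (double)width);
    sx = (0.5 + 0.5 * ndcX) * width;
    sy = (0.5 - 0.5 * ndcY) * height;  // screen Y grows downward
    return true;
}
```

A real camera-level version would first transform the world point into camera space using the CameraComponent's transform; the point here is that everything the projection needs lives on the camera, not on the PlayerController.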

I solved my problem by creating a Pawn with only a Camera and setting it as the Default Pawn Class. But what if I wanted to make a game that kept changing POVs and needed to know an actor's screen location from each of them? Would I have to keep switching between Pawns? That sounds awful.

It doesn’t use the Pawn; it uses whatever camera your current PlayerController is looking through. It gets the camera from the PC’s PlayerCameraManager class (which is why the Deproject function is part of PlayerController).

Well, that’s how it should behave anyway IIRC.

Thanks for your reply! OK… if you say it works, it should work. :slight_smile: So, I did some more trials.

I redid my previous setup: deleted the Pawn and PlayerStart from the level, created a CameraActor, set Default Pawn Class to None, and used SetViewTargetWithBlend on the PlayerController, with Blend Time set to 0, to activate my CameraActor at BeginPlay.
Then I kept printing the actor’s screen location (ASL) to the screen, using the PC’s Convert World Location To Screen Location on the actor’s GetActorLocation.

• The first and second Ticks’ ASL results differ. At the first Tick, the result seems just a bit off. From the second Tick on, it is constant, with values that seem correct.
• Without SetViewTargetWithBlend, therefore using the default POV created by the engine at 0,0,0 and looking in the same direction I was facing before hitting Play, the ASL values were consistent from the first Tick, and also seemed correct.
• Calling SetViewTargetWithBlend on every Tick and requesting the ASL right after, the first and second Ticks’ results still differ, but by a lot.

My thoughts:
When using SetViewTargetWithBlend at BeginPlay, even with Blend Time set to zero, at the first Tick the POV is still blending from the startup camera to the one we set, though it is close to the final result. When I read the ASL right after SetViewTargetWithBlend (in the same Tick), the first result is far from correct, probably because the blend has only just started.
With Blend Time set to 0, shouldn’t it just ‘pop’ to the target POV, being completely blended by the very next node execution?
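One way to picture the one-Tick delay: if the camera manager only applies a pending view target during its own end-of-frame update, then any projection requested in the same Tick as the SetViewTarget call still sees the old camera, even with zero blend time. This toy model (a hypothetical ToyCameraManager, not Unreal's actual code) sketches that ordering:

```cpp
#include <cassert>

// Toy model of a deferred view-target change: the camera manager applies the
// pending target in its own update, which runs AFTER gameplay ticks. Any
// projection done in the same tick as SetViewTarget therefore reads the old
// camera, regardless of blend time.
struct ToyCameraManager {
    int activeCamera = 0;   // id of the camera currently used for projection
    int pendingCamera = 0;  // id requested via SetViewTarget this tick

    void SetViewTarget(int cameraId) { pendingCamera = cameraId; }

    // Runs at the end of the frame, after gameplay code has already
    // read the camera for this tick.
    void UpdateCamera() { activeCamera = pendingCamera; }

    int CameraUsedForProjection() const { return activeCamera; }
};
```

Under this model, `SetViewTarget(1)` followed immediately by `CameraUsedForProjection()` still returns the old camera; only after `UpdateCamera()` runs (i.e. from the next Tick) does the projection use the new one, which would match the one-Tick-late behavior described above.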

I saw that the function works and you were correct. But doesn’t this one-Tick delay look odd?