PIP (Picture-in-Picture) Actor and Following Mouse

I am attempting to program a PIP (Picture-in-Picture) “binocular” feature: when the user left-clicks the mouse, a PIP appears with a viewport that is 10% the size of the current game window (the monitor resolution in fullscreen, or the game viewport resolution in windowed mode). Subsequent left clicks cycle through zoom levels (Field of View).

Being extremely new to Unreal, I have looked around online and this functionality seems fairly straightforward. From what I can gather, I should create a class inheriting from ASceneCapture2D. In this class, I modify a material that takes a texture from the Content Browser, assign the texture target of the ASceneCapture2D object as the texture used in the material, and then, when the actor is spawned in the level, it will automatically start rendering the camera’s view of the world onto that texture.

Then, I should just be able to place the texture onto the screen and the user will be able to see this. There is additional functionality that I need to perform to have the camera actor and texture follow the mouse on the screen, to allow the user to zoom in on the screen.

So, I created some basic blueprint objects and I was able to get the texture to be rendered with the camera viewport and was able to see it dynamically update as I made modifications to rotation/translation/position of the ASceneCapture2D object.

With this, I started coding a c++ class to handle creation of a UMaterialInstanceDynamic so I can dynamically create a texture of the 10% size and modify the blueprint’s texture parameter to point at this newly created material based on the blueprint material.

But when I did this in C++, I found the parameter doesn’t exist to set, so SetTextureParameterValue() just creates a new parameter. The sequence: I load the UMaterial from disk, get a UMaterialInstanceConstant, call ::Create with the loaded UMaterialInstanceConstant, generate a new UTextureRenderTarget2D from UKismetRenderingLibrary::CreateRenderTarget2D(), and call SetTextureParameterValue() with the generated render target.

The material has a BinocularTextureSample parameter (Param2D) with a default texture loaded into it. I was hoping that, when the ABinocular object is spawned into the world and the C++ code creates all of these new objects, I could click the actor in UE and see the Texture Target containing a texture that looks like the camera viewport. But when I check, the variable in the editor shows “None”.

Another detail: I am unsure whether I can simply drag this ABinocular blueprint into the BP_SimFloatingPawn blueprint, attaching it to the root component, so that when BP_SimFloatingPawn is spawned into the game it automatically spawns the ABinocular object with the same facing as BP_SimFloatingPawn (since it is now a child of the root component).

I am at a loss on how to do this. I have been working on this issue for almost two weeks and have tried all kinds of things, ideas, theories, etc. to get it to work. I am going to post some pictures and code below to show what I am doing:

Binoculars.h

#pragma once

#include "CoreMinimal.h"
#include "Engine/SceneCapture2D.h"
#include "Tower3DGameInstance.h"
#include "Binoculars.generated.h"

class UObjectLibrary;
class UMaterialInstanceConstant;

/**
 * 
 */
UCLASS()
class TOWER3D_API ABinoculars : public ASceneCapture2D
{
	GENERATED_BODY()

	void LoadBinocularsLibrary();
	FVector2D GetNDisplayViewportSize() const;
	void GenerateNDisplayCamera();
	FVector2D GetNormalViewportSize() const;
	void GenerateNormalCamera();

	void OnViewportResized(FViewport* Viewport, uint32 Unused);
	void OnViewportToggleFullscreen(bool IsFullScreen);

	static const FString BLUEPRINT_PATH_BINOCULARS;
	static const FName BINOCULAR_TEXTURE_SAMPLE_NAME;

	static UObjectLibrary* m_pBinocularsMaterialLibrary;
	static UMaterialInstanceConstant* m_pBinocularMaterialInstance;

	// delegates
	FViewport::FOnViewportResized EventViewportResized;
	FOnToggleFullscreen EventViewportToggleFullscreen;

public:
	ABinoculars();

	UPROPERTY(BlueprintReadOnly)
	bool bUsingBinoculars;

	UPROPERTY(BlueprintReadOnly)
	int ZoomLevel;

	UPROPERTY(BlueprintReadOnly)
	TObjectPtr<UTextureRenderTarget2D> RenderTarget;

	UPROPERTY(BlueprintReadOnly)
	TObjectPtr<UMaterialInstanceDynamic> BinocularMaterial;
	
	UPROPERTY(BlueprintReadOnly)
	TObjectPtr<UTower3DGameInstance> GameInstance;

	virtual void BeginPlay() override;
};

Binoculars.cpp

#include "Binoculars.h"
#include "Engine.h"
#include "Kismet/KismetRenderingLibrary.h"
#include "Kismet/KismetMaterialLibrary.h"

const FString ABinoculars::BLUEPRINT_PATH_BINOCULARS = TEXT("/Game/Tower3D/Blueprints/Binoculars");
const FName ABinoculars::BINOCULAR_TEXTURE_SAMPLE_NAME = TEXT("BinocularTextureSample");
UObjectLibrary* ABinoculars::m_pBinocularsMaterialLibrary = nullptr;
UMaterialInstanceConstant* ABinoculars::m_pBinocularMaterialInstance = nullptr;

ABinoculars::ABinoculars()
    : bUsingBinoculars(false)
    , ZoomLevel(1)
{
}

void ABinoculars::LoadBinocularsLibrary()
{
    if(!m_pBinocularsMaterialLibrary)
    {
        // the base binocular material has not been loaded, attempt to load it
        bool bIsEditor = false;
        UWorld* pWorld = GetWorld();
        if(pWorld)
        {
            bIsEditor = pWorld->IsPlayInEditor();
        }

        m_pBinocularsMaterialLibrary = UObjectLibrary::CreateLibrary(UMaterialInstance::StaticClass(), true, bIsEditor);
        m_pBinocularsMaterialLibrary->AddToRoot();
        m_pBinocularsMaterialLibrary->LoadAssetDataFromPath(BLUEPRINT_PATH_BINOCULARS);
        if(!m_pBinocularsMaterialLibrary->IsLibraryFullyLoaded())
        {
            m_pBinocularsMaterialLibrary->LoadAssetsFromAssetData();
        }
        // assets loaded from a PIE (play-in-editor) or uproject will contain only the blueprints
        //  and their names will NOT contain "_C" at the end
        // assets loaded from a cooked/packaged build will contain blueprints and classes generated from blueprints
        //  and the generated classes will contain "_C" at the end
        TArray<FAssetData> assetDataBaseBinocularMaterial;
        m_pBinocularsMaterialLibrary->GetAssetDataList(assetDataBaseBinocularMaterial);
        UObject* pLoadedAsset = assetDataBaseBinocularMaterial[0].GetAsset();
        UBlueprint* pBlueprint = Cast<UBlueprint>(pLoadedAsset);
        UClass* pClass = nullptr;
        if(pBlueprint)
        {
            pClass = pBlueprint->GeneratedClass;
        }
        else
        {
            pClass = pLoadedAsset->GetClass();
        }
        // create the binocular material to be used in the render target texture
        if(pClass)
        {
            m_pBinocularMaterialInstance = Cast<UMaterialInstanceConstant>(pClass->GetDefaultObject());
        }
    }

    if(!BinocularMaterial)
    {
        // create the binocular material to be used in the render target texture
        BinocularMaterial = UMaterialInstanceDynamic::Create(m_pBinocularMaterialInstance, this);
    }
}

FVector2D ABinoculars::GetNDisplayViewportSize() const
{
    UGameUserSettings* pGameUserSettings = UGameUserSettings::GetGameUserSettings();
    return FVector2D(pGameUserSettings->GetScreenResolution().X, pGameUserSettings->GetScreenResolution().Y);
}

void ABinoculars::GenerateNDisplayCamera()
{
    FVector2D viewPort(GetNDisplayViewportSize());
    int nWidth = viewPort.X * 0.1;
    int nHeight = viewPort.Y * 0.1;
    RenderTarget = UKismetRenderingLibrary::CreateRenderTarget2D(GetWorld(), nWidth, nHeight);
    USceneCaptureComponent2D* pSceneComp = GetCaptureComponent2D();
    pSceneComp->TextureTarget = RenderTarget;
    FHashedMaterialParameterInfo paramInfo(BINOCULAR_TEXTURE_SAMPLE_NAME);
    UTexture* pTexture = nullptr;
    BinocularMaterial->GetTextureParameterValue(paramInfo, pTexture);
    BinocularMaterial->SetTextureParameterValue(BINOCULAR_TEXTURE_SAMPLE_NAME, pSceneComp->TextureTarget);
}

FVector2D ABinoculars::GetNormalViewportSize() const
{
    FVector2D vecViewport;
    GetWorld()->GetGameViewport()->GetViewportSize(vecViewport);
    return vecViewport;
}

void ABinoculars::GenerateNormalCamera()
{
    FVector2D viewPort(GetNormalViewportSize());
    int nWidth = viewPort.X * 0.1;
    int nHeight = viewPort.Y * 0.1;
    RenderTarget = UKismetRenderingLibrary::CreateRenderTarget2D(GetWorld(), nWidth, nHeight);
    USceneCaptureComponent2D* pSceneComp = GetCaptureComponent2D();
    pSceneComp->TextureTarget = RenderTarget;
    FHashedMaterialParameterInfo paramInfo(BINOCULAR_TEXTURE_SAMPLE_NAME);
    UTexture* pTexture = nullptr;
    BinocularMaterial->GetTextureParameterValue(paramInfo, pTexture);
    BinocularMaterial->SetTextureParameterValue(BINOCULAR_TEXTURE_SAMPLE_NAME, pSceneComp->TextureTarget);
}

void ABinoculars::OnViewportResized(FViewport* Viewport, uint32 Unused)
{
    if(GameInstance->bIsNDisplay)
    {
        GenerateNDisplayCamera();
    }
    else
    {
        GenerateNormalCamera();
    }
}

void ABinoculars::OnViewportToggleFullscreen(bool IsFullScreen)
{
    if(GameInstance->bIsNDisplay)
    {
        GenerateNDisplayCamera();
    }
    else
    {
        GenerateNormalCamera();
    }
}

void ABinoculars::BeginPlay()
{
    Super::BeginPlay();
    GameInstance = Cast<UTower3DGameInstance>(GetGameInstance());
    GEngine->GameViewport->OnToggleFullscreen().AddUObject(this, &ABinoculars::OnViewportToggleFullscreen);
    GEngine->GameViewport->Viewport->ViewportResizedEvent.AddUObject(this, &ABinoculars::OnViewportResized);
}

Any additional help would be greatly appreciated!


Hello ZyllosF, +1 for describing your case well and adding the code.
also +1 for the TObjectPtr; you might want to check the Transient meta tag for some of your members (like GameInstance, and maybe RenderTarget)

some comments:

  • you don’t necessarily need to subclass a SceneCapture; you can just add a scene capture component. (though subclassing seems the ideal way to me).

  • you don’t need to set the texture via code. you can just assign it to the material instance (that would be the most common workflow). just open that “BPBinocu…Inst” and set the correct render target. done.

  • you don’t need to create a render target via code. just create it in the browser: right click, create render target. then assign it in your scene capture and material. you do that statically, in the browser, not via code.

  • also take a look at ConstructorHelpers::FObjectFinder; it’s easier to use than the ObjectLibrary, though they differ a bit in capabilities.

nande, thanks for the quick reply. I do have some comments on the suggestions you gave:

The reason I need to set the texture via code is that the texture will be created at runtime, not at edit time. The render texture size will differ per run of the game (10% of the viewport resolution or monitor resolution), so I have to create a render target of that size and assign it at runtime.
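As a sanity check, the size calculation itself is trivial. Here is a minimal standalone sketch in plain C++ (no engine types; the 10% fraction matches the requirement, but the 64-pixel floor is just an illustrative safeguard I added):

```cpp
#include <algorithm>
#include <utility>

// Compute a PIP render-target size as a fraction of the viewport.
// The 64-pixel floor is an illustrative choice, not an engine requirement.
std::pair<int, int> ComputePipSize(int viewportW, int viewportH,
                                   double fraction = 0.1, int minSize = 64)
{
    int w = std::max(minSize, static_cast<int>(viewportW * fraction));
    int h = std::max(minSize, static_cast<int>(viewportH * fraction));
    return {w, h};
}
```

In-engine, the resulting width/height would feed UKismetRenderingLibrary::CreateRenderTarget2D as in the code above.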

After I posted this question yesterday, I created a Blueprint-only version with an ASceneCapture2D and a UTextureRenderTarget2D, assigning the render target directly to the texture target of the ASceneCapture2D. This worked straight out of the box without any material assignment. I was under the impression that a material is needed to render, but I might be wrong. If that is the case, could I have just created a UTextureRenderTarget2D, assigned it to USceneCaptureComponent2D::TextureTarget, and skipped the texture-parameter material assignment entirely?

I am going to test this in code for my ABinoculars class to see if this would work.

i’m struggling to understand what you’re saying. maybe if you show me a screenshot it will help.

yeah, that’s what i’ve suggested i think.

no you don’t need a material to render. nor do you need any material showing the rt in order for the rt to be rendered (that is, you can render without showing it and it should work).

well, it would render to the rt, yes. but how do you show it if you don’t have it on a material?
the material is there so that you can show it.
the question is how you set it on a material.

you need to set the rt in the material, this line is correct

but there’s a ton of other code that i think it’s extraneous and not needed. maybe it was a test.
like this.

also i never used the loading approach you use, and i’m afraid that might be causing issues.

also i think there’s an error in your code.
you need a material INSTANCE DYNAMIC to set the param; those are two words.
INSTANCE: you need the material instance, though it doesn’t matter much; you can load the material instance from disk, or you can call

then you need it to be DYNAMIC;
once again calling

what i’m saying is, you need to call that function. and right now there’s a flag.

remove that. always create the material instance dynamic, otherwise you can’t set the texture param (iirc).

that seems a bit of a waste.
my recommendation, even though hackish, is to have 1 fixed size for the RT, and get that working.
once that works, play with changing the scene capture resolution.
also, afaik, the scene capture will SET the resolution of the rt automagically (iirc, ymmv), so your code to set the resolution is also unnecessary and potentially creating issues.
if that doesn’t work, then i am 70% positive you can set the resolution of an RT at runtime without having to create a new one.
which means you can delete most of that code, and have the rt already assigned to the material and the scene capture in the editor.
the only thing you need to do is… well, nothing really. no need to create the material instance; just use the one on disk.

The blueprint one that I created, I have been playing around with it.

Yes, I created a Render Target asset and assigned it to the TextureTarget of the ASceneCapture2D. The render target texture then contains the ASceneCapture2D view. I noticed the resolution can be changed with the UTextureRenderTarget2D::ResizeTarget function; I was expecting it to change the size shown on the screen, but instead it just changed the resolution of the texture. So, like you mention, I can create a single fixed-size render target and call ::ResizeTarget to change its resolution if I need to.

After that, I worked on getting the render target to actually show up on the screen. I got that working via a UMG widget: I added an Image and set its brush to the render target, which works. At this point the UMG widget controls how big or small the image is displayed on screen via the Render Transform section. This means I need to somehow capture the specific instance of the UMG widget created at runtime, and then modify that widget so I can move/resize it based on whatever data or user action.

And yes, when I was using the Blueprint approach instead of the C++ code, basically all of my ABinocular code was commented out, because everything is connected together via the Blueprint editor.

This leads me to wonder: is this the best way to do this? I am taking a camera, rendering its view onto a texture, then displaying that texture as an image in a UMG widget. I want the UMG widget to move around the screen with the mouse when engaged, and the camera also needs to move with the mouse movement.

And then, once I get this working, I need to run it in nDisplay mode to see if the UMG widget will even display under nDisplay, whether it will sync properly across multiple nodes, etc. Of course, the nDisplay part is beyond the scope of this thread.

You mention that the texture needs a material so you can display it. Right now I have completely deleted all my materials from the Content Browser; I only have the ASceneCapture2D rendering to the render target, and I display that render target as an Image in a UMG widget. That might be how I am getting around the need for a material: the UMG Image draws the texture directly on the screen.

Before the Blueprint experiments, as you can see in my code post above, I was calling UMaterialInstanceDynamic::Create to create a dynamic material. That was what originally prompted this post, because the created instance didn’t contain the texture parameter I had set up in the material. I first obtained a constant material via Cast(pClass->GetDefaultObject()) and then created a dynamic material from it so I could modify it at runtime. But given everything you mentioned, I don’t think I even need to go that far. I can make everything assets and connect them in the editor, which I successfully got displaying on the screen.

What remains is coming up with a way to dynamically resize what I display on the screen via start-up parameters, and to move it across the screen via user interactions with the mouse.

I will keep working on this, and thank you for the input. My code is in a really bad state at the moment. Once I get it into better shape, I will show what I have so far to see what you think. The ABinocular class basically has no code in it right now (all commented out), yet I have something displaying on the screen. So I will just keep moving forward at the snail’s pace I am moving at.


yes. the size on the screen is just a relative size, and it’s totally arbitrary.
the resolution of the RT is the resolution at which the capture will capture. this impacts performance, as well as the size at which the captured rt will start to look blurry in the ui (like any other texture: if you show it at a bigger pixel size than its actual resolution, it will look blurry). not a problem, but something to keep in mind.

yes that also works. i got stuck on the material because you’ve mentioned it (i feel like an llm). i dismissed/forgot about widgets. you can use an rt for oh so many things.

yes that is correct.
i recommend you do that on the umg widget. and then add a function that you can call from the pawn or whatever.

well i have a personal problem with words like “best” and “should”.
best is relative to a goal (the is/ought fallacy and the orthogonality of intelligence).
umg is the “best” if your goal is to show that in screen space, probably without moving around much.
umg has a ton of features though. the one thing you can’t do is make the rt look in world.
you can also use in-world widgets to show widgets in the world, and you can also use the rt in a material wrapped on objects so that it can change its shape and do weird stuff,
or you can also use materials in the widget itself.
i think you might still want to keep one material and use that in the widget, because it will allow you to apply effects (which i can hardly imagine you won’t need): masks, fades, vignette, noise, animation, and distortion (like metroid, for example).
you don’t need to make a dynamic material for this, UNLESS you have some parameters that you animate.
this is common: i have a logo in my game, it’s a umg widget but it also has a material, and that material has parameters, which are animated through the animation sequence.

without knowing what you actually want to do, i think umg is the way to go.
i really don’t know what you’re going for, so i can’t tell for sure. (a screenshot might help)

that kinda confuses me. it can be a world widget, or a material on an object, or an on-screen widget (umg).
you can still change the position on the screen with the umg, but it will always be in screen coordinates.
you can change the camera anyway you want, via other blueprints.

yeah i suggest posting that separately.

that sounds about right.
just add some functions of your own to the widget.
and check the docs about creating umg that dynamically resize based on screen resolution (and dpi).

it’s good that you have a base class for ABinocular, even if empty, for the future (unless you think you won’t need it).

also i suggest sharing a screen of what you try to achieve, maybe a screenshot from other game, since it’s a bit ambiguous what you try to do. i can think like 3 to 5 ways to implement it.


I imagine that, without a screenshot, it’s hard to see what I am attempting to do from words alone.

But basically, the requirement is that the user (game player) can left-click the mouse, which brings up a zoomed-in PIP, and continued left clicks cycle between 2x/3x/4x/5x or whatever. Then, as the user moves the mouse around the screen, the PIP moves with it. It acts like binoculars for the current viewpoint and direction: basically zooming in on the pixels within the current viewport angle into the world.
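For reference on how those zoom factors relate to Field of View: an NxN zoom is usually modeled by shrinking the FOV, and the exact pinhole-camera relation is fovZoomed = 2·atan(tan(fovBase/2)/zoom). Dividing the base FOV linearly by the zoom (as my later code does with 90/45/30/…) is an approximation of this. A standalone sketch in plain C++, degrees in and out (the 90° base is just an example value):

```cpp
#include <cmath>

// Exact zoomed field of view for a pinhole camera:
//   fovZoomed = 2 * atan(tan(fovBase / 2) / zoom)
// fovBaseDeg in degrees; zoom = 1 means no zoom.
double ZoomedFovDegrees(double fovBaseDeg, double zoom)
{
    const double pi = std::acos(-1.0);
    double halfRad = std::atan(std::tan(fovBaseDeg * 0.5 * pi / 180.0) / zoom);
    return 2.0 * halfRad * 180.0 / pi;
}
```

For a 90° base FOV, a 2x zoom gives roughly 53.1°, noticeably different from the linear 45°, so the choice matters if the zoom factors need to be optically accurate.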

The real reason I am doing this is that our group used to use another commercial 3D engine whose company closed shop. After some research, Unreal Engine seemed like the logical replacement. The other engine (Vega Prime by Presagis) supported taking a loaded world (level), changing its viewport, and rendering across multiple networked computers with multiple monitors (this is what nDisplay gives us for Unreal, though nDisplay is very new). I am trying to mimic the Vega Prime functionality, which includes a zooming PIP that you can move across the screen (and across networked computers with multiple monitors), which I am hoping I can achieve with nDisplay.

Like I mentioned above, I will share more of what I am attempting to achieve when I get my code/project more cleaned up at the moment.

yes. i was expecting a screenshot instead of more words lol.
something like this? (it’s a gif, click play)
https://ostechnix.com/how-to-magnify-screen-areas-on-linux-desktop/

ah ok, i thought you were making a shooter game and trying to implement a sniper scope, for which an on-screen widget maybe won’t make sense.

Try the on-screen widget then, i think it will make sense. (what you call umg).

i’m not acquainted with vega prime, nor that specific networked setup. so the networked part might take a bit to do. i recommend start with something basic, and making a new post when you reach that issue. (maybe link to this thread).

one note on “zooming”: you can fake zooming by moving the capture camera forward.
if you do it via another method then render distance and pixelation might be an issue (like implementing zooming using a post process, for example; i don’t remember a camera having a zoom factor).
from imagining it, i think you might need to put extra effort into rotating the camera. you would not rotate the capture cam itself; rather, you’ll rotate the main camera, project a direction vector forward (ideally normalized), then put the capture cam at that vector multiplied by the distance, with a lookAtRotation along that main cam direction vector.
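the placement step i’m describing reduces to simple vector math. a standalone sketch with a minimal vector type (in-engine you’d use FVector and the engine’s normalize; names here are illustrative):

```cpp
#include <cmath>

// Minimal 3D vector for the sketch; stand-in for FVector.
struct Vec3 {
    double x, y, z;
};

Vec3 Normalize(const Vec3& v)
{
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Place the capture camera 'distance' units along the main camera's
// forward direction: pos = mainCamPos + normalize(forward) * distance.
Vec3 PlaceCaptureCam(const Vec3& mainCamPos, const Vec3& mainCamForward,
                     double distance)
{
    Vec3 dir = Normalize(mainCamForward);
    return {mainCamPos.x + dir.x * distance,
            mainCamPos.y + dir.y * distance,
            mainCamPos.z + dir.z * distance};
}
```

the lookAtRotation part would then just aim the capture cam along that same forward vector.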

try to do a proof of concept first. good luck :+1:

So, I have something working, and I am going to post some code and blueprint nodes to show what I have done so far. There is a lot of incorrect logic (zoom and FOV calculated wrong, some data possibly handled incorrectly, etc.) but I have something working.

My pawn class (which is just a base created class from creating a blank project for simulation) contains the following logic for BeginPlay, Left Mouse Button, and Mouse XY 2D-Axis events.

BeginPlay() gets the current HUD from the player controller, casts it to my C++ HUD class (TowerBaseHUD, see code below), sets the TowerBaseHUD variable in the blueprint, pulls the BinocularsWidget variable from TowerBaseHUD, casts it to BP_BinocularWidget to get the BinocularRenderTarget variable (which points to a UMG Image in BP_BinocularWidget), and hides it so the widget doesn’t render when play begins.

OnLeftMouseDown() grabs the BP_Binoculars object and calls ::ChangeZoomLevel (ABinoculars, see code below); if the zoom level is not 1 (meaning there is zoom), it sets the BinocularRenderTarget variable visible, otherwise it hides it.

OnMouseMoveXY2DAxis() is called each frame to report whether the mouse has moved since the previous frame (I believe), and I check whether both axes have a nonzero value. If so, I capture the viewport mouse position, the ABinoculars object, and the BinocularsWidget object, then call ABinoculars::ChangeCameraRotation and ATower3DPawn::UpdateBinocularWidget (see code below), updating the widget based on the passed FVector2D mouse position.

This is a simple AHUD class that assigns the HUD to the viewport, which is based upon the level at the moment (World Settings::GameMode tab for the level). Not much here.

This is my UMG widget. It is set to fill the screen. The Border is set to whatever pixel size I want the texture to occupy in the UI. The BinocularRenderTarget is set to match the Image (which is a render target asset in my content that BP_Binoculars renders to). So, while the render target might have enough pixels for whatever resolution I want to support, I can dynamically call ResizeTarget on it to render at a lower resolution. I can also resize the Border in the widget to make the output fill whatever size I need on the screen.

ABinoculars.h:

#pragma once

#include "CoreMinimal.h"
#include "Engine/SceneCapture2D.h"
#include "Tower3DGameInstance.h"
#include "Binoculars.generated.h"

class UObjectLibrary;
class UMaterialInstanceConstant;

/**
 * 
 */
UCLASS()
class TOWER3D_API ABinoculars : public ASceneCapture2D
{
	GENERATED_BODY()

	float currentHorizontalFOV;
	float currentViewportX;
	float currentVerticalFOV;
	float currentViewportY;
	float aspectRatio;
	FVector PerZoomLevelFOV;

	void CalculateBinocularsData();
	FVector2D GetNDisplayViewportSize() const;
	void GenerateNDisplayCamera();
	FVector2D GetNormalViewportSize() const;
	void GenerateNormalCamera();

	void OnViewportResized(FViewport* Viewport, uint32 Unused);
	void OnViewportToggleFullscreen(bool IsFullScreen);

	TObjectPtr<USceneCaptureComponent2D> m_SceneCaptureComponent2D;

public:
	ABinoculars();

	UPROPERTY(BlueprintReadOnly)
	int ZoomLevel;

	UPROPERTY(BlueprintReadWrite)
	TObjectPtr<UUserWidget> BinocularUI;
	
	UPROPERTY(BlueprintReadOnly)
	TObjectPtr<UTower3DGameInstance> GameInstance;

	UFUNCTION(BlueprintCallable)
	void ChangeZoomLevel();

	UFUNCTION(BlueprintCallable)
	void ChangeCameraRotation(FVector2D MouseViewportPosition);

	virtual void BeginPlay() override;
};

ABinoculars.cpp:

#include "Binoculars.h"
#include "TowerBaseHUD.h"
#include "Engine.h"

ABinoculars::ABinoculars()
    : currentHorizontalFOV(0.0)
    , currentViewportX(0.0)
    , currentVerticalFOV(0.0)
    , currentViewportY(0.0)
    , aspectRatio(0.0)
    , ZoomLevel(0)
{
}

void ABinoculars::CalculateBinocularsData()
{
    if(!BinocularUI)
    {
        m_SceneCaptureComponent2D = GetCaptureComponent2D();
        APlayerController* playerController = GetWorld()->GetFirstPlayerController();
        if(playerController)
        {
            ATowerBaseHUD* playerHUD = playerController->GetHUD<ATowerBaseHUD>();
            BinocularUI = playerHUD->BinocularWidget;
            currentHorizontalFOV = playerController->PlayerCameraManager->GetFOVAngle();
            // the Unreal Engine defaults to a static horizontal FOV, thus the aspect ratio can change but the horizontal FOV will be maintained
            // this will make the vertical FOV change as the aspect ratio changes
            currentVerticalFOV = currentHorizontalFOV / aspectRatio;
        }
    }
}

FVector2D ABinoculars::GetNDisplayViewportSize() const
{
    UGameUserSettings* pGameUserSettings = UGameUserSettings::GetGameUserSettings();
    return FVector2D(pGameUserSettings->GetScreenResolution().X, pGameUserSettings->GetScreenResolution().Y);
}

void ABinoculars::GenerateNDisplayCamera()
{
    FVector2D viewport(GetNDisplayViewportSize());
    currentViewportX = viewport.X;
    currentViewportY = viewport.Y;
    aspectRatio = viewport.X / viewport.Y;
}

FVector2D ABinoculars::GetNormalViewportSize() const
{
    FVector2D vecViewport;
    GetWorld()->GetGameViewport()->GetViewportSize(vecViewport);
    return vecViewport;
}

void ABinoculars::GenerateNormalCamera()
{
    FVector2D viewport(GetNormalViewportSize());
    currentViewportX = viewport.X;
    currentViewportY = viewport.Y;
    aspectRatio = viewport.X / viewport.Y;
}

void ABinoculars::OnViewportResized(FViewport* Viewport, uint32 Unused)
{
    if(GameInstance->bIsNDisplay)
    {
        GenerateNDisplayCamera();
    }
    else
    {
        GenerateNormalCamera();
    }

    CalculateBinocularsData();
}

void ABinoculars::OnViewportToggleFullscreen(bool IsFullScreen)
{
    if(GameInstance->bIsNDisplay)
    {
        GenerateNDisplayCamera();
    }
    else
    {
        GenerateNormalCamera();
    }

    CalculateBinocularsData();
}

void ABinoculars::ChangeZoomLevel()
{
    // swap between the zoom levels and change the field of view to match the zoom level
    switch(ZoomLevel)
    {
        case 1:
        {
            if(m_SceneCaptureComponent2D)
            {
                m_SceneCaptureComponent2D->FOVAngle = 45.0;
            }

            ZoomLevel = 2;
            break;
        }

        case 2:
        {
            if(m_SceneCaptureComponent2D)
            {
                m_SceneCaptureComponent2D->FOVAngle = 30.0;
            }

            ZoomLevel = 3;
            break;
        }

        case 3:
        {
            if(m_SceneCaptureComponent2D)
            {
                m_SceneCaptureComponent2D->FOVAngle = 22.5;
            }

            ZoomLevel = 4;
            break;
        }

        case 4:
        {
            if(m_SceneCaptureComponent2D)
            {
                m_SceneCaptureComponent2D->FOVAngle = 18.0;
            }

            ZoomLevel = 5;
            break;
        }

        case 5:
        {
            if(m_SceneCaptureComponent2D)
            {
                m_SceneCaptureComponent2D->FOVAngle = 90.0;
            }

            ZoomLevel = 1;
            break;
        }

        default:
        {
            if(m_SceneCaptureComponent2D)
            {
                m_SceneCaptureComponent2D->FOVAngle = 90.0;
            }

            ZoomLevel = 1;
            break;
        }
    }
}

void ABinoculars::ChangeCameraRotation(FVector2D MouseViewportPosition)
{
    FRotator cameraRotation = GetActorRotation();
    cameraRotation.Yaw = (MouseViewportPosition.X / currentViewportX) * currentHorizontalFOV;
    cameraRotation.Pitch = (MouseViewportPosition.Y / currentViewportY) * currentVerticalFOV;
    SetActorRotation(cameraRotation);
}

void ABinoculars::BeginPlay()
{
    Super::BeginPlay();
    GameInstance = Cast<UTower3DGameInstance>(GetGameInstance());
    GEngine->GameViewport->OnToggleFullscreen().AddUObject(this, &ABinoculars::OnViewportToggleFullscreen);
    GEngine->GameViewport->Viewport->ViewportResizedEvent.AddUObject(this, &ABinoculars::OnViewportResized);
    ChangeZoomLevel();
}
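One of the pieces I flagged above as calculated wrong is the vertical FOV: dividing the horizontal FOV by the aspect ratio (as CalculateBinocularsData() does) is only an approximation, since FOV angles don’t scale linearly. The exact relation is vFov = 2·atan(tan(hFov/2)/aspect). A standalone check in plain C++ (degrees in and out):

```cpp
#include <cmath>

// Exact horizontal-to-vertical FOV conversion for a pinhole camera:
//   vFov = 2 * atan(tan(hFov / 2) / aspectRatio)
// The linear division used in CalculateBinocularsData() above drifts
// noticeably at wide angles.
double VerticalFovDegrees(double horizontalFovDeg, double aspectRatio)
{
    const double pi = std::acos(-1.0);
    double halfRad = horizontalFovDeg * 0.5 * pi / 180.0;
    return 2.0 * std::atan(std::tan(halfRad) / aspectRatio) * 180.0 / pi;
}
```

For a 90° horizontal FOV at 16:9, this gives about 58.7° vertical, versus 50.6° from the linear division, so it’s worth fixing before the mouse-to-rotation mapping is tuned.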

Compared to my initial post, there is a lot less code, but it’s much cleaner and the ideas are more consistent with what I am attempting to do. It also places a lot of gameplay logic in blueprint nodes, with finer control inside the class’s functions. I am not certain this is a good idea, and part of me feels I could implement ALL the logic in C++, but I don’t want to fight the editor and the code if this gets me what I want. The class compiles and works but is incomplete, as it doesn’t handle various states correctly; it’s a starting point.

TowerBaseHUD.h:

#pragma once

#include "CoreMinimal.h"
#include "GameFramework/HUD.h"
#include "Blueprint/UserWidget.h"
#include "TowerBaseHUD.generated.h"

/**
 * 
 */
UCLASS()
class TOWER3D_API ATowerBaseHUD : public AHUD
{
	GENERATED_BODY()
	
public:
	UPROPERTY(BlueprintReadWrite)
	TObjectPtr<UUserWidget> BinocularWidget;
};

This class is very small, as it is only utilized to expose the binocular widget to blueprints so the logic can be connected via other blueprints (namely the pawn class).

Tower3DPawn.h:

#pragma once

#include "CoreMinimal.h"
#include "GameFramework/DefaultPawn.h"
#include "Blueprint/UserWidget.h"
#include "Tower3DPawn.generated.h"

/**
 * 
 */
UCLASS()
class TOWER3D_API ATower3DPawn : public ADefaultPawn
{
	GENERATED_BODY()

	public:
		UFUNCTION(BlueprintCallable)
		void UpdateBinocularWidget(UUserWidget* BinocularWidget, FVector2D MouseViewportPosition);
};

Tower3DPawn.cpp:

#include "Tower3DPawn.h"
#include "TowerBaseHUD.h"
#include "Blueprint/UserWidget.h"

void ATower3DPawn::UpdateBinocularWidget(UUserWidget* BinocularWidget, FVector2D MouseViewportPosition)
{
    if(BinocularWidget)
    {
        BinocularWidget->SetPositionInViewport(MouseViewportPosition);
    }
}

While small, this function took me forever to find. I spent many hours and lots of trial code before condensing it down to a single call that sets the widget to the mouse position. This is all it took.

Basically, take the picture you posted above, but when you move the mouse, not only does the PIP window move with the mouse, the place it is zooming in the world moves too. Think of literally placing a magnifying glass on your monitor, except you can actually peer into the world as it zooms.

But, thanks for all the input you had given me to get to this point. I am kind of worried how this will interact with nDisplay, but, again, that is beyond the scope of this thread.

Thank you very kindly, nande.


awesome work man! congratulations!
just a note


consider having variables holding references to the binoculars and the widget in their final types (classes), and cast once, to avoid having to cast every frame the mouse moves.

btw you can optimize that by using a static array


constexpr float ZoomVals[] = {45, 30, 25, ...};
static const int32 ZoomNum = std::size(ZoomVals);  // https://stackoverflow.com/a/51761695/260242 maybe you can make it constexpr
if (ZoomLevel < 0 || ZoomLevel >= ZoomNum) ZoomLevel = 0;
m_SceneCaptureComponent2D->FOVAngle = ZoomVals[ZoomLevel];
ZoomLevel = (ZoomLevel + 1) % ZoomNum;
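a self-contained version of that table-cycling idea, testable outside the engine (the FOV values just mirror the ones in ChangeZoomLevel above; the function name is mine):

```cpp
#include <iterator>

// FOV per zoom step, mirroring the values in ChangeZoomLevel():
// index 0 is the unzoomed view, then the 2x/3x/4x/5x steps.
constexpr float kZoomFovs[] = {90.0f, 45.0f, 30.0f, 22.5f, 18.0f};
constexpr int kZoomCount = static_cast<int>(std::size(kZoomFovs));

// Advance to the next zoom level and return the FOV to apply,
// wrapping back to the unzoomed view after the last step.
float NextZoomFov(int& zoomLevel)
{
    zoomLevel = (zoomLevel + 1) % kZoomCount;
    return kZoomFovs[zoomLevel];
}
```

the returned value is what you’d assign to m_SceneCaptureComponent2D->FOVAngle, replacing the whole switch.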

great find!

yes, it’s noticeably cleaner. good work. i wouldn’t worry too much about that; that’s the nature of programming.

i understand. i’ve seen that implemented in linux/compiz way long time ago. so it makes sense to me.

one step at a time. i’d advise to keep doing proofs of concept first, instead of polishing the code much, to remove the big unknowns and risks. once you are confident enough, or at least have an idea of the major constraints, you can redesign your solution and consolidate it.

you’re very kind. i appreciate it :slight_smile: you’re welcome. you’ve done great work! all the best!
