Using Slate to draw a Grid Pattern to Screen - Any tips for faster line rendering?

I’m wondering whether it would be possible to improve the speed of line drawing in the Engine. I’m hitting a hard wall with my current implementation, but I see no reason why (with a better implementation) I couldn’t draw ten times as many lines as I currently am before being GPU bound or something.

So here’s what I’m trying to create, and I’ve got fairly far with it. It’s a local-area radar that essentially scans its surrounding environment and draws blips for objects and a wireframe of the terrain underneath. It also rotates to always point north (like a compass) relative to the player’s rotation. I do this by using the DrawLines() functionality of UMG/Slate – which draws a bunch of elements to the screen that connect together. However, it seems incredibly slow to do so.

Goal:
483f445d22fcd91f95c154be9d5f33b0fb2f4b57.jpeg

Current in Unreal:
82d71485fa7180c326f79a47d1a3f224ca91d77e.jpeg


A little background on how I’m doing it – I break a big volume brush around the level into a sparse grid, and do a series of line-traces from the heavens to the ground to work out the location at each grid point (really, I only need the altitude, since I know the column and row of each point and the grid density, but I’ll get to that later) – and then store that in a 1D array of FVectors in the GameInstance (a 1D array is faster to access, and UE doesn’t support nested containers anyway). This is only done once on level load, so there’s no runtime overhead here.
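To make the layout concrete, here’s a plain-C++ sketch of that flattened buffer, outside the engine (names are hypothetical, not the actual project code). Since the column/row spacing is known, only the altitude really needs storing:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of the flattened altitude buffer described above.
// Column-major flattening: all rows of column 0, then column 1, and so on.
inline int GridIndex(int Column, int Row, int NumRows)
{
    return Column * NumRows + Row;
}

// Store only the altitude; X/Y are implied by column/row and the grid spacing.
struct FAltitudeGrid
{
    int NumColumns = 0;
    int NumRows = 0;
    std::vector<float> Altitudes; // NumColumns * NumRows entries

    float GetAltitude(int Column, int Row) const
    {
        return Altitudes[GridIndex(Column, Row, NumRows)];
    }
};
```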

I then go column-by-column then row-by-row, transform each point to the correct position on the screen (via a rotated FLookAtMatrix and FOrthoMatrix) and submit an array of 2D points to UMG/Slate’s DrawLines() function in OnPaint(), which actually creates the visual lines as part of a UserWidget. All the transforming is done in code, and eventually the OnPaint method will be as well, but this is it in Blueprint at the moment:

49476317f51021defdf82ca69648d4f374172c05.jpeg

The code which projects the points into screen space is pretty **** fast (< 0.3ms for 10,000 points), but the bottleneck appears to be not only the multiple calls (Columns + Rows) to DrawLines() per frame, but also the actual drawing of the lines themselves.

I did a few tests, and submitting a single batch of 65,536 points to DrawLines() takes Slate about 9ms to draw, which still seems extremely high considering it’s only drawing lines. I feel like this could be a lot faster, or maybe there’s a better system for doing it?

My current method does a call to DrawLines() for each column and row, so for a 100x100 grid with 10,000 vertices, I’m doing 200 calls to DrawLines() per frame and Slate takes a full 30ms to draw on its own. 10,000 vertices isn’t a lot, and I want to be able to draw them much faster. I need a way to submit all the lines in one batch by creating my own version of DrawLines() – one which still maintains the grid shape with no connecting lines at the start/end of each column/row. Anybody know how I could do that?
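One way to express a whole grid as a single batch (outside of Slate’s connected-strip MakeLines) is a GL_LINES-style list of independent segments: two indices per segment, so nothing joins the end of one column to the start of the next. A minimal sketch under that assumption, with hypothetical names:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Build index pairs (GL_LINES style) for a grid of Columns x Rows points,
// flattened column-major (index = c * Rows + r). Each pair is one segment,
// so no segment connects the end of one column/row to the start of the next.
std::vector<std::pair<int, int>> BuildGridSegments(int Columns, int Rows)
{
    std::vector<std::pair<int, int>> Segments;
    Segments.reserve(Columns * (Rows - 1) + Rows * (Columns - 1));

    // Vertical segments within each column
    for (int c = 0; c < Columns; ++c)
        for (int r = 0; r + 1 < Rows; ++r)
            Segments.emplace_back(c * Rows + r, c * Rows + r + 1);

    // Horizontal segments within each row
    for (int r = 0; r < Rows; ++r)
        for (int c = 0; c + 1 < Columns; ++c)
            Segments.emplace_back(c * Rows + r, (c + 1) * Rows + r);

    return Segments;
}
```

A nice property is that for a fixed grid this index list never changes, so it could be built once at startup; only the projected 2D points would need updating per frame.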

Another huge performance boost I can gain is by only submitting the lines that I need to draw. Right now I’m submitting the entire array, but I actually want to draw a circle of points around the player (up to Range X) – so is there a way I can extract the points from the array in a circular pattern and still draw them correctly in a grid pattern? This should result in far fewer lines being unnecessarily drawn before being clipped by the widget’s screen size bounds anyway.
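One way to pull a circular subset out of a grid while keeping it drawable as grid strips is to compute, for each column in range, the row span that falls inside the circle. A sketch of just that math (hypothetical names, engine-independent):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// For each column within range of the centre, compute the [MinRow, MaxRow]
// span that falls inside the circle. Walking those spans keeps the points in
// column order, so they can still be drawn as short grid strips.
struct FRowSpan { int Column; int MinRow; int MaxRow; };

std::vector<FRowSpan> CircleRowSpans(float CenterCol, float CenterRow, float RangeInCells,
                                     int NumColumns, int NumRows)
{
    std::vector<FRowSpan> Spans;
    const int MinCol = std::max(0, (int)std::ceil(CenterCol - RangeInCells));
    const int MaxCol = std::min(NumColumns - 1, (int)std::floor(CenterCol + RangeInCells));
    for (int c = MinCol; c <= MaxCol; ++c)
    {
        const float dc = c - CenterCol;
        // Half the chord length of the circle at this column
        const float HalfChord = std::sqrt(RangeInCells * RangeInCells - dc * dc);
        const int MinRow = std::max(0, (int)std::ceil(CenterRow - HalfChord));
        const int MaxRow = std::min(NumRows - 1, (int)std::floor(CenterRow + HalfChord));
        if (MinRow <= MaxRow)
            Spans.push_back({ c, MinRow, MaxRow });
    }
    return Spans;
}
```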

Perhaps doing this with Slate Elements isn’t the way to even go about this. Is there a preferable solution?

Looks like a draw call issue to me.

I think it would be way more efficient to use a subdivided plane with a wireframe material and then drive the height via vertex information (WorldPositionOffset). You could use a section of the landscape heightmap to drive the height of the wireframe.
Also it seems that you draw all the lines every frame, using the plane has other optimization options, like editing the height information on a timer, instead of every frame.

My instinct is this:

You would probably get a lot better performance if, at startup when you calculate all those points, you just create a procedural mesh which basically mimics the terrain, i.e. a big tessellated quad with the vertices all set to the heights you calculate. Then, just attach that mesh to the camera and assign a material that has a grid pattern with depth test off, or use a scene capture actor to render it to a texture if you really want to use it inside of UMG. Depending on the size of the terrain and/or the density of the height samples you take, you may want to have the procedural mesh represent a local area around the player and modify the vertex heights as you go.
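The "big tessellated quad with the vertices set to the sampled heights" idea boils down to something like this (engine-independent sketch with a hypothetical FVec3; in-engine the positions would feed a procedural mesh component):

```cpp
#include <cassert>
#include <vector>

struct FVec3 { float X, Y, Z; };

// Build the vertex positions for a tessellated quad whose Z values come from
// altitude samples gathered at startup (column-major, index = c * Rows + r).
// Triangles would then index this grid in the usual two-per-cell pattern.
std::vector<FVec3> BuildTerrainVertices(const std::vector<float>& Altitudes,
                                        int Columns, int Rows, float CellSize)
{
    std::vector<FVec3> Verts;
    Verts.reserve(Columns * Rows);
    for (int c = 0; c < Columns; ++c)
        for (int r = 0; r < Rows; ++r)
            Verts.push_back({ c * CellSize, r * CellSize, Altitudes[c * Rows + r] });
    return Verts;
}
```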

This is something I wish Unreal had better support for, by the way. From a workflow point of view, what you’d LIKE to be able to do is just create a scene capture actor and set it to render only the terrain with a custom material applied to it (one that shows a grid texture). Then you could attach that scene capture actor to your character with some Z offset looking down at the terrain and you’d get exactly what you want. The support for rendering subsets of actors into a texture with material swaps applied is really lacking IMO. Even if lighting weren’t supported when using scene captures in this way, it would be infinitely useful. I would also add that this is something more easily done in competing engines.

That’s actually much harder. Firstly, Slate can’t draw 3D meshes at all yet (especially with materials) – and the wireframe would then be triangles instead of quads. The mesh would have to be loaded somewhere and I’d have to transform its vertices and render it over the top of everything else. Additionally, accessing the actual height map for Landscapes at runtime is seemingly impossible (it definitely is outside of the editor), and aligning the mesh’s offset to the altitude/rotation of the craft would be tricky too. Either way, drawing meshes in the UI = impossible.

Draw calls do seem to be the primary factor, but currently I can’t think of a nice way to reduce it to a single call. The actual overhead of drawing the lines isn’t cheap either though – even reducing the calls to one and drawing around 60K lines still took around 9ms (GTX 980 and i7-3820K), and I feel like I can get to something much lower-level that will be able to do this in less than a millisecond. Hell, wireframe rendering mode in any UT map pushes way more than 60K verts and draws a lot faster than 9ms.

The overhead of a mesh would be much higher – I’d have normals, UVs, etc. All I want to draw is a wireframe grid, which can definitely be done faster than drawing full polygons with materials. I mentioned it above too, but I also still can’t draw that to the UI without a full camera and support for meshes & materials in Slate.

Worth noting btw, there is no ‘Camera’ as such drawing these points; they’re just transformed from world space to radar space, then orthographically to screen space, and finally to the size of the widget. For 65K points this takes around 0.25ms, so it’s insanely fast, and it’s only around 20 lines of code. Calculating the transforms for each point is pretty quick – and once the array of points is culled down to the 100–200 points it will be drawing most of the time, it’ll be around 0.01ms tops.
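Reduced to scalar math, that chain looks roughly like this (a minimal top-down sketch with hypothetical names; the real code goes through FLookAtMatrix / FOrthoMatrix and includes the radar tilt):

```cpp
#include <cassert>
#include <cmath>

struct FVec2 { float X, Y; };

// World offset from the player -> rotated into radar space (so the widget
// stays pointing "north") -> orthographic divide by range -> widget pixels.
FVec2 WorldToWidget(float WorldX, float WorldY,
                    float PlayerX, float PlayerY, float PlayerYawRad,
                    float Range, float WidgetW, float WidgetH)
{
    const float Dx = WorldX - PlayerX;
    const float Dy = WorldY - PlayerY;

    // Rotate by -yaw so the radar stays aligned to north
    const float C = std::cos(-PlayerYawRad), S = std::sin(-PlayerYawRad);
    const float Rx = Dx * C - Dy * S;
    const float Ry = Dx * S + Dy * C;

    // Orthographic: [-Range, Range] -> [-1, 1] -> widget pixels
    const float Nx = Rx / Range;
    const float Ny = Ry / Range;
    return { WidgetW * (Nx + 1.f) * 0.5f, WidgetH * (Ny + 1.f) * 0.5f };
}
```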

I definitely want to avoid the scene capture route. I still find that method of drawing meshes to the UI a horrific hack… and it has a ridiculous overhead of its own. I’m aiming for less than 1ms to do this entire step. It has to be fast, because drawing the grid won’t be the only thing it does in the end and it’s only a tiny slice of a very big pie.

Well, why not put it into world space in front of the camera? It is a lot easier to use 3D UI in world space. The Division uses a full 3D UI and it looks awesome.

You could just use a masked/translucent quad texture and adjust tiling accordingly.

Yes, I agree. I mean it’s not impossible but definitely hard.
An easy workaround could be to use your already-implemented trace system and render these points into a height texture with some falloff blending/lerping. You can do this entirely on the CPU or GPU and then reduce the whole thing to a few draw calls.
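The falloff splat could look something like this on the CPU (a rough sketch with hypothetical names; a GPU version would do the same per pixel in a material or compute shader):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Splat one height sample into a small float "texture" with linear falloff,
// keeping the max of overlapping contributions.
void SplatHeight(std::vector<float>& Tex, int W, int H,
                 int Cx, int Cy, int RadiusPx, float Height)
{
    for (int y = std::max(0, Cy - RadiusPx); y <= std::min(H - 1, Cy + RadiusPx); ++y)
        for (int x = std::max(0, Cx - RadiusPx); x <= std::min(W - 1, Cx + RadiusPx); ++x)
        {
            const float Dist = std::sqrt(float((x - Cx) * (x - Cx) + (y - Cy) * (y - Cy)));
            const float Weight = std::max(0.f, 1.f - Dist / RadiusPx);
            Tex[y * W + x] = std::max(Tex[y * W + x], Height * Weight);
        }
}
```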

Placing the mesh on the camera is all well and good until I get too close to something and it clips right through (which will definitely happen with the cockpit and FP meshes) – and if it’s too close it’ll get caught by the clipping plane. Again though, while these are all good suggestions, none of them will be faster than a wireframe. It’s just that doing it with loads of Slate elements like this is not such a great way to do so ;P.

Hopefully Mr Darnell or Mr Noland will have some solid suggestions for performance being the wizards of Slate and RHI/draw stuff respectively.

I agree with TheJamsh, there must be a way of drawing lines in camera-space that is just as simple as drawing OpenGL/DirectX lines.

Drawing lines is a common feature of any graphics library and UE should have an easy way to do it together with a convenience method to do it in screen space, on a custom depth and hence away from clipping problems and ignoring post-process like AA, etc.

I’ve a method implemented with procedural meshes which draws on a custom depth with an unlit overlay material but it’s not fast enough since I can’t batch vertex/triangles data in my case.


This is the material I’m using for procedural meshes lines.

When I think about this functionality, Vectrosity for Unity comes to mind… Starscene Software - Unity Utilities - Vectrosity

I think you are confusing wireframe with line drawing. In fact, GPU wireframes are just meshes with vertex buffer information that have a different render option set in DirectX or OpenGL. The reason that wireframes are so fast is that the meshes already live on the GPU and only have to be rasterized to the output image. That is what the GPU does best.

I think the largest problem is that your lines live on both sides, CPU and GPU. Sending information across slows stuff down tremendously.
A mesh is the only way to solve that problem, imho.

Well, the clipping issue on the other hand - no clue about that :confused:

OpenGL/DirectX have dedicated operations for drawing lines, I’ve used those methods to draw millions of lines on mobile devices with incredible performance even on low-end devices.

https://www.opengl.org/sdk/docs/man3/xhtml/glDrawArrays.xml
GL_LINE_STRIP and so on…

Ok, I’ve dug a bit into the Slate implementation. Epic is actually rendering with GL_LINES if you disable antialiasing on DrawLines(); the magic is in ElementBatcher.cpp, method AddLineElement().

Well, what you just posted is exactly what I meant, just using vertex arrays instead of a buffer. A vertex array that is drawn as individual lines. A mesh is just a vertex buffer that is drawn as polygons.
The key is that all the drawing is batched. You only have to send stuff once per frame, hence you only get one draw call.

TheJamsh’s solution sends every single line individually and therefore, you hit the draw call cap.

Edit:

I haven’t checked that yet, but if you are able to batch the line rendering this way, then this should do the trick.

For example, the OpenGL code for this is clear, in SlateOpenGLRenderingPolicy.cpp:


void FSlateOpenGLRenderingPolicy::DrawElements( const FMatrix& ViewProjectionMatrix, const TArray<FSlateRenderBatch>& RenderBatches )

there, glDrawRangeElements() is called with GL_LINES.

I think that’s the fastest implementation for this kind of thing. And 9ms for 30K unbatched lines on such specs is too high anyway.

And to Epic devs…a quad-based line drawing implementation would be nice, so we can draw thick lines too.

I’m chatting with some guys on the Slack group atm. Cancel is suggesting I go much lower and use RHI directly within slate. One of the big costs seems to be that Slate dynamically manipulates buffers, and it copies the info I give it for drawing lines and stuff. Providing my information directly to the engine should see a huge boost.

However… I’ve got no idea how to do that yet. I feel like I’m entering the realms of “there be dragons”.

“We need to go deeper…”

I shall of course, post my findings :slight_smile:

Hi TheJamsh,

As others have suggested, doing it as a mesh is going to result in a much faster render than a bunch of individual line draw calls, and you’ll be able to do fancier materials than you’d be able to do with Slate line draws (think of a periodic ‘tracking’ pulse, etc…). However, I’d first suggest moving it into C++ and seeing where that gets you. Doing ~50x50 lines per frame in Blueprints is going to burn a huge amount of time just in the VM, which, if it’s part of a UMG widget, will still show up as ‘Slate’ time. (There are lots of other little wins you can get in C++ too, like not emptying the array but just resetting it, to avoid extra memory allocations and resize copies as the array grows.)

Cheers,
Michael Noland

Hey Michael!

Thanks! I spoke to Nick on Slack and he’s suggested disabling AA and also moving it to code, so I’m going to do that now. DrawLines() creates a new context layer each time you call it, which also adds a lot of overhead.

In terms of drawing the mesh, how could I go about it? I guess I could create the mesh on game startup and cull vertices of it, but I’d have to rebuild the mesh each frame and there’s still the issue of being able to draw it inside a widget.

RE: culling the mesh, I’d probably create a set of tiles, so you only have to draw 4 or 9 of them in any given frame, and do the rest of the culling with a vertex distance check to get the radius sort of effect.

Cheers,
Michael Noland
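Michael’s tile suggestion above amounts to selecting the tiles whose bounds overlap the radar circle, something like this sketch (engine-independent, hypothetical names, just the tile-selection math):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct FTileCoord { int X, Y; };

// Given a world made of square tiles, return the tile coordinates whose
// bounding box intersects the radar circle's bounding box. With
// TileSize >= Range this yields at most 4 tiles; smaller tiles yield 9+.
std::vector<FTileCoord> TilesInRange(float PlayerX, float PlayerY,
                                     float Range, float TileSize)
{
    std::vector<FTileCoord> Tiles;
    const int MinX = (int)std::floor((PlayerX - Range) / TileSize);
    const int MaxX = (int)std::floor((PlayerX + Range) / TileSize);
    const int MinY = (int)std::floor((PlayerY - Range) / TileSize);
    const int MaxY = (int)std::floor((PlayerY + Range) / TileSize);
    for (int y = MinY; y <= MaxY; ++y)
        for (int x = MinX; x <= MaxX; ++x)
            Tiles.push_back({ x, y });
    return Tiles;
}
```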

Okay so, update time. Here’s the latest iteration. I’m still following the line route with Slate because going down the mesh route is going to be a very hard process. This is still without any culling for the radar ‘range’, so every point is still submitted to be drawn – in this test case there’s a total of 4,225 points in the level, with 65 rows and 65 columns, so still not a lot of points in the grand scheme of things.

e50d7fea1f3f33bd527e86a77b1a0837683e62ff.jpeg

Here’s the code now translated into C++. There are a few issues; for one thing NativePaint() is const, so I can’t create a member variable TArray<FVector2D> and just change the values (this just in: apparently I can make a ‘mutable’ member variable which will allow the changes) – so atm it creates and allocates the array many times per frame. So, changes so far:

  • Moved Paint function to C++
  • Changed all Get functions for the Vertex Buffer to be FORCEINLINE
  • Created a new inline version of DrawLines(), which doesn’t call InContext.MaxLayer++ each time it’s called
  • Turned off Anti-Aliasing

I tried to go into FSlateDrawElement::MakeLines() to actually copy the code that’s there to prevent function call overhead, but unfortunately ‘Init’ is a private member of that class for some reason (annoying) – so I cannae do that. Definitely room for speed improvements here.

BZGame_TopoRadar.h



UCLASS()
class BZGAME_API UBZGame_TopoRadar : public UBZGame_BaseWidget
{
	GENERATED_BODY()
	
public:
	UBZGame_TopoRadar(const FObjectInitializer& ObjectInitializer);

	virtual void NativeConstruct() override;
	virtual void NativePaint(FPaintContext& InContext) const override;

protected:

	int32 NumRows, NumColumns;

// Can't modify because OnPaint() is const :(
// 	TArray<FVector2D> DrawPoints;
// 
// 	FORCEINLINE void AddItemToPoints(const FVector2D& Point)
// 	{
// 		DrawPoints.Add(Point);
// 	}

	FORCEINLINE void Fast_DrawGrid(FPaintContext& InContext, const TArray<FVector2D>& InPoints) const
	{
		// Ideally want to be able to do whatever this does right here, but lots of FSlateDrawElement is private -.-
		FSlateDrawElement::MakeLines(
			InContext.OutDrawElements,
			InContext.MaxLayer,
			InContext.AllottedGeometry.ToPaintGeometry(),
			InPoints,
			InContext.MyClippingRect,
			ESlateDrawEffect::None,
			FLinearColor::Blue,
			false);
	}

// We want to do this, but Init() is private :(
// 		FPaintGeometry PaintGeo = InContext.AllottedGeometry.ToPaintGeometry();
// 		PaintGeo.CommitTransformsIfUsingLegacyConstructor();
// 		FSlateDrawElement& DrawElt = InContext.OutDrawElements.AddUninitialized();
// 		DrawElt.Init(0, PaintGeo, InContext.MyClippingRect, ESlateDrawEffect::None);
// 		DrawElt.ElementType = EElementType::ET_Line;
// 		DrawElt.DataPayload.SetLinesPayloadProperties(DrawPoints, FLinearColor::Blue, false, ESlateLineJoinType::Sharp);
	//}
};


BZGame_TopoRadar.cpp



DECLARE_STATS_GROUP(TEXT("Radar"), STATGROUP_Radar, STATCAT_Advanced);
DECLARE_CYCLE_STAT(TEXT("BZ ~ Draw Radar Grid"), STAT_DrawRadar, STATGROUP_Radar);

void UBZGame_TopoRadar::NativeConstruct()
{
	Super::NativeConstruct();

	if (OwningBZHud) { OwningBZHud->SetRadarWidget(this); }

	// Cached Rows/Columns
	// Get the Game Instance. Again, can probably be cached or at least inlined
	UBZGame_GameInstance* BZGI = UBZGame_GameInstance::GetInstance(this);
	ASSERTV(BZGI != nullptr, TEXT("BZGame Instance Is Nullptr"));

	BZGI->GetRadarGridSize(NumRows, NumColumns);
}

void UBZGame_TopoRadar::NativePaint(FPaintContext& InContext) const
{
	Super::NativePaint(InContext);

	SCOPE_CYCLE_COUNTER(STAT_DrawRadar);

	// Get the Game Instance. Again, can probably be cached or at least inlined
	UBZGame_GameInstance* BZGI = UBZGame_GameInstance::GetInstance(GetWorld());
	ASSERTV(BZGI != nullptr, TEXT("BZGame Instance Is Nullptr"));

	TArray<FVector2D> DPArray;

	// Just submit all the points for now. We need to do some 'clipping' here to only get points in radar-range of the player.
	for (int32 CIdx = 0; CIdx < NumColumns; CIdx++)
	{
		for (int32 RIdx = 0; RIdx < NumRows; RIdx++)
		{
			const FVector WorldPos = BZGI->Fast_GetRadarPosAtGrid(CIdx, RIdx);
			const FVector2D ScreenP = ABZGame_InGameHUD::WorldToTopographicalRadar(WorldPos);

			DPArray.Add(ScreenP);
		}

		Fast_DrawGrid(InContext, DPArray);
		DPArray.Empty();
	}

	for (int32 RIdx = 0; RIdx < NumRows; RIdx++)
	{
		for (int32 CIdx = 0; CIdx < NumColumns; CIdx++)
		{
			const FVector WorldPos = BZGI->Fast_GetRadarPosAtGrid(CIdx, RIdx);
			FVector2D ScreenP = ABZGame_InGameHUD::WorldToTopographicalRadar(WorldPos);

			DPArray.Add(ScreenP);
		}

		Fast_DrawGrid(InContext, DPArray);
		DPArray.Empty();
	}
}


Another thing I’m doing is getting and transforming all of the points twice… If I can figure out how to draw the grid without that, it’ll save some performance too and cut the transform cost in half.
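One way to halve that cost is to project each grid point exactly once into a scratch buffer, then assemble both the column strips and the row strips by indexing the buffer. A sketch of the idea (engine-independent, hypothetical names; Transform stands in for the real projection):

```cpp
#include <cassert>
#include <vector>

struct FPoint2D { float X, Y; };

// Project every grid point once into Cache (column-major, index = c*Rows+r),
// then build the column and row strips from the cache instead of
// re-transforming each point for the second pass.
template <typename TransformFn>
void BuildStrips(int Columns, int Rows, TransformFn Transform,
                 std::vector<FPoint2D>& Cache,
                 std::vector<std::vector<FPoint2D>>& Strips)
{
    Cache.resize(Columns * Rows);
    for (int c = 0; c < Columns; ++c)
        for (int r = 0; r < Rows; ++r)
            Cache[c * Rows + r] = Transform(c, r); // one transform per point

    Strips.clear();
    for (int c = 0; c < Columns; ++c) // column strips
    {
        std::vector<FPoint2D> Strip;
        for (int r = 0; r < Rows; ++r)
            Strip.push_back(Cache[c * Rows + r]);
        Strips.push_back(std::move(Strip));
    }
    for (int r = 0; r < Rows; ++r)    // row strips
    {
        std::vector<FPoint2D> Strip;
        for (int c = 0; c < Columns; ++c)
            Strip.push_back(Cache[c * Rows + r]);
        Strips.push_back(std::move(Strip));
    }
}
```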

I’ve been playing with a procedural-mesh-based approach. Currently I’m still drawing in world space, but on a custom depth pass with the material I posted above in this thread.

Performance is pretty good (considering I tested this at home on a MacBook Pro with a 750M): the difference between the grid turned off and a 50x50 grid is 0.81ms, and a 100x100 grid is 1.26ms.

Now this is in world coords. It would be nice to have the matrix to send it to screen space; I’ll work on this later. Or do any of the Epic devs have a method for that?


GridProceduralMeshComponent.h



#pragma once

#include "ProceduralMeshComponent.h"
#include "GridProceduralMeshComponent.generated.h"

/**
 * 
 */
UCLASS()
class CTANDROID_API UGridProceduralMeshComponent : public UProceduralMeshComponent
{
	GENERATED_BODY()
	
public:

	void CreateProcGrid(int numRows, int numColumns, float stepDist, float thickness);
	void UpdatePoints(float time);

	FVector2D GetNormalOf2DSegment(FVector2D startPoint, FVector2D endPoint);

protected:
	TArray<FVector>		Vertices;
	TArray<int>			Triangles;
	TArray<FVector>		Normals;
	TArray<FVector2D>		UVs;

	float					m_thickness;
	int					m_numColumns;
	int					m_numRows;
	float					m_stepDist;
};



GridProceduralMeshComponent.cpp



#include "GridProceduralMeshComponent.h"



void UGridProceduralMeshComponent::CreateProcGrid(int numRows, int numColumns, float stepDist, float thickness) {

	m_thickness = thickness;
	m_numColumns = numColumns;
	m_numRows = numRows;
	m_stepDist = stepDist;

	for (size_t c = 1; c < numColumns; c++)
	{
		for (size_t r = 1; r < numRows; r++)
		{
			FVector startPoint = FVector((c - 1) * stepDist, (r - 1) * stepDist, 0);
			FVector endPoint = FVector((c - 1) * stepDist, r * stepDist, 0);

			FVector normal = FVector(GetNormalOf2DSegment(FVector2D(startPoint.X, startPoint.Y), FVector2D(endPoint.X, endPoint.Y)).X,
									GetNormalOf2DSegment(FVector2D(startPoint.X, startPoint.Y), FVector2D(endPoint.X, endPoint.Y)).Y,
									0);

			float halfThick = thickness / 2;

			int idx = Vertices.Num() - 1;

			Vertices.Add(startPoint - (halfThick * normal));
			Vertices.Add(endPoint - (halfThick * normal));
			Vertices.Add(endPoint + (halfThick * normal));
			Vertices.Add(startPoint + (halfThick * normal));

			Triangles.Add(1 + idx);
			Triangles.Add(2 + idx);
			Triangles.Add(4 + idx);

			Triangles.Add(2 + idx);
			Triangles.Add(3 + idx);
			Triangles.Add(4 + idx);

			Normals.Add(FVector(0, 0, 1));
			Normals.Add(FVector(0, 0, 1));
			Normals.Add(FVector(0, 0, 1));
			Normals.Add(FVector(0, 0, 1));

			UVs.Add(FVector2D(0, 0));
			UVs.Add(FVector2D(0, 1));
			UVs.Add(FVector2D(1, 1));
			UVs.Add(FVector2D(1, 0));

		}
	}

	for (size_t r = 1; r < numRows; r++)
	{
		for (size_t c = 1; c < numColumns; c++)
		{
			FVector startPoint = FVector((c - 1) * stepDist, (r - 1) * stepDist, 0);
			FVector endPoint = FVector(c * stepDist, (r - 1) * stepDist, 0);

			FVector normal = FVector(GetNormalOf2DSegment(FVector2D(startPoint.X, startPoint.Y), FVector2D(endPoint.X, endPoint.Y)).X,
				GetNormalOf2DSegment(FVector2D(startPoint.X, startPoint.Y), FVector2D(endPoint.X, endPoint.Y)).Y,
				0);

			float halfThick = thickness / 2;

			int idx = Vertices.Num();

			Vertices.Add(startPoint - (halfThick * normal));
			Vertices.Add(endPoint - (halfThick * normal));
			Vertices.Add(endPoint + (halfThick * normal));
			Vertices.Add(startPoint + (halfThick * normal));

			Triangles.Add(0 + idx);
			Triangles.Add(1 + idx);
			Triangles.Add(3 + idx);

			Triangles.Add(1 + idx);
			Triangles.Add(2 + idx);
			Triangles.Add(3 + idx);

			Normals.Add(FVector(0, 0, 1));
			Normals.Add(FVector(0, 0, 1));
			Normals.Add(FVector(0, 0, 1));
			Normals.Add(FVector(0, 0, 1));

			UVs.Add(FVector2D(0, 0));
			UVs.Add(FVector2D(0, 1));
			UVs.Add(FVector2D(1, 1));
			UVs.Add(FVector2D(1, 0));

		}
	}



	CreateMeshSection(0, Vertices, Triangles, Normals, UVs, TArray<FColor>(), TArray<FProcMeshTangent>(), false);

}


void UGridProceduralMeshComponent::UpdatePoints(float time) {

	Vertices.Reset();

	float baseZ = 300;

	for (size_t c = 1; c < m_numColumns; c++)
	{
		for (size_t r = 1; r < m_numRows; r++)
		{
			FVector startPoint = FVector((c - 1)* m_stepDist, (r - 1) * m_stepDist, 10 * cos(time + r-1) + baseZ);
			FVector endPoint = FVector((c - 1)* m_stepDist, r * m_stepDist, 10 * cos(time + r) + baseZ);

			FVector normal = FVector(GetNormalOf2DSegment(FVector2D(startPoint.X, startPoint.Y), FVector2D(endPoint.X, endPoint.Y)).X,
									GetNormalOf2DSegment(FVector2D(startPoint.X, startPoint.Y), FVector2D(endPoint.X, endPoint.Y)).Y,
									0);

			float halfThick = m_thickness / 2;

			int idx = Vertices.Num() - 1;

			Vertices.Add(startPoint - (halfThick * normal));
			Vertices.Add(endPoint - (halfThick * normal));
			Vertices.Add(endPoint + (halfThick * normal));
			Vertices.Add(startPoint + (halfThick * normal));

		}
	}

	for (size_t r = 1; r < m_numRows; r++)
	{
		for (size_t c = 1; c < m_numColumns; c++)
		{
			FVector startPoint = FVector((c - 1) * m_stepDist, (r - 1)* m_stepDist, 10 * cos(time + r-1) + baseZ);
			FVector endPoint = FVector(c * m_stepDist, (r - 1)* m_stepDist, 10 * cos(time + r-1) + baseZ);

			FVector normal = FVector(GetNormalOf2DSegment(FVector2D(startPoint.X, startPoint.Y), FVector2D(endPoint.X, endPoint.Y)).X,
									GetNormalOf2DSegment(FVector2D(startPoint.X, startPoint.Y), FVector2D(endPoint.X, endPoint.Y)).Y,
									0);

			float halfThick = m_thickness / 2;

			Vertices.Add(startPoint - (halfThick * normal));
			Vertices.Add(endPoint - (halfThick * normal));
			Vertices.Add(endPoint + (halfThick * normal));
			Vertices.Add(startPoint + (halfThick * normal));


		}
	}

	UpdateMeshSection(0, Vertices, Normals, UVs, TArray<FColor>(), TArray<FProcMeshTangent>());

}


FVector2D UGridProceduralMeshComponent::GetNormalOf2DSegment(FVector2D startPoint, FVector2D endPoint) {
	FVector2D norm = FVector2D(endPoint.Y - startPoint.Y, -(endPoint.X - startPoint.X));
	return norm.GetSafeNormal();

}


(The video here looks slow; the problem is the screen capture software I’m using – it’s causing all those spikes on the CPU thread.)

Looks good! I’ll upload my code later that transforms my world positions into screen space. It’s based on an Orthographic camera and for some reason flips Z, but I’ll work that out later.

I managed to squeeze some more performance out of mine by only storing the Z-value of each vertex, and inferring the X and Y values based on the grid position and the grid origin. It shaved about a millisecond off the total time, I’ll post later :slight_smile:

As promised, here’s the code I use to transform a world position into the widget’s local position. It’s a little overwhelming out of context, but you can pretty much make sense of it.

Here’s how it works. There’s a bunch of Static variables which are calculated once-per-frame inside of DrawHUD(), which define the transform of the radar, the range of points we want to get, and the View Matrix and Ortho Matrix to transform the points by. This is done here to prevent any unnecessary duplication of code and to avoid calculating the same vars more than once per frame. This is done via ‘CalcTopographicalRadarProps’.

There are two prevalent issues with this. The first is that during WorldToTopographicalRadar(), the world-space Z value is flipped upside down, so I have to invert it manually (though I see no reason why that would be, so I must be doing something wrong). Additionally, the altitude of the points in the screen-space widget does not rise or fall depending on the central point’s altitude, so all of the points are based around an altitude of zero and don’t correctly transform up or down. I feel like this is something to do with the FOrthoMatrix or FLookAtMatrix, but I can’t work it out.

What I’m essentially doing is calculating the position of a fake ‘Camera’, then calculating its projection and view matrix based on that. Seems a little backwards, but that’s my current approach.



	// Static Radar Properties
	static FVector2D TRadar_ScreenSize;
	static FVector TRadar_SourceLocation;
	static float TRadar_Range;
	static float TRadar_RangeSqrd;
	static FRotationMatrix TRadar_CamRotWS;
	static FLookAtMatrix TRadar_CamViewMatrix;
	static FOrthoMatrix TRadar_ProjMatrix;
	static const float TRadar_Tilt;

	/* Calculates the properties for the Radar frame (or however often we want them to update). We do this to avoid recalculating them multiple times per-frame in WorldToTopoRadar */
	FORCEINLINE static void CalcTopographicalRadarProps(const ABZGame_PlayerController* ForPlayer, const FVector2D& ScreenSize)
	{
		SCOPE_CYCLE_COUNTER(STAT_CalcRadarProps);

		// Get Properties From Player Pawn / GOC
		const APawn* PlayerPawn = ForPlayer->GetPawn();
		const UBZGame_GameObjectComponent* PlayerPawnGOC = ForPlayer->GetCurrentPawnGOC();
		const float RadarYaw = PlayerPawn ? PlayerPawn->GetActorRotation().Yaw : ForPlayer->GetViewCamera()->GetComponentRotation().Yaw;
		
		// Set all the Statics
		ABZGame_InGameHUD::TRadar_ScreenSize = ScreenSize;
		ABZGame_InGameHUD::TRadar_SourceLocation = PlayerPawn ? PlayerPawn->GetActorLocation() : ForPlayer->GetViewCamera()->GetComponentLocation();
		ABZGame_InGameHUD::TRadar_Range = PlayerPawnGOC ? PlayerPawnGOC->GetRadarRange() : 0.f;
		ABZGame_InGameHUD::TRadar_RangeSqrd = ABZGame_InGameHUD::TRadar_Range * ABZGame_InGameHUD::TRadar_Range;
		ABZGame_InGameHUD::TRadar_CamRotWS = FRotationMatrix(FRotator(ABZGame_InGameHUD::TRadar_Tilt, RadarYaw, 0.f));

		const FVector CamPosWS = ABZGame_InGameHUD::TRadar_SourceLocation + (ABZGame_InGameHUD::TRadar_CamRotWS.TransformVector(FVector::ForwardVector) * -ABZGame_InGameHUD::TRadar_Range);

		ABZGame_InGameHUD::TRadar_CamViewMatrix = FLookAtMatrix(CamPosWS, ABZGame_InGameHUD::TRadar_SourceLocation, FVector::UpVector);
		ABZGame_InGameHUD::TRadar_ProjMatrix = FOrthoMatrix(ABZGame_InGameHUD::TRadar_Range, ABZGame_InGameHUD::TRadar_Range, 1.f, 1.f);
	}

void ABZGame_InGameHUD::DrawHUD()
{
	Super::DrawHUD();

	// Calculate Radar Props
	if (OwningBZPlayer && RadarWidget)
	{
		ABZGame_InGameHUD::CalcTopographicalRadarProps(OwningBZPlayer, RadarWidget->GetDesiredSize());
	}
}


The Radar widget then does a for loop for each Column and Row, and transforms the point into screen-space like so. WorldGridSize is the spacing between the grid elements, and the ‘MinExtents’ is the origin of the grid. I basically have a big volume of the level which determines the area to be considered for the radar (amongst other things).



	FORCEINLINE const FVector2D RadarBufferToScreen(const int32& ColumnX, const int32& RowY)
	{
		// Calculate World Position For Radar Based off of Grid Indices
		// For some reason we have to Flip Z, but this is probably a problem with the projection NOT the array
		FVector WorldPos = FVector(
			(ColumnX * UBZGame_GameInstance::WorldGridSize) + UBZGame_GameInstance::PlayAreaExtents.MinExtent.X,
			(RowY * UBZGame_GameInstance::WorldGridSize) + UBZGame_GameInstance::PlayAreaExtents.MinExtent.Y,
			RGBuffer[Get2DIndex(ColumnX, RowY)] * -1.f
			);

		// Temp Hax
                // This is a temporary workaround for culling vertices that are outside of radar range so that we form a circle shape, but doesn't stop them from being drawn!
		const bool bTooFar = FVector::DistSquaredXY(WorldPos, ABZGame_InGameHUD::TRadar_SourceLocation) > ABZGame_InGameHUD::TRadar_RangeSqrd;
		if (bTooFar)
		{
			return FVector2D::ZeroVector;
		}

		WorldPos = ABZGame_InGameHUD::TRadar_CamViewMatrix.TransformPosition(WorldPos);
		WorldPos = ABZGame_InGameHUD::TRadar_ProjMatrix.TransformPosition(WorldPos);

		WorldPos.X = ABZGame_InGameHUD::TRadar_ScreenSize.X * (WorldPos.X + 1.f) / 2.f;
		WorldPos.Y = ABZGame_InGameHUD::TRadar_ScreenSize.Y * (WorldPos.Y + 1.f) / 2.f;

		return FVector2D(WorldPos.X, WorldPos.Y);
	}

	FORCEINLINE static const FVector2D WorldToTopographicalRadar(const FVector& TransformLocation)
	{
		SCOPE_CYCLE_COUNTER(STAT_WorldToRadar);

		// Flip Z Axis
		FVector FlippedVector = FVector(TransformLocation.X, TransformLocation.Y, TransformLocation.Z * -1.f);

		FlippedVector = ABZGame_InGameHUD::TRadar_CamViewMatrix.TransformPosition(FlippedVector);
		FlippedVector = ABZGame_InGameHUD::TRadar_ProjMatrix.TransformPosition(FlippedVector);

		FlippedVector.X = ABZGame_InGameHUD::TRadar_ScreenSize.X * (FlippedVector.X + 1.f) / 2.f;
		FlippedVector.Y = ABZGame_InGameHUD::TRadar_ScreenSize.Y * (FlippedVector.Y + 1.f) / 2.f;

		return FVector2D(FlippedVector.X, FlippedVector.Y);
	}


Finally, the widget paints them as part of OnPaint()



void UBZGame_TopoRadar::NativePaint(FPaintContext& InContext) const
{
	Super::NativePaint(InContext);

	SCOPE_CYCLE_COUNTER(STAT_DrawRadar);

	// Get the Game Instance. Again, can probably be cached or at least inlined
	UBZGame_GameInstance* BZGI = UBZGame_GameInstance::GetInstance(GetWorld());
	ASSERTV(BZGI != nullptr, TEXT("BZGame Instance Is Nullptr"));

	// Just submit all the points for now. We need to do some 'clipping' here to only get points in radar-range of the player.
	for (int32 CIdx = 0; CIdx < NumColumns; CIdx++)
	{
		for (int32 RIdx = 0; RIdx < NumRows; RIdx++)
		{
			DrawPoints.Add(BZGI->RadarBufferToScreen(CIdx, RIdx));
		}

		Fast_DrawGrid(InContext, DrawPoints);
		DrawPoints.Empty();
	}

	for (int32 RIdx = 0; RIdx < NumRows; RIdx++)
	{
		for (int32 CIdx = 0; CIdx < NumColumns; CIdx++)
		{
			DrawPoints.Add(BZGI->RadarBufferToScreen(CIdx, RIdx));
		}

		Fast_DrawGrid(InContext, DrawPoints);
		DrawPoints.Empty();
	}
}


All messiness aside in the above code, kind of proud of it so far :smiley:

Little update: managed to transform my proc mesh so it’s drawn in camera space. Just a basic transform. Remember to set the actor’s tick group to PostUpdateWork to avoid input lag.

On the parent actor of the component:



void ACTATeighaActor::Tick( float DeltaTime )
{
	Super::Tick( DeltaTime );

	gridComp->UpdatePoints(time);
	time = time + DeltaTime;

	// Position the grid just in front of the camera, tilted down 45 degrees
	auto vec = GetWorld()->GetFirstPlayerController()->PlayerCameraManager->GetActorForwardVector() * 10;
	gridComp->SetWorldLocationAndRotationNoPhysics(GetWorld()->GetFirstPlayerController()->PlayerCameraManager->GetCameraLocation() + vec,
												   GetWorld()->GetFirstPlayerController()->PlayerCameraManager->GetCameraRotation() + FRotator(45, 0, 0));

	gridComp->SetRelativeScale3D(FVector(0.2, 0.2, 0.2));
}