
Rendering an array of vertex positions in UE4

Hi all, I’m trying to figure out how much control I can get over the rendering in UE4.
What I want to do is this: given an array of vertex positions from, let’s say, a DLL source, I want to be able to pass these vertices to the GPU (just like when using a standard VBO in OpenGL etc.) and represent each vertex with a quad to which I will assign a material.
In a normal OpenGL/D3D pipeline I would just send the vertices to the GPU in a VBO and use either point sprites or a geometry shader to render the quads.

What are my options inside UE4? Can I create such a mechanism for rendering point data that isn’t stored in a mesh file?
I saw a solution that encodes the positions into an image and then uses particles. Can I do it without the particles? Use a geometry shader somehow?

This can be done with either blueprints or code, preferably code.

Thanks,
Amit

You can use the procedural mesh component from either blueprints or code.

It basically needs a vector array with the vertex positions and an integer array for the triangle indices (for more info about the component, Google is your friend, of course :p).

Hey, thanks for answering.
I actually found out about the procedural mesh right after posting this question.
But after looking at the example, it seems that the procedural mesh only supports a triangle list as a primitive type. Am I correct?
The main issue here is that I don’t have a fully connected mesh; I’m talking about a point cloud where each vertex will be translated into a quad, and each of these quads will be separate from the rest.
Is this possible?

Thanks

Anyone? Is this possible?

Yes, you can add the polys to the mesh any way you want. I.e., add 4 verts for a quad, then add those 4 IDs as two triangles (which is 6 index entries). Then just repeat over and over.
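The layout described above can be sketched with plain structs (stand-ins for UE4’s FVector and the arrays the procedural mesh component expects; the axis-aligned corner offsets are just for illustration, not part of the original answer):

```cpp
#include <vector>

// Plain stand-in for FVector, just for this sketch.
struct Vec3 { float x, y, z; };

// For each input point, append 4 corner vertices and 6 triangle
// indices (two triangles sharing a diagonal). The axis-aligned
// offsets below are placeholders for whatever corner positions
// you actually need.
void BuildQuads(const std::vector<Vec3>& centers, float halfSize,
                std::vector<Vec3>& outVerts, std::vector<int>& outTris)
{
    for (const Vec3& c : centers)
    {
        const int base = static_cast<int>(outVerts.size());
        outVerts.push_back({c.x - halfSize, c.y - halfSize, c.z});
        outVerts.push_back({c.x + halfSize, c.y - halfSize, c.z});
        outVerts.push_back({c.x + halfSize, c.y + halfSize, c.z});
        outVerts.push_back({c.x - halfSize, c.y + halfSize, c.z});
        // Two triangles: (0,1,2) and (0,2,3), offset by this quad's base.
        const int quad[6] = { base, base + 1, base + 2,
                              base, base + 2, base + 3 };
        outTris.insert(outTris.end(), quad, quad + 6);
    }
}
```

Each quad gets its own 4 vertices (no sharing between quads), which is exactly what allows the quads to stay disconnected from one another.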

Cool, I will try it.
Thanks!

Hi, I’m bumping this up again since I have a few more questions regarding this topic.
First of all, I tried using the UProceduralMeshComponent in order to create quads and it indeed worked when providing the list of vertices as mentioned.
The challenge in my situation is that when I create the procedural mesh I don’t have all the vertices, only the position of the center of each quad.
To be more specific, I have a data structure, or for simplicity let’s say a text file, with the world coordinates of a point cloud.
I want to translate each of these points into a camera-facing quad of a certain predefined size (given in view coordinates).
For example: given the coordinates X1, Y1, Z1 and the variable “quadLength”, I would like to calculate the positions of the 4 corners of the quad so that in view space the distance between two adjacent vertices is “quadLength”.

If I were to do this in OpenGL, for example, I would pass the center coordinates to the shaders and use the geometry shader to calculate each vertex position: transform the center into view space by multiplying by the world-view matrix, add ±“quadLength” / 2 to create each corner in view space, and then multiply by the projection matrix before passing the result to the fragment shader (or multiply by the inverse of the world-view matrix if I wanted the world position of each vertex).
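For reference, the corner math itself is simple once the camera basis is known. A minimal sketch (plain structs instead of UE4 types; it assumes the camera’s world-space right and up vectors have been extracted from the view matrix):

```cpp
#include <vector>

// Plain stand-ins, just for this sketch (not UE4 types).
struct Vec3 { float x, y, z; };

static Vec3 Add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 Scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

// Given the camera's world-space right and up vectors (recoverable
// from the view matrix), offset the center point to the 4 corners so
// the quad faces the camera with side length quadLength.
std::vector<Vec3> CornersFacingCamera(Vec3 center, Vec3 camRight, Vec3 camUp,
                                      float quadLength)
{
    const float h = quadLength * 0.5f;
    const Vec3 r = Scale(camRight, h);
    const Vec3 u = Scale(camUp, h);
    return {
        Add(Add(center, Scale(r, -1.0f)), Scale(u, -1.0f)), // bottom-left
        Add(Add(center, r),               Scale(u, -1.0f)), // bottom-right
        Add(Add(center, r),               u),               // top-right
        Add(Add(center, Scale(r, -1.0f)), u),               // top-left
    };
}
```

Doing this on the CPU means rebuilding the mesh whenever the camera moves, which is why the material-based approach discussed later in the thread is usually preferable.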

I’m not sure what the right way to do this is here, since in order to create the procedural mesh I would need these vertices before passing the geometry to the GPU.
I looked into getting the camera view and projection matrices through C++, but it seems I can only access the camera’s view and projection parameters, not the matrices themselves, at least not without changing engine code.
I also looked at the option of using UE4 billboards, but I’m still not sure how to generate a billboard for each point in C++, and more importantly, I’m not sure it’s the right thing to do performance-wise.

What do you think is the right way to achieve my goal?
Also, it’s important to note that I will need a way to apply a texture, or at least a region of a texture, to each of these quads according to its world coordinates.

I know that this scenario is rather unusual for UE4; I’m trying to assess whether UE4 is the right tool here or whether I should try a different engine (I already succeeded at doing this in Unity, but I like UE4 a lot more for all the tools it offers).

I really appreciate your help on this,

Amit

Hi,

Not sure if I can answer your question about the right way to do what you are trying to do, but you can access the projection matrix in code if you need to, like so:


// Grab the first local player; bail out if the viewport isn't ready yet.
ULocalPlayer* LocalPlayer = GetWorld()->GetFirstLocalPlayerFromController();
if (!LocalPlayer || !LocalPlayer->ViewportClient || !LocalPlayer->ViewportClient->Viewport)
{
	return;
}

// Build a view family for the player's viewport so a scene view can be computed.
FSceneViewFamily ViewFamily(FSceneViewFamily::ConstructionValues(
	LocalPlayer->ViewportClient->Viewport,
	GetWorld()->Scene,
	LocalPlayer->ViewportClient->EngineShowFlags)
	.SetRealtimeUpdate(true));

FVector ViewLocation;
FRotator ViewRotation;

// CalcSceneView fills in the view/projection matrices for the current frame.
FSceneView* View = LocalPlayer->CalcSceneView(&ViewFamily, ViewLocation, ViewRotation, LocalPlayer->ViewportClient->Viewport);
if (!View) { return; }

FMatrix matProj = View->ViewMatrices.GetProjectionMatrix();
FVector CameraLoc = View->ViewLocation;


I have been using that piece of code for a quadtree ocean plane; it works fine as long as you set your actor’s tick group to TG_PostUpdateWork (otherwise it tends to get the camera data from the previous frame).

Hi, thanks for replying, this is very helpful.
Can someone also comment on my other questions?

Thanks,
Amit

Hi,

One “simple” solution would be to generate the quad mesh from code/blueprint and orient the quads toward the camera in the material:

  • For each of your input vector positions, construct a zero-sized quad with proper UVs in the 0–1 range (a simple planar map), with all four vertex positions equal to the corresponding vector.
  • In the material, use the UVs to identify each quad corner and output a world position offset that generates a camera-facing quad. Offsetting the vertices from their original position (the input vector) gives you a quad whose center is the input vector.
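The first step above can be sketched like this (plain structs instead of UE4 types; the corner-to-UV mapping is an assumption, any consistent order works as long as the material interprets it the same way):

```cpp
#include <vector>

// Plain stand-ins for UE4's FVector/FVector2D, just for this sketch.
struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// All 4 vertices of each quad collapse onto the input point itself;
// only the UVs differ, so the material's world position offset can
// tell the corners apart and push them outward at render time.
void BuildCollapsedQuads(const std::vector<Vec3>& points,
                         std::vector<Vec3>& outVerts,
                         std::vector<Vec2>& outUVs,
                         std::vector<int>& outTris)
{
    // Assumed corner-to-UV mapping; any consistent order works.
    static const Vec2 cornerUVs[4] = { {0,0}, {1,0}, {1,1}, {0,1} };
    for (const Vec3& p : points)
    {
        const int base = static_cast<int>(outVerts.size());
        for (int i = 0; i < 4; ++i)
        {
            outVerts.push_back(p);          // zero-sized: all corners at p
            outUVs.push_back(cornerUVs[i]); // the UV identifies the corner
        }
        const int quad[6] = { base, base + 1, base + 2,
                              base, base + 2, base + 3 };
        outTris.insert(outTris.end(), quad, quad + 6);
    }
}
```

Since the expansion happens per-vertex in the material, the mesh never needs rebuilding when the camera moves, and a texture-atlas region per quad could be driven by extra UV channels or vertex colors.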

I hope it’s clear; I’ll try to put together a small example with a screenshot.

Here’s a screenshot of the material I described:
[attached screenshot: the material setup]
This will indeed transform any quad mesh whose 4 vertices share the same position into a camera-facing quad.

Here it is applied to a particle system with the initial size set to 0 (so, zero-sized quads):
[attached screenshot: the particle system result]