Workflow to create and render a line on the GPU

I've spent about 200 hours trying to draw a textured line (a mesh in world space) at runtime.

The Procedural Mesh Component (and every other CPU mesh creator) turned out to be useless for this. You can build the line with them, but texturing it without distortion is impossible, since they rely on triangles while undistorted texturing requires quads.
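For context, the usual workaround on triangle-based hardware is projective (perspective-correct) texture mapping: store the UV with an extra homogeneous coordinate q per vertex, interpolate u·q and q linearly, and divide after interpolation. A minimal sketch of the idea, with illustrative names, not engine API:

```cpp
#include <cassert>
#include <cmath>

// Homogeneous UV: uq = u * q, plus q itself. For a trapezoid, q is chosen
// proportional to the edge width (or depth), so the divide below undoes the
// kink that plain per-triangle (affine) interpolation would introduce along
// the diagonal.
struct UVQ { float uq, q; };

// Linearly interpolate the homogeneous pair, then do the projective divide.
float ProjectiveU(UVQ a, UVQ b, float t)
{
    float uq = a.uq + (b.uq - a.uq) * t;
    float q  = a.q  + (b.q  - a.q)  * t;
    return uq / q;
}
```

For example, with u = 0 on a narrow edge (q = 1) and u = 1 on an edge twice as wide (q = 2), the midpoint evaluates to 2/3 instead of the affine 1/2, which is exactly the correction that hides the triangle diagonal.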

So, a workflow entirely on the GPU seems unavoidable.

But there's little to no information available on how to do this, as 99% of the material only covers vertex manipulation or pixel shading.


Hey there,

Just wanted to ask: are you trying to do this at the shader level, or do you just want a line rendered on the GPU with undistorted textures? Are you trying to do something specific, or just drawing a line?

Because Niagara can easily do that with ribbon quads without distortion. If you are trying to write a custom shader to draw a line from existing mesh vertices, then you run into the limitation that shaders can't generate new geometry by themselves.

Also, if your use case is screen-space lines rather than world-space geometry, you can draw textured lines directly in UMG. UMG supports custom lines (with tangents or custom points), optionally with textures, and it's also possible to position them to match world space, so you can render a textured line in the UI without touching mesh rendering at all.

Thanks, good ideas. Niagara provides only unconnected quads. My lines are given by a set of quads too, but those quads are connected, have fixed positions, and the texture has to be applied to them as a whole.

The reasons I would prefer world-space are:

  • reduction of overdraw, since the lines are thin
  • easy to make them hover and clickable
  • easy to put other objects in front
  • I have code that adjusts the segment thickness so the line appears to have constant thickness on screen at all times
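That last adjustment can be sketched as a pure function: scale the world-space half-width with the distance to the camera so the projected width stays at a fixed pixel count. A sketch under assumed inputs (PixelWidth, viewport height, and vertical FOV are my naming, none of this is engine API):

```cpp
#include <cassert>
#include <cmath>

// World-space half-width that keeps a segment's projected thickness at
// roughly PixelWidth pixels, assuming a symmetric perspective projection.
float WorldHalfWidth(float DistanceToCamera, float PixelWidth,
                     float ViewportHeightPx, float VerticalFovRadians)
{
    // Height of the view frustum at that distance, in world units.
    const float FrustumHeight =
        2.0f * DistanceToCamera * std::tan(VerticalFovRadians * 0.5f);

    // World units per pixel at that distance, times half the target width.
    return (FrustumHeight / ViewportHeightPx) * (PixelWidth * 0.5f);
}
```

For instance, at distance 100 with a 90° vertical FOV and a 1000 px tall viewport, a 10 px line needs a world half-width of 1.0.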

In general, shaders can also create new geometry, but that seems not easy to do in UE.

Yes, I think it's called a vertex factory. I saw one once but never tried it, and I have limited knowledge around those; still, it should be doable. I would also check the function the engine uses for DrawDebugLine, since it does something along those lines, although it's a simple one and I'm not sure a similar approach can be textured. Did you check that one? (Possibly you did :slight_smile: and in their own comments they say it's not a good idea to use it in production.)

I'm also wondering whether there is something related in NiagaraRibbonVertexFactory.

@klen_dhatu Hi there! I wanted to revisit this and dig a bit deeper into this area to understand what is going on at a better level, and also to slightly correct myself and provide some information.

So I was looking at this side of the engine and how things are set up, and honestly 200 hours is understandable, even modest, since there is a lot going on under the hood and it's not that simple.

The thing is, in some engines this process of actually rendering a quad or triangle is straightforward; some even have a thin API where you can do it in 50-100 lines of code.

In Unreal Engine things are structured a bit differently. To get something into the renderer you usually need to go through quite a bit of boilerplate setup. That makes sense when implementing a larger rendering feature, since that foundation will be reused, but for something simple like drawing a quad or line it can feel like a long process, which it is.

There are already many helpers for screen-space rendering (drawing rectangles, quads, etc.), but doing the same in world space, especially when the geometry is generated on the GPU, is less obvious.

So, at a very high level, here is what you need to do:

1: A compute shader: registered with RDG, it generates the line, quad, or whatever geometry you need, and also creates the UVs. The output is a buffer of vertex coordinates.
2: A vertex shader: it reads the buffer produced by the compute shader, then positions and transforms those vertices in the world.
3: A pixel shader: it samples the texture at the given UVs, basically rendering the geometry; the result is then injected after PostRenderView.
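As a CPU-side reference for what step 1 would write into the buffer, here is a sketch that turns a connected polyline into per-segment quads, with a U coordinate that accumulates with arc length so one texture spans the whole line. Layout and names are illustrative, not the actual plugin buffers; segments lie in the XY plane and joins/miters are left out for brevity:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct P   { float x, y, z; };        // input polyline point
struct Vtx { float x, y, z, u, v; };  // output vertex plus UV

// For each segment, emit four vertices forming a quad: two per endpoint,
// offset along +/- the segment normal. V runs 0..1 across the width and U
// accumulates along the length, so the texture is continuous across
// segments. Assumes consecutive points are distinct.
std::vector<Vtx> BuildLineQuads(const std::vector<P>& Pts, float HalfWidth)
{
    std::vector<Vtx> Out;
    float U = 0.0f;
    for (size_t i = 0; i + 1 < Pts.size(); ++i)
    {
        const float dx  = Pts[i + 1].x - Pts[i].x;
        const float dy  = Pts[i + 1].y - Pts[i].y;
        const float Len = std::sqrt(dx * dx + dy * dy);

        // Unit normal in the XY plane, perpendicular to the segment.
        const float nx = -dy / Len;
        const float ny =  dx / Len;
        const float U1 = U + Len;

        Out.push_back({Pts[i].x + nx * HalfWidth, Pts[i].y + ny * HalfWidth, Pts[i].z, U, 0.0f});
        Out.push_back({Pts[i].x - nx * HalfWidth, Pts[i].y - ny * HalfWidth, Pts[i].z, U, 1.0f});
        Out.push_back({Pts[i + 1].x + nx * HalfWidth, Pts[i + 1].y + ny * HalfWidth, Pts[i + 1].z, U1, 0.0f});
        Out.push_back({Pts[i + 1].x - nx * HalfWidth, Pts[i + 1].y - ny * HalfWidth, Pts[i + 1].z, U1, 1.0f});
        U = U1;
    }
    return Out;
}
```

In the real compute pass this would be one thread per segment writing into a StructuredBuffer instead of a std::vector.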

I was able to do this in a plugin: I define some vertices for the compute shader, register an FSceneViewExtension in BeginPlay, capture the actor's transform, and pass it to the renderer so the generated quad can be positioned in world space.

The FSceneViewExtension then gives access to the renderer callbacks. In PostRenderView_RenderThread I use FRDGBuilder to dispatch the compute pass and draw. I just draw a quad, which was quite an enlightening process for me too. I can share it if you want; also, if you've made any progress I would like to hear about it from your side, since there is literally little to no documentation around the topic.

My turquoise quad, drawn entirely on the GPU from a plugin with compute, vertex, and pixel shaders.


Hi, for some reason I did not get any notification, sorry.

As you said, compared to other engines you have to dig very deep to do this.

I'm still using the procedural mesh (CPU) variant, which suffers from this problem, as you can only pass triangles to the renderer.

Have you tried putting a checkerboard texture onto your quad in the case where the quad is a trapezoid?

No worries! Good to hear back.

So yes, I actually passed a texture and rendered it; it looks like this.

In screen space that is a common UV problem. Actually, just "today" I was working on a native Slate/UMG demo for rendering parallax and hit the exact same problem: you have to re-project the UVs according to the 3D matrix again, or, like I did, resample (subdivide) the quad to lower the distortion. It's quite annoying, but that's the nature of it. However, I'm not sure whether that approach is more of a shortcut to what you want than my native rendering plugin.
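To put a rough number on the subdivision approach: if the perspective-correct U along an edge is approximated piecewise-linearly between correctly projected subdivision points, the residual distortion shrinks quickly with the subdivision count. A small sketch measuring that, with arbitrary illustrative depths:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Perspective-correct U along an edge whose endpoints sit at depths z0 and
// z1, with u = 0 at one end and u = 1 at the other; t is the screen-space
// interpolation parameter.
float ExactU(float t, float z0, float z1)
{
    return (t / z1) / ((1.0f - t) / z0 + t / z1);
}

// Worst-case error of a piecewise-linear (affine) approximation that only
// evaluates ExactU at the N + 1 subdivision points, measured on a fine grid.
float MaxAffineError(float z0, float z1, int N)
{
    float MaxErr = 0.0f;
    for (int s = 0; s <= 1000; ++s)
    {
        const float t  = s / 1000.0f;
        const int   i  = std::min(static_cast<int>(t * N), N - 1);
        const float t0 = static_cast<float>(i) / N;
        const float t1 = static_cast<float>(i + 1) / N;
        const float u0 = ExactU(t0, z0, z1);
        const float u1 = ExactU(t1, z0, z1);
        const float Approx = u0 + (u1 - u0) * (t - t0) / (t1 - t0);
        MaxErr = std::max(MaxErr, std::fabs(ExactU(t, z0, z1) - Approx));
    }
    return MaxErr;
}
```

With a 4:1 depth ratio, a single quad (N = 1) is off by roughly a third of the texture width at its worst point, while eight subdivisions already bring the worst-case error down to a few percent.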

If your plugin also allows mesh generation (not just vertex manipulation) at runtime, that would be awesome.

So I made the plugin public.

It was quite a challenge, and the IDE AI helped me a lot along the way to understand what steps to take and break down the pipeline, but there is still quite a lot to digest. To be honest, I will use this as a "template" for future work, since I want to experiment with custom cables, physics, etc.

Anyway, here is what happens in the plugin:

  • There are two shaders in the plugin; they are compiled when the plugin is registered.
  • MGVertexComputeShader.usf creates the vertices; the other shader handles clipping in world space and actually draws the texture.
  • Everything is controlled by an ActorComponent that acts as the bridge between the game thread and the render thread. It prepares a SceneViewExtension to hook into the PostRenderView phase, then captures the actor's transform and passes it to the render thread. The render thread cleans up, applies the transformations again, does the texture wrapping/sampling, and injects the result into the view.

I have done similar work before, but I'm still quite a noob at this in Unreal, so there could be some funny things inside (but it works :slight_smile:).

Let me know


Looks great. I'm just wondering why there are two separate .usf files containing the code for the vertex generation.

For demo reasons, since I wanted to verify that what I feed in is what I see; I had headaches passing the buffer to the shader, so it was a sanity check for me. These are the places where I need a bit more practice and deconstruction, but I finally solved it. When we create the vertices we write them to GPU memory (there are several different kinds of GPU storage, which I am also still learning), but when another shader wants to do something with that exact data, it needs to know exactly where it is on the GPU, and that is where I was failing. I have since changed it and pushed that change to git as well.

You can already access the buffer; if you want, you can delete the extra shader or just grab the latest version, since the data is now in one struct:

// Vertex positions and UVs written by the compute pass
StructuredBuffer<float4> InVertices;
StructuredBuffer<float2> InUVs;
// World-to-clip transform supplied from the game thread
float4x4 WorldToClip;

struct VSOutput {
    float4 Position : SV_Position;
    float2 UV : TEXCOORD0;
};

VSOutput MainVS(uint VertexId : SV_VertexID) {
    VSOutput Out;

    // Read the vertex written by the compute shader
    float4 LocalPosition = InVertices[VertexId];

    // Project the position into clip space
    Out.Position = mul(LocalPosition, WorldToClip);
    Out.UV = InUVs[VertexId];

    return Out;
}

Ah, that makes sense. It should also be possible to extend it to an arbitrary number of vertices that changes at runtime. Then this might be interesting to a lot of people.

Yeah, correct. I think I will revisit it soon and maybe build some things on top of it: maybe cloth, maybe a particle-driven rope simulation that is more lightweight.