Trying to implement LIDAR / Point Cloud graphic features, new to UE4 and game engines

EDIT (7/15/15): I now need help rendering the points efficiently; see my latest progress further down the thread.

Let me preface this by saying I’m a complete novice at UE4, game engines, and 3D graphics, but I do have a bunch of other programming experience that has been super helpful in learning UE4.

That said, I’m trying to make a graphical effect similar to this:

Here’s what I can accomplish so far:


In my First Person Character Event Graph, I’m doing a Line Trace By Channel inside two nested for-loops to create the spherical projection (91,648 total points). If something is hit, I do a Draw Debug Point at that location and color it as a function of the hit’s Time value (effectively, distance along the trace).
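For reference, the Blueprint logic is roughly equivalent to this C++ sketch (the class name, loop bounds, step sizes, and trace range are illustrative placeholders, not my exact values):

```cpp
#include "DrawDebugHelpers.h"
#include "Engine/World.h"
#include "GameFramework/Character.h"

// Rough C++ equivalent of the Blueprint graph. AMyCharacter and the
// pitch/yaw ranges, step sizes, and trace range are placeholders.
void AMyCharacter::LidarPing()
{
    const FVector Origin  = GetActorLocation();
    const float   RangeCm = 10000.f; // assumed 100 m max trace distance

    for (float Pitch = -64.f; Pitch <= 64.f; Pitch += 0.5f)     // outer loop
    {
        for (float Yaw = -90.f; Yaw <= 90.f; Yaw += 0.5f)       // inner loop
        {
            const FVector Dir = FRotator(Pitch, Yaw, 0.f).Vector();
            FHitResult Hit;
            if (GetWorld()->LineTraceSingleByChannel(
                    Hit, Origin, Origin + Dir * RangeCm, ECC_Visibility))
            {
                // Hit.Time is the fraction of the trace length at impact,
                // so the color shifts with distance.
                const FColor PointColor = FLinearColor::LerpUsingHSV(
                    FLinearColor::Red, FLinearColor::Blue, Hit.Time).ToFColor(true);
                DrawDebugPoint(GetWorld(), Hit.ImpactPoint, 3.f, PointColor,
                               /*bPersistentLines=*/true);
            }
        }
    }
}
```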

When I do the full projection, the game freezes while it performs the traces. My understanding is that this happens because line traces are expensive. And even after the projection is completed, the frame rate plummets to an unplayable state.

Is there a better way to achieve this effect? Is there a way to better optimize what I have?

Thanks in advance!

I feel like there’d be a way to do this using post-process materials. Just use SceneDepth to get the depth value of the pixel; it’s not a 1:1 correlation with the Time value of the trace, but it DOES report the absolute distance of the pixel from the camera. You can use the depth value to color the pixel and discard/blacken any pixels that are too far away.

The tricky part (at least for me) would be discarding and merging pixels to reduce the resolution from 1:1 pixel mapping to something low-res. It’d probably be possible with the Screen Percentage setting in a post-process volume: simply set that to 25, apply the post-process material that maps depth to color, and you’d be very close.
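In code terms, the material graph would be computing something like this per pixel (MaxRangeCm is a made-up parameter; SceneDepth is in world units, i.e. centimeters):

```cpp
#include "Math/Color.h"

// Per-pixel math the post-process material graph would implement.
// MaxRangeCm is a hypothetical parameter; depth comes in as centimeters.
FLinearColor DepthToColor(float SceneDepthCm, float MaxRangeCm)
{
    if (SceneDepthCm > MaxRangeCm)
    {
        return FLinearColor::Black;            // "discard": too far away
    }
    const float T = SceneDepthCm / MaxRangeCm; // 0 = at camera, 1 = max range
    const FLinearColor Near = FLinearColor::Red;
    const FLinearColor Far  = FLinearColor::Blue;
    return Near + (Far - Near) * T;            // plain RGB lerp, like a Lerp node
}
```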

So if you need to do it from just a first person perspective, then this is pretty easily done in the post process with the depth buffer as Rhythm says above.

If you need to do it from a perspective other than the one that is doing the traces, I recommend using a Light Function that can cast shadows.

The light function would paint a grid of points through the material, skip drawing points wherever there’s shadow, and color each point based on the distance from its Absolute World Position to the Actor Position.
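Conceptually, the mask the light function evaluates per position would look something like this (parameter names are made up; in practice you’d build it out of material nodes):

```cpp
#include "Math/UnrealMathUtility.h"
#include "Math/Vector.h"

// Grid mask the light function's material would evaluate per world position.
// SpacingCm and DotRadiusCm are hypothetical parameters, not engine names.
float GridPointMask(const FVector& WorldPos, float SpacingCm, float DotRadiusCm)
{
    // Position within the current grid cell, normalized to 0..1.
    const float FX = FMath::Frac(WorldPos.X / SpacingCm);
    const float FY = FMath::Frac(WorldPos.Y / SpacingCm);
    const float R  = DotRadiusCm / SpacingCm;

    // 1 = inside a dot near the cell center, 0 = between dots (no point drawn).
    return (FMath::Abs(FX - 0.5f) < R && FMath::Abs(FY - 0.5f) < R) ? 1.f : 0.f;
}
```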

Thanks for the reply.

The Depth Expressions page (Depth Material Expressions in Unreal Engine | Unreal Engine 5.3 Documentation) lists SceneDepth as usable only for translucent objects. Would it work in my scenario?

I’ve tried PixelDepth – instead of SceneDepth – with this material setup (following the tutorial implementation here: Post Process Materials in Unreal Engine | Unreal Engine 5.3 Documentation):

But applying the material to the Global PostProcess object in the map simply paints the entire camera view red. Changing the denominator of the Divide node doesn’t seem to affect anything either.
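For clarity, what I’m expecting the Divide node to give me is a normalized depth, something like this (MaxRangeCm is just my placeholder for the denominator; depth is in centimeters, so a small denominator would saturate nearly every pixel):

```cpp
#include "Math/UnrealMathUtility.h"

// What I expect the Divide node to compute: depth normalized against the
// max range I care about. MaxRangeCm is a placeholder value.
float NormalizedDepth(float PixelDepthCm, float MaxRangeCm = 5000.f)
{
    return FMath::Clamp(PixelDepthCm / MaxRangeCm, 0.f, 1.f);
}
```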

Any ideas?

SceneDepth can be used in either post-process or translucent materials.

I’d just like to point out that doing anything 91,648 times in Blueprint is going to be super slow, especially if you’re doing it in real time. Just the for-loops themselves are slow.

Here’s my latest progress:

And here’s the corresponding post processing material network:

My issues now are:

  1. How can I apply this for just a single instance (e.g. a mouse click or button press) at a unique world location, similar to how I had it before? For example, I want to be able to go up to some meshes, “flash the LIDAR”, see the colored pixels on the meshes based on scene depth (as if the pixels were projected from the first-person camera), and then move around the 3D space without the pixels leaving the meshes where they were projected. Basically, the same way my old implementation let me see the points “locked” in space while I moved around at will.

  2. How can I blend across the full RGB space? It seems I can only blend two RGB colors. Previously, I was linearly interpolating the HSV hue from 0.0 to 359.0 to get the full ROYGBIV spectrum.
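For reference, here’s roughly what my old hue interpolation did, in C++ form:

```cpp
#include "Math/Color.h"
#include "Math/UnrealMathUtility.h"

// Alpha in [0,1] maps hue across 0..359 degrees at full saturation/value,
// sweeping the full ROYGBIV spectrum.
FLinearColor SpectrumColor(float Alpha)
{
    const FLinearColor HSV(FMath::Lerp(0.f, 359.f, Alpha), 1.f, 1.f);
    return HSV.HSVToLinearRGB(); // interprets R,G,B as H,S,V
}
```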

Thanks for the continued help!

So after some research, I’m starting to think a Post-Processing material isn’t going to cover what I want.

I’ve included a short clip to demonstrate what I have:

http://gfycat.com/GeneralWarmBlobfish.gif

(Direct link in case it doesn’t display: GIF | Gfycat)

I’m using the pre-packaged First Person Template and turning off all the white cubes that are placed in the level.

The graphical effect is all done through the First Person Character blueprint. You can see the game freeze at the “ping” while it calculates and displays. You can’t really see the framerate drop because the GIF quality is so bad (I couldn’t find a way to capture it at higher quality).

I’d like to make this graphical effect faster and denser. The “ping” doesn’t necessarily have to be real-time (although I would prefer it to be), but I do want to be able to navigate around the overlaid geometry in real time at a good FPS. I’m hoping to get to something like this:


(the player would have “pinged” the scene at the blank disc in the middle, then moved through the world to the view currently shown)

Any ideas? Perhaps sprites or instanced static meshes? (I unfortunately don’t know how to use either of them yet, but I’ve seen them thrown around in some of my research.)

Thanks in advance.

Ah, I see… So you don’t want this effect to update when the player changes position, but to cast a coloration on the environment in the form of a static point cloud which then remains until the player pings again?

Actually, what you’re doing with traces makes good sense, then.

What I would maybe advise is that you try to break up the traces… If you search the forums you’ll find someone who created a macro which processes a ForLoop across multiple ticks. You could use this to process, say, 500 traces every tick. That SHOULDN’T be so slow as to cause an appreciable framerate drop, I wouldn’t think (traces are fairly cheap as long as you aren’t trying to process 90,000 of them in a single tick!).

If your game is running at 60 FPS, 91,648 traces at 500 per tick is roughly 183 ticks, so the entire “LIDAR ping” operation would take about 3 seconds, which isn’t too long. Visually, rather than appearing as a framerate hitch, it would appear as a “radar sweep” type effect where the point cloud materializes in vertical/horizontal bands that sweep across the screen.

You would have to take a couple of extra steps to make this work right: store the player’s location at the time of the sweep as a vector variable and use that value as you iterate (otherwise the process would break if the player moved while the sweeps were occurring), and of course make sure the traces can’t touch any part of the player himself.
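In sketch form, it might look like this (class and member names are hypothetical; Directions would be your precomputed set of trace directions):

```cpp
#include "GameFramework/Actor.h"
#include "Engine/World.h"

// Time-sliced ping: process a fixed budget of traces per tick instead of all
// 90k at once. ALidarPinger and its members (PingOrigin, NextIndex,
// Directions, RangeCm, HitPoints) are hypothetical names.
void ALidarPinger::StartPing()
{
    PingOrigin = GetActorLocation(); // freeze the origin so later movement can't skew the sweep
    NextIndex  = 0;
}

void ALidarPinger::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    const int32 TracesPerTick = 500;
    FCollisionQueryParams Params;
    Params.AddIgnoredActor(this); // make sure traces can't hit the player

    for (int32 i = 0; i < TracesPerTick && NextIndex < Directions.Num(); ++i, ++NextIndex)
    {
        const FVector End = PingOrigin + Directions[NextIndex] * RangeCm;
        FHitResult Hit;
        if (GetWorld()->LineTraceSingleByChannel(Hit, PingOrigin, End, ECC_Visibility, Params))
        {
            HitPoints.Add(Hit.ImpactPoint); // render these however you like
        }
    }
}
```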

The more pressing question, and I have no answer to it, is what sort of performance cost is incurred by the actual DrawDebugPoint operation. If each point is generated with its own draw call, nothing you do will save your framerate from the cost of 90,000 draw calls.

Thanks for the reply, that was a great idea. The macro was super helpful!

Check it out in action (from left to right in that little area: bush, chair, UE4 material previewer, rock):


I’m getting happy with the resolution of the effect; it’s starting to look promising on complex geometry:


Now I just need a way to efficiently render the points. My ideas are:

  1. Instanced Static Meshes. Instead of a draw debug point, I could use an instanced static mesh of, say, a small cube. From my understanding, instanced static meshes are efficient because every instance shares the same mesh data and they’re drawn together rather than as separate objects. Is this feasible, and if so, how would I implement it? (See the sketch after this list.)
  2. Procedural Mesh Generation. Would it be possible to take the projection points in 3D space, store them in an array, and create a single mesh using them as vertices? Could I then display that mesh as just its vertex points?
  3. Sprites? Aren’t 2D graphics less expensive than 3D? Could I simply display a 2D square at each point? That’s essentially what I’m doing now, but I’ve read that drawing debug points is expensive.
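For idea 1, as far as I can tell from the API, the per-point call would be something like this (component setup omitted; the names are mine, and I haven’t verified the performance):

```cpp
#include "Components/InstancedStaticMeshComponent.h"

// Idea 1 in sketch form: one shared cube mesh, one instance per LIDAR hit.
// PointCloudISM is a hypothetical UInstancedStaticMeshComponent* member with
// a small cube static mesh assigned; all instances render as one batch.
void ALidarPinger::AddPoint(const FVector& ImpactPoint)
{
    // Scale 0.02 turns the default 100 cm cube into a ~2 cm point.
    const FTransform InstanceXform(FRotator::ZeroRotator, ImpactPoint, FVector(0.02f));
    PointCloudISM->AddInstanceWorldSpace(InstanceXform);
}
```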

Any help, ideas, insight, or examples?

Thanks in advance.

Using an instanced static mesh might work well, though obviously the tri count is the concern (90k points * 8 verts per cube = 720k verts for the LIDAR mesh, and none of it will be culled due to how instanced meshes work), but it’s worth a shot. I use instanced static meshes for people in stadium seats (just planes, so 4 verts each), and while my count isn’t 90k it’s definitely in the tens of thousands, and the framerate impact is negligible with simple shader instructions (matte color, unlit).

I think a PP material is the way to go here. It might be worth checking out his stuff; he has a scanning PP material that might be of interest.

That sounds like something I would like to try. Do you have any examples you can recommend, or can you post an example of your implementation?

Wow, that looks pretty close. I’ll give the PP material another try.

Why not just capture the depth buffer, scale it down, and feed it to a shader to splat some points onto the scene?
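Something along these lines, assuming a SceneCaptureComponent2D on the player (member names are made up):

```cpp
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"

// Sketch of the capture setup: render scene depth into a small render target
// that a material (or a CPU readback) can then sample. DepthCapture is a
// hypothetical USceneCaptureComponent2D* member on some actor.
void ALidarPinger::SetupDepthCapture()
{
    UTextureRenderTarget2D* DepthTarget = NewObject<UTextureRenderTarget2D>(this);
    DepthTarget->InitCustomFormat(256, 256, PF_FloatRGBA, /*bInForceLinearGamma=*/true);

    DepthCapture->CaptureSource = SCS_SceneDepth; // capture depth instead of color
    DepthCapture->TextureTarget = DepthTarget;
}
```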

I needed something quite similar. Thanks for the tips~