Display ~100,000 "points" without destroying performance

I’m attempting to implement a LIDAR-type graphical effect, similar to these examples:

I’ve solved the issues of projecting the rays onto the geometry, but now I need a way to efficiently display/render the points. Here’s what I have thus far:

This achieves my desired goal; however, it completely kills my performance. I’m new to game creation and UE4, so I think my knowledge is limiting what I can try. Right now, I’m simply calling “Draw Debug Point” after performing a “Line Trace by Channel”:


The line traces occur through 2 for-loops and produce 91,648 total points in the spherical projection.
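(For anyone who prefers code to Blueprint nodes, the logic is roughly the C++ sketch below; the class name, angular step sizes, and trace channel are placeholders rather than my exact setup.)

```cpp
// Rough C++ equivalent of the Blueprint logic: two nested loops sweep a sphere
// of directions, line trace each one, and draw a debug point at every hit.
// The 1-degree steps here are illustrative, not the exact resolution used.
#include "DrawDebugHelpers.h"
#include "Engine/World.h"

void AMyLidarCharacter::PingDebugPoints() // hypothetical character class
{
    const FVector Origin = GetActorLocation();
    const float MaxRange = 10000.f;

    for (int32 Pitch = -90; Pitch <= 90; Pitch += 1)      // vertical sweep
    {
        for (int32 Yaw = 0; Yaw < 360; Yaw += 1)           // horizontal sweep
        {
            const FVector Dir = FRotator(Pitch, Yaw, 0.f).Vector();
            FHitResult Hit;
            if (GetWorld()->LineTraceSingleByChannel(
                    Hit, Origin, Origin + Dir * MaxRange, ECC_Visibility))
            {
                // Each hit becomes one persistent debug point; drawing tens of
                // thousands of these is the expensive part.
                DrawDebugPoint(GetWorld(), Hit.ImpactPoint, 4.f,
                               FColor::Green, /*bPersistent=*/true);
            }
        }
    }
}
```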

I’ve tried displaying the points as sprites, static meshes, and instanced static meshes, but all those functions tanked my performance much harder than “Draw Debug Point”. That said, I likely could have implemented those functions incorrectly. I’ve also tried using a Post-Processing Material, but I don’t think that’s the way to go from the research I’ve done.

Any ideas? I’m not sure what else to try.

Thanks in advance.

Isn’t that achievable with a Post Processing Effect…?

Or shading at least.

How would it be done with a Post Processing Effect?

The closest I’ve seen is this: Post Process Sonar Pulse Material - Rendering - Epic Developer Community Forums

But even that and others that I’ve experimented with don’t keep the shadows when I move away from the “ping” location.

That’s a really cool effect! The only optimization I could think of is lowering the number of points at the bottom, where you hardly need any, I believe.

The problem is that you are doing line tracing every frame and that’s why it’s so expensive (if I am not mistaken). You need to fake the effect, rather than actually make it.
This should be achievable with smart usage of Post Processing and Shaders, but I am no expert on either.

I will just say that the effect is awesome; I wish you the best of luck in figuring it out.

The trace was expensive because I was trying to do it all in one tick, but I found a macro on the forums that allows a for-loop over multiple ticks. That allows me to lose minimal (i.e. unnoticeable) performance when I do the “ping”. So I don’t think that’s the hold up.
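(For reference, the idea behind that multi-tick loop is roughly the sketch below; this is not the actual forum macro, and PendingDirections, PingOrigin, HitPoints, and MaxRange are made-up member names.)

```cpp
// One way to spread the trace loop over many ticks instead of doing it all at
// once. PendingDirections (TArray<FVector>) would be filled once when the ping
// starts; HitPoints collects the results for whatever draws the points later.

void AMyLidarCharacter::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    const int32 TracesPerTick = 2000;   // tune so the per-frame cost stays small
    int32 Done = 0;

    while (PendingDirections.Num() > 0 && Done < TracesPerTick)
    {
        const FVector Dir = PendingDirections.Pop();
        FHitResult Hit;
        if (GetWorld()->LineTraceSingleByChannel(
                Hit, PingOrigin, PingOrigin + Dir * MaxRange, ECC_Visibility))
        {
            HitPoints.Add(Hit.ImpactPoint);
        }
        ++Done;
    }
}
```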

From what I’ve found online, it seems like “Draw Debug Point” is an inefficient drawing/rendering method, so I think that’s where the hang up is coming from.

Thanks! I’m optimistic it can be solved. I’ve seen it done before in real-time with jMonkeyEngine, so I’m sure UE4 can handle it. I just don’t have enough 3D graphics or game engine knowledge…

The pictures you’ve shown really look like shading. Unless I see someone do it in code, I refuse to believe that it’s rendered like that in real-time; I think it’s faked with shading or post processing.

aight, im just gonna throw in my 2cents :slight_smile:

i’d say try it with instanced static meshes (with cubes as the meshes), but instead of putting ALL points into one instanced mesh, bunch them into several ones. like 10k per batch, and generate the batches in a way they’re close to each other (like every 40 degrees from player perspective or something).
reason/thinking behind: iirc all instanced static meshes are rendered in a batch, and so are not occluded by the cam frustum; i guess that’s why you’ve seen no performance increase using it before (basically rendering all meshes around you, even though you can see only a fraction; the ones in the 90° cam frustum). so breaking those up should allow for both batching them together and still occluding most of the unneeded ones…

makes sense? ^^

Yeah, I can confirm that you can just throw thousands and thousands of instances into a level with essentially negligible performance impact; I do this with 64 separate actors, each of which generates a huge number. My advice would be 36 instanced static mesh components, one per 10° interval, each spawning an instance of itself for each point in that chunk. A simple material parametrized to compute each instance’s position relative to a world-space center-point vector parameter, and to derive the color from that, would be inexpensive, and using a simple cube with an unlit shading model would be quite performant.
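Something like the sketch below is what I mean; it’s not a drop-in implementation, and the actor class, mesh, and material names are placeholders:

```cpp
// Sketch of the suggested setup: 36 instanced static mesh components, one per
// 10-degree yaw slice, so the engine can cull whole slices that are off-screen.
#include "Components/InstancedStaticMeshComponent.h"

void AMyLidarPointCloud::BuildBuckets() // hypothetical actor
{
    Buckets.SetNum(36); // TArray<UInstancedStaticMeshComponent*>
    for (int32 i = 0; i < 36; ++i)
    {
        Buckets[i] = NewObject<UInstancedStaticMeshComponent>(this);
        Buckets[i]->SetStaticMesh(TinyCubeMesh);        // small, low-poly cube (assumed asset)
        Buckets[i]->SetMaterial(0, UnlitPointMaterial); // simple unlit material (assumed asset)
        Buckets[i]->RegisterComponent();
        Buckets[i]->AttachToComponent(GetRootComponent(),
                                      FAttachmentTransformRules::KeepRelativeTransform);
    }
}

void AMyLidarPointCloud::AddPoint(const FVector& HitLocation)
{
    // Pick the bucket from the yaw angle of the hit relative to the sweep center.
    const FVector ToHit = HitLocation - GetActorLocation();
    const float YawDeg = FMath::Fmod(
        FMath::RadiansToDegrees(FMath::Atan2(ToHit.Y, ToHit.X)) + 360.f, 360.f);
    const int32 BucketIndex = FMath::Clamp(int32(YawDeg / 10.f), 0, 35);

    // Instance transforms are relative to the component; with the actor at the
    // world origin this is effectively world space.
    Buckets[BucketIndex]->AddInstance(
        FTransform(FRotator::ZeroRotator, HitLocation, FVector(0.05f)));
}
```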

That is pretty sweet. I am with the crowd that says you should deal with this in the shader, aka materials.

The exact implementation will be up to you, but I am betting you could use a material parameter collection that updates based on player location, with all materials updating accordingly from that location. This could be a one-time update, or real-time.

If you look in my latest map generation thread, find the point where I first got the map bent into a spherical shape. That is all in the material; the whole map is actually flat hexagons. I can scroll around and the map gets bent into a sphere based on the difference between the player location and any given mesh.

So for your scenario, you are going to want a similar setup where your material parameter collection updates based on player location, or really any vector, and each material activates the lidar effect as required. Your material parameter collection will also need a value to act as the toggle to turn the lidar on and off.

It sounds like you already have the math worked out which is the hard part. Now you just need to get that into a material function and put that function in all materials you want to be affected by the lidar.

Just an idea, but using something like this you could make in-game explosions create a “shadow” of where a person was standing before they were blown up.

It very well could have been faked with shading or post processing. I can’t link to a video of the implementation I saw (I saw it being used while I was working in a robotics lab; the graphics of the robot and LIDAR visualization were done with jMonkeyEngine in real-time), but it’s similar to this: https://www.youtube.com/watch?v=_KknQpzIflU&feature=youtu.be&t=87 (Note, I have absolutely no idea what graphics environment is being used here, but ours used a very, very similar view using jMonkeyEngine)

Could you provide a blueprint example? I’m not sure how to put them in different batches/chunks. I can’t find any understandable examples on how to use instanced static meshes from searching around the web.

Here’s what I tried:

And here’s what it’s producing (very, very, very slowly…):

Could you elaborate on what you mean by a “material parameter collection”? I don’t understand what that is.

So I would need to apply this material to every mesh in the level, not a global Post-Processing Material, correct?

I’ve experimented with a Post-Processing Material before, but it didn’t come out very well (couldn’t figure out how to “ping” it in one place to get the shadow effect):

Sorry for the confusion, the Material Editor is just super confusing to me.

That is a pretty cool idea I hadn’t considered. I’m mostly interested in having a player navigate the “sensed space” without being able to explicitly see it.

You add a SINGLE Instanced static mesh component, then your loop should run an Add instance on it over and over. What you’re doing is essentially the same as 100,000 individual static meshes, since you’re generating a new instanced mesh component before making the instance!

An ISMC is like a mesh template that you can produce transformed copies of in a single draw call. You want ONE ISMC, many instances.

But since each ISMC is a single draw call, rendering all the points as one MIGHT be less-than-optimal since off-screen points can’t be culled. Adding 36 draw calls is a pretty minor performance hit, so I suggested doing a single ISMC for each 10° interval of hits, but I’d proof-of-concept it using a single ISMC pushing all the instances before messing with that setup.

As for the visual impact, use a single VERY small cube as the mesh; it looks like you’re using a large mesh for what should be a point. I’d also go ahead and make a simple unlit green material to use for debugging; later on we can work out a distance-based coloration, but it’s better to use a simple material rather than the WorldGrid default, which is actually quite a complex material and would cost a lot to render across 800,000 verts…
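In C++ terms, the pattern I’m describing is roughly the sketch below (PointCloud, TinyCubeMesh, and UnlitGreenMaterial are assumed member names, not anything from your project):

```cpp
// The key correction: create ONE instanced static mesh component up front, then
// only call AddInstance per hit. Creating a new component per point behaves
// like 100,000 separate static meshes.
#include "Components/InstancedStaticMeshComponent.h"

void AMyLidarCharacter::InitPointCloud()
{
    PointCloud = NewObject<UInstancedStaticMeshComponent>(this);
    PointCloud->SetStaticMesh(TinyCubeMesh);          // own 8-vert cube, not the default cube
    PointCloud->SetMaterial(0, UnlitGreenMaterial);   // cheap unlit debug material
    PointCloud->RegisterComponent();
}

void AMyLidarCharacter::AddHit(const FVector& HitLocation)
{
    // Very small scale so the cube reads as a point. Instance transforms are
    // relative to the component; with the actor at the world origin this is
    // effectively world space.
    PointCloud->AddInstance(
        FTransform(FRotator::ZeroRotator, HitLocation, FVector(0.02f)));
}
```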

Material parameter collections allow you to update values in a material from blueprint. You make the param collection in the content browser, and then it is kind of like adding variables to it. Then you call it in the material and set it in the blueprint.

You will have to look for some tutorials and use the docs, but that is really the easy part.

In the material, it will work like normal except you filter it through a section where your lidar math is being accomplished. This is the same place you will call your param collection, which in your case will provide the vector, toggle, etc.
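If it helps, here is roughly what setting the collection looks like from code; the Blueprint “Set Vector/Scalar Parameter Value (Collection)” nodes do the same thing. The collection asset and the “PingLocation”/“PingActive” parameter names are assumptions for illustration:

```cpp
// Sketch of driving a Material Parameter Collection from gameplay code so every
// material referencing the collection updates at once.
#include "Kismet/KismetMaterialLibrary.h"
#include "Materials/MaterialParameterCollection.h"

void AMyLidarCharacter::UpdateLidarParams(bool bPingActive)
{
    const FVector PingLocation = GetActorLocation();

    // LidarParamCollection is an assumed UMaterialParameterCollection* member
    // pointing at the collection asset made in the Content Browser.
    UKismetMaterialLibrary::SetVectorParameterValue(
        this, LidarParamCollection, TEXT("PingLocation"),
        FLinearColor(PingLocation.X, PingLocation.Y, PingLocation.Z));

    UKismetMaterialLibrary::SetScalarParameterValue(
        this, LidarParamCollection, TEXT("PingActive"),
        bPingActive ? 1.f : 0.f);
}
```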

Getting somewhere with ISMCs:


The problem now is the reverse of my original problem. “Pinging” the scene is super slow, but once the sweep is finished, I can move around the environment at a good FPS.

Also, the material isn’t showing up, and searches haven’t yielded any appropriate solutions. I have it set as an unlit green color, and here’s how I’m calling the ISMC and adding the instances:


Any ideas?

Also, I don’t understand the bolded sentence; what do you mean by “off-screen points can’t be culled”?

Going to try this next, but I’m still not understanding:

  1. I invoke the Material Parameter Collection (MPC) in the Material blueprint and set the values of the MPC in the (in my case) FirstPersonCharacter blueprint?
  2. This material that is using the MPC would need to be put onto every static mesh in the level?
  3. Would I leave the LIDAR math in the FirstPersonCharacter blueprint or move it to the Material blueprint?

You probably need to mark the checkbox in the material (under Usage) that allows it to be used with Instanced Static Meshes.

  1. Yes.
  2. Put into every material that you want the LIDAR to work with. If you haven’t already made any master materials, now would be a good time to look into those as well since they save tons of time.
  3. Put the math in the material. The blueprint would just contain whatever logic is needed to determine the vector and other variables that you are setting for the MPC.

I would go with a different approach.

  1. Make a single mesh out of a grid of let’s say 1024x512 cubes.
  2. Unwrap each of your cubes in such a way that, with a 1024x512 texture, each cube is mapped to a single specific texel of the texture (see the sketch after this list).
  3. Make a render-to-texture material in UE4 and render out the world-space position of what you see from the first-person camera. For better accuracy you could render the position in the camera’s local space, or render only depth information and recover the position in a later shader.
  4. Now you can plug this “render to texture” into your grid-of-cubes material and offset each cube by the value read from the texture.
  5. An extra step to get it looking more like your reference LIDAR image is to scale each cube individually in proportion to its distance from the camera, or to pre-calculate the scaling factor when you render your scene into the “render to texture”.

This should be super fast. Perhaps it would be even faster if you use a geometry shader and construct the cubes on the fly, so that you only pass an array of vertices (the cube centers) with UV coordinates to the video card.
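To illustrate step 2, the per-cube UV mapping is just this kind of math (a sketch; the 1024x512 size matches the suggested grid):

```cpp
// Every cube in the 1024x512 grid gets UVs that all point at one specific texel
// of the capture texture, so the material can look up "its" world position (or
// depth) and offset the whole cube there.
#include "CoreMinimal.h"

FVector2D TexelUVForCube(int32 CubeX, int32 CubeY)
{
    const float TexWidth  = 1024.f;
    const float TexHeight = 512.f;

    // Sample the center of the texel to avoid bleeding into neighbours.
    return FVector2D((CubeX + 0.5f) / TexWidth,
                     (CubeY + 0.5f) / TexHeight);
}

// When building the combined mesh, every vertex of cube (x, y) would be assigned
// TexelUVForCube(x, y) as its UV, so the world position offset in the material
// reads the same texel for the whole cube.
```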

Very cool effect!

What you are trying to achieve has many parallels with lighting algorithms in general, in particular shadow mapping. You are probably familiar with it but I would still like to break it down because I see a huge possibility for performance gain. In shadow mapping you:

  1. Render the scene from the POV of a light source into a texture, the shadow map, rendering depth information only
  2. Render the scene from the POV of a player, whilst:
  • For every surface point visible to the player, you sample the shadow map to check whether that same surface point is also visible to the light source or not.
  • If the surface point is visible in the shadow map, you consider the surface lit, or else it’s unlit.

For the LIDAR effect, as I see it you only need to know for each surface point visible to the player:

  • Whether it is visible from the ping location to determine whether to draw a point
  • The distance to the ping location to determine the color

This is the same input as for step 2 in shadow mapping. You could see it like this: your ping location is actually a special light source, except instead of combining a lighting factor with the surface color, you draw a color based on the distance (or black if it’s not visible). So my advice would be to use the same approach as shadow mapping, for both the visibility info generation (step 1) and the rendering (step 2). In step 2 you can apply some tricks like only rendering parts of the surface to get dots.

The performance gain for the point generation will be huge, like when you compare normal (forward or deferred) rendering to physically based ray tracing. You don’t need the accuracy of ray tracing in your case.
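To make the comparison concrete, the per-point decision could look something like the sketch below; it is written as plain C++ for readability even though in practice it would live in a shader or material, the depth-map sample is simply passed in as a parameter, and the green-to-red color ramp is my own choice:

```cpp
// Shadow-map-style test for one surface point: compare its distance to the ping
// against the depth recorded from the ping's point of view, then color by distance.
#include "CoreMinimal.h"

FLinearColor ShadePoint(const FVector& SurfacePoint,
                        const FVector& PingLocation,
                        float DepthMapSample,   // depth stored for this direction in the ping's depth map
                        float MaxRange)
{
    const float DistanceToPing = FVector::Dist(SurfacePoint, PingLocation);

    // If something closer to the ping was recorded for this direction, the point
    // is occluded from the ping -> draw nothing (black).
    const float Bias = 1.f; // small bias to avoid self-shadowing artifacts
    if (DistanceToPing > DepthMapSample + Bias)
    {
        return FLinearColor::Black;
    }

    // Visible: color by normalized distance, e.g. near = green, far = red.
    const float T = FMath::Clamp(DistanceToPing / MaxRange, 0.f, 1.f);
    return FLinearColor(T, 1.f - T, 0.f);
}
```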

When I say “points off-screen can’t be culled” what I mean is this:

the way occlusion culling works is that if UE determines that an object is not visible (either being behind an object or off camera) it doesn’t bother to render it. Since all instances of an ISMC are treated as a single draw call, they are treated as a single MESH, meaning if you can see a single point in the cloud, the engine will attempt to render ALL 100,000 cubes. This is a waste of processing power since the main use of this is to splay the cubes out around the player in all directions, and for many uses he would only need to look at like 30% of them (since all the points behind him and off to the side wouldn’t be visible). It’s sort of wasteful spending all that GPU power drawing cubes behind the camera every frame.

BUT, and I’ll be honest, I didn’t think about the performance cost of spawning ISMC instances in realtime. My ISMC actors spawn their instances via a construction script and it DOES take quite a while to run (full minutes, in fact, if they’re tweaked too heavily) and while I considered the performance impact of RENDERING all those meshes, I certainly didn’t think about the performance impact of GENERATING them with the LIDAR sweep itself… hm.

I just tested a modified version of my crowd-spawning actor which, rather than constructing with all of the instances, spawns them after 25 seconds have elapsed in-game; I got a frame lag of about 1 or 2 seconds, spawning 126,000 instances all at once. Granted, my machine is pretty powerful, and the actors I’m spawning are only flat planes rather than cubes (BTW, are you spawning UE4’s default cube mesh? It has far more verts than you need; make your own 8-vert cube mesh, which might help)… but it still seems to me that if you’re dividing up about 100,000 points over MULTIPLE ticks, the frame hit from spawning the instances shouldn’t be that large. If I reduce the size of the object so that it only spawns 12,000 or so (i.e. 1/10th the size), the frame time only increases by a couple of milliseconds for that frame…

What I’ll suggest to mask this problem is to tweak your loops-per-tick algorithm. If you spawn the instances in 1000-point chunks per tick, your FPS hit should be very minor (assuming you’re targeting 60+ FPS; if you’re going for 144+ FPS it might be a problem) and the effect itself will still only take about 1.5 seconds, which looks perfectly fine in-game and reads a lot like an actual radar sweep.
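As a sketch, the per-tick chunking could look like this (PendingHits and PointCloud are assumed members; at ~92,000 points and 1000 instances per tick, the sweep finishes in roughly 92 frames, which is about 1.5 seconds at 60 FPS):

```cpp
// Add the queued hit points to the instanced static mesh component in fixed-size
// chunks each tick, instead of all at once.

void AMyLidarPointCloud::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    const int32 InstancesPerTick = 1000;
    int32 Added = 0;

    // PendingHits (TArray<FVector>) holds the trace results not yet turned into instances.
    while (PendingHits.Num() > 0 && Added < InstancesPerTick)
    {
        const FVector HitLocation = PendingHits.Pop();
        PointCloud->AddInstance(
            FTransform(FRotator::ZeroRotator, HitLocation, FVector(0.02f)));
        ++Added;
    }
}
```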