Scale Particle Size by Camera Distance to avoid Massive Overdraw?

I feel like I should know this, but has anybody had success scaling the size of a particle billboard based on its distance from the camera? If it were a sprite it probably wouldn’t be too tough, but beams are another story.

Ignore the debug UI. The two screenshots below show why I need this: the Material currently blends between a thick and a thin line based on Camera Distance, so the line still shows up when far away but doesn’t take over when you’re close to the screen. The problem is, as you can see in the Wireframe view, the sprite sheets pretty much take over the entire screen when they’re close to the camera, which is almost all the time. That’s a silly amount of overdraw and needs reducing.

Anybody know if this is possible as-is? If not, I’ll have to create a custom Cascade module of some kind.

The Material, to show you what I mean about the thickness blending.
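For anyone following along without the screenshots: the blend is essentially a camera-distance-driven lerp between a near width and a far width. A minimal CPU-side sketch of that logic (the function name, widths, and falloff range are all illustrative, not the actual material parameters):

```cpp
#include <algorithm>

// Sketch of the Material's thickness blend: lerp from a thin near-width
// to a thick far-width as the camera distance crosses a falloff range.
// All names and values here are illustrative, not the real material nodes.
float BlendedThickness(float CameraDistance,
                       float NearWidth, float FarWidth,
                       float FalloffStart, float FalloffEnd)
{
    // 0 at FalloffStart, 1 at FalloffEnd, clamped outside the range.
    float Alpha = (CameraDistance - FalloffStart) / (FalloffEnd - FalloffStart);
    Alpha = std::clamp(Alpha, 0.0f, 1.0f);
    return NearWidth + (FarWidth - NearWidth) * Alpha; // plain lerp
}
```

Note this only changes the *shaded* width; the underlying quads stay the same size, which is exactly the overdraw problem described above.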

What you could do is use the LOD system. It’s kind of backwards… but you can use LOD states and distances to change the amount and size of the particles. It would be nice if the camera fade function faded the particle cards out themselves rather than just lowering the opacity in the shader.

Why not just change the radius of the particle bounds based on a distance percentage relative to the screen resolution?
Or use scene depth to modify the beam’s thickness?

Have you tried these? If so, what were the outcomes?

At least with scene depth, I would assume he’d have the same issue as with the camera fade function now: it will fade the beam out based on depth, but if you go into wireframe mode the particle size would still be the same. Unless you can modify the size of a particle with Blueprints connected to a material function.

I was referring to using scene depth as a vertex weight map for displacement.

I thought about that, but honestly haven’t tried it with Beams, since I don’t know whether Particle Sprites even respond to WPO. Do they? It would definitely be the easiest way to do it if they did.

If they do, what’s the quickest way to do it? Abs Pos + (VertexNormal * ScaleFactor), maybe?

I can’t do that, because the Particle System is constantly at 0,0,0, so the distance from the beam sprites themselves doesn’t matter. I also can’t change how many beam segments are created, since 64 are required to align the beams to the prediction spline for the Satellite; otherwise, when you zoom out far enough, chunks will be missing, so the count needs to stay constant.

The idea is to have the particle visible at all times, but the thickness has to change based on how far away you are; otherwise it’s just not visible at large distances.

I would perhaps look into bypassing beams entirely and using a node like “Spline Thicken”, where you drive the thickness of the beam directly. You would actually use the tangent X vector, since most simple strips have flat normals, meaning the faces would just move rather than actually expand.
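To illustrate that point about flat normals: pushing both edge vertices of a strip along a shared normal just translates the quad, while pushing them in opposite directions along tangent X actually widens it. A quick 2D toy sketch of one beam quad’s cross-section (not engine code; the names are mine):

```cpp
#include <cmath>

// Toy 2D cross-section of one beam quad: two edge vertices.
struct V2 { float X, Y; };

// Distance between the two edge vertices, i.e. the visible strip width.
float Width(V2 A, V2 B)
{
    return std::sqrt((A.X - B.X) * (A.X - B.X) + (A.Y - B.Y) * (A.Y - B.Y));
}

// Move a vertex along a direction by Amount (what a WPO offset does).
V2 Offset(V2 P, V2 Dir, float Amount)
{
    return { P.X + Dir.X * Amount, P.Y + Dir.Y * Amount };
}
```

With a flat strip both verts share the same normal, so offsetting both by `Normal * Scale` leaves `Width` unchanged for any scale; offsetting the two edges by `+TangentX` and `-TangentX` is what actually changes it.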

Ah, that’s interesting… By bypassing them, do you mean simply removing the InitialSize module of the Particle System? The reason I use beams at the moment is that I can plot 64 points using a Spline Component, then match the Beam Particles up to the spline based on their indices (setting the target/source locations and tangents). As a bonus, I also get interpolation between the points should I need it (for larger orbits).

Currently I do this using just one emitter in a Particle System. How would I feed the correct tangent vectors into Spline Thicken to get that result? Is it simply VertexNormal (Cross) UpVector?

In case you are still interested:
Note: since it is a flat surface (a particle sprite), you can calculate the angle and offset the vertex normal based on its rotation and position. Once again, this is probably not the best way to do it; just a quick thought I had to get around a potential problem.


Unfortunately, neither solution has worked so far. Spline Thicken is a difficult node to understand… I tried toying with it, but the biggest problem is that any particles with WPO applied don’t seem to draw in Wireframe mode, so it’s hard to tell what’s happening.

EDIT: Solved it. I think I’ve basically replicated what Spline Thicken is doing: I use the UVs to distinguish the verts at the top/bottom of the spline, and the tangent X vector as the direction to multiply the offset along. It’s currently backwards, but it appears to be working perfectly.
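For anyone trying to replicate this, the per-vertex offset boils down to something like the sketch below (a CPU-side sketch of the material logic, not the actual node graph; `HalfWidth` would come from the camera-distance blend, and all names are mine):

```cpp
struct Float3 { float X, Y, Z; };

// Sketch of the WPO: UV.y distinguishes the two long edges of each beam
// quad (0 on one edge, 1 on the other), and the sprite's tangent X gives
// the across-the-strip direction. Remap UV.y to -1/+1 so the two edges
// move apart, scaled by half the desired width.
Float3 BeamEdgeOffset(float UvY, Float3 TangentX, float HalfWidth)
{
    float Side = (UvY < 0.5f) ? -1.0f : 1.0f; // flip the sign here if the
                                              // result comes out backwards
    return { TangentX.X * Side * HalfWidth,
             TangentX.Y * Side * HalfWidth,
             TangentX.Z * Side * HalfWidth };
}
```

The “currently backwards” symptom mentioned above would just be the `Side` sign being inverted relative to the UV layout.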


Looks like you got the basics working. I feel you on the wireframe thing; it can be tricky to wrap your head around. FWIW, in those cases I usually make the material translucent and put a grid-like texture on it, or I just check the wireframe box in the material.

The one issue I’ve occasionally had with Spline Thicken is that it depends on the source mesh. Often, if the UVs are flipped, you have to use a negative number for the “Width” or “Height” params, otherwise they will be camera-facing backwards. Unfortunately, I haven’t really been able to think of a way to make that more reliable without requiring all the materials to be 2-sided, which is a waste.

Ah, perhaps that’s why half of the spline occasionally goes invisible… I’ll check that out tomorrow.

It’s an extra draw call for 2-Sided materials, is that correct? Or is there other overhead too? (Aside from the obvious cost of not being able to cull those polygons, I guess.)