Map Render Target texture to object material UV

I’ve tried several different things to no avail, so I’m hoping someone can point me in the right direction. Essentially, I have a Niagara system rendering to a render target (RT) through a SceneCapture2D. That part works: the RT contains an additive rendering of the particles. I want to use that texture to provide a new normal and fake the depth on the particles, giving them a semi-fluid look. The problem I’m having is mapping the texture from the RT back to the pixels of the particles. I thought using Screen Position (Viewport UV) would give me what I want, but it doesn’t appear to. Any suggestions?
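In case it helps, what I’m experimenting with now is projecting each pixel’s world position with the capture’s own view-projection matrix instead of relying on the player view’s Viewport UV. A rough sketch as material Custom node HLSL; the parameter names are placeholders, and the matrix plumbing is my own assumption:

```hlsl
// Custom node body. Inputs (all hypothetical names):
//   Row0..Row3 - the capture's view-projection matrix, passed as four
//                float4 vector parameters (UE materials have no matrix
//                parameter type), updated from Blueprint when the capture moves
//   WorldPos   - Absolute World Position
//   RT         - the render target, wired in as a Texture Object
float4x4 captureViewProj = float4x4(Row0, Row1, Row2, Row3);

// Project this pixel's world position into the capture's clip space.
float4 clipPos = mul(float4(WorldPos, 1.0f), captureViewProj);

// Perspective divide gives NDC in [-1, 1]; remap to UV, flipping Y.
float2 uv = (clipPos.xy / clipPos.w) * float2(0.5f, -0.5f) + 0.5f;

return Texture2DSample(RT, RTSampler, uv);
```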

Are the particles organized in any way?

If you’re spawning them in a cube or grid shape and that’s making its way to the RT, then you could use the amplitude of the RT to move some particles to another location, nearer the camera, etc. Something like the sketch below.
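In plain HLSL terms (how you’d bind the texture on the Niagara side, e.g. through a texture sample data interface, I’ll leave aside; all names here are just for illustration):

```hlsl
Texture2D RT;
SamplerState RTSampler;

// Pull particles toward the camera in proportion to the RT's amplitude
// at their location, so "hot" (dense) regions read as nearer.
float3 OffsetTowardCamera(float3 particlePos, float2 rtUV,
                          float3 cameraPos, float maxOffset)
{
    float amplitude = RT.SampleLevel(RTSampler, rtUV, 0).r;
    float3 toCamera = normalize(cameraPos - particlePos);
    return particlePos + toCamera * (amplitude * maxOffset);
}
```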

The particles are like a spout, rendered additively (so the thicker the flow, the whiter the result, because more particles overlap in that area). Think of looking at a fire hose: the capture would render from the same location as the player camera, and the result would hold the fake depth I want to use for each pixel when rendering the actual particle system to the game camera. Does that make sense?

So it’s like pointing a hose directly at the camera, giving more concentration in the center?

Then you would want to process the particles nearer the center in some fashion (offset their position, change their material, or treat the system as if one material were applied to the particles as a whole)?

Yep. The Render Target texture contains the “hot” areas that would need to be darker. The goal is to fake depth and normals from that data when rendering the particles. My problem is that when I try to sample the RT data that applies to a specific particle while rendering it, I can’t find the right coordinate space.
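Once the sampling works, I figure the normal part should just be finite differences on the RT, treating the accumulated brightness as a heightfield. Something like this in a Custom node (TexelSize and HeightScale would be scalar parameters I’d add; UV is whatever capture-space coordinate solves the mapping problem above):

```hlsl
// Treat the RT's additive brightness as a heightfield and derive a
// fake normal from its gradient via finite differences.
float hC = Texture2DSample(RT, RTSampler, UV).r;
float hX = Texture2DSample(RT, RTSampler, UV + float2(TexelSize, 0.0f)).r;
float hY = Texture2DSample(RT, RTSampler, UV + float2(0.0f, TexelSize)).r;

return normalize(float3((hC - hX) * HeightScale,
                        (hC - hY) * HeightScale,
                        1.0f));
```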

The inspiration is the GDC 2010 presentation on screen-space fluid rendering (Simon Green’s “Screen Space Fluid Rendering for Games”), if that helps.
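If I remember the talk right, the darkening of the “hot” areas would then come from Beer-Lambert absorption driven by the accumulated thickness, roughly:

```hlsl
// Beer-Lambert absorption: thicker fluid lets less background light through.
// thickness comes from the RT's accumulated brightness; absorption is a
// per-channel coefficient (both names are mine, not from the talk).
float3 ShadeFluid(float3 backgroundColor, float3 fluidColor,
                  float thickness, float3 absorption)
{
    float3 transmittance = exp(-absorption * thickness);
    return lerp(fluidColor, backgroundColor, transmittance);
}
```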