Absolute world position -> Draw material to render target BP not working

I have a material that uses absolute world position (including material offsets), and in my custom blueprint I have a function to “spawn” it and instantly draw it to a render target.

Now, I have an idea of why this doesn’t work: possibly the material hasn’t had time to evaluate before it is drawn. But my question is… is there a workaround?

In order to return a World Position, the material needs to be on a mesh, same for Object Position, Vertex Normals etc. Since it is not being rendered to a camera, Camera Position and Camera Vector will not work either. As it is being drawn to a render target, it only has a texture coordinate. There’s no workaround for that fact, but there may be a way to achieve the effect you’re after. What are you trying to do? Maybe I can help.


I’m trying to render “liquid splats” to a render target in a material in real time. I was inspired by Blueprint drawing to render target, but I found that it depended on UVs, which I can’t use since I would like the liquid splatter to be consistent across multiple objects (the splats were stretched). So I needed a solution that used world position instead of a UV channel.

My current setup is this:

The outcome is correct and exactly what I want:

But as you can see, I can’t have multiple splatters, and even if I could, it would run into performance issues. What I would like is to render the outcome to a render target and reuse it in order to overlap multiple splatters (but as you pointed out, I can’t).
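For reference, the core of the mask is essentially a world-space distance falloff. As a rough sketch in custom-node HLSL (not my exact graph; SplatCenter and SplatRadius stand in for my material parameters):

```
// Rough sketch only, not my exact setup: a world-space splat mask.
// Parameters.AbsoluteWorldPosition is what the Absolute World Position
// node resolves to inside a (UE4-era) custom node; SplatCenter and
// SplatRadius are assumed vector/scalar material parameters.
float3 worldPos = Parameters.AbsoluteWorldPosition;
float splat = 1.0 - saturate(distance(worldPos, SplatCenter) / SplatRadius);
return splat;
```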

How many splats do you see yourself needing? One approach would be to record the center location of each splat as a pixel in a CanvasRenderTarget2D. Then, instead of passing vector parameters one at a time, you can pass a lot of them as a texture parameter and loop over it in a custom node. This also means you can have multiple materials using the same texture parameter. My post in this thread will get you something pretty close to that, except use the custom node to control color, not world position offset.

In that material I was able to get 100 locations, changing constantly, updating with tick, working fine. The largest factor impacting performance wasn’t the material; it was recording positions to the array with tick in blueprints, and 100 calls to DrawLine in the CRT2D. If the locations of your splats remain static, and you only update the CRT2D as they are created, you could probably loop over ~1000 locations without degrading performance too badly. I also imagine there’s a way to organize a couple of CRT2Ds so that locations outside of your view could be excluded, but I’d have to give that a bit more thought.

Edit: Oh yeah, instead of using `Texture2DSampleLevel(Tex, TexSampler, float2(float(i)*0.01+0.005, 0.5), 0.0).xyz` you could use `Tex.Load(int3(i,0,0))` to simplify things a bit.
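In sketch form, the custom node body would be something like this (illustrative, not a drop-in: Tex is the texture object input holding one splat center per texel, NumSplats and Radius are scalar inputs, and WorldPos is the wired-in absolute world position):

```
// Sketch of a custom node body looping over splat centers stored
// one per pixel along the first row of the position texture.
float paint = 0.0;
for (int i = 0; i < (int)NumSplats; i++)
{
    // Tex.Load fetches texel i directly; no sampler or UV math needed.
    float3 center = Tex.Load(int3(i, 0, 0)).xyz;
    paint = max(paint, 1.0 - saturate(distance(WorldPos, center) / Radius));
}
return paint;
```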

Hmm, looks interesting, but I didn’t really want a “hard limit” or performance that degrades over time (as more spheres are added); I would like an effect similar to the video game Splatoon.
I was thinking I could feed the absolute world position in as a vector parameter and set the input through blueprints, rather than waiting for the material to calculate it by itself (which it can’t do), but I have no idea how absolute world position works, which I would need to know in order to create such a C++ or BP function.

I have found this thread, which is almost exactly what I would want; RyanB talks about unwrapping and rendering all the meshes to a single texture atlas and then using UV coordinates on the “world texture”, but that would become a memory hog really fast at a decent resolution (unless using level streaming and splitting the map into smaller chunks?).

If you just want to accumulate spheres that affect the whole level, you could accumulate them into a pseudo volume texture with certain bounds. I will be releasing a content plugin fairly soon for that.

The above is correct that doing the unwrap to bake will be fighting resolution if you want your whole level to do it. That method is more intended for individual character hitmask effects, which I went into some detail about in my GDC talk, which should be posted soon.

For now, there are details here:

http://shaderbits.com/blog/authoring-pseudo-volume-textures
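The gist of the sampling is along these lines; this is only a sketch of the idea from the article, with illustrative names, not the plugin’s actual code:

```
// Hedged sketch of pseudo-volume sampling: a 2D atlas laid out as an
// XYTiles x XYTiles grid of Z slices, blended between the two nearest
// slices for smooth filtering in Z. All names are illustrative.
float PseudoVolumeSample(Texture2D Tex, SamplerState TexSampler,
                         float3 UVW, float XYTiles)
{
    float numSlices = XYTiles * XYTiles;
    float zSlice = max(UVW.z * numSlices - 0.5, 0.0);
    float sliceA = floor(zSlice);
    float sliceB = min(sliceA + 1.0, numSlices - 1.0);

    // Map a slice index to its tile's UV offset within the atlas.
    float2 tileA = float2(fmod(sliceA, XYTiles), floor(sliceA / XYTiles));
    float2 tileB = float2(fmod(sliceB, XYTiles), floor(sliceB / XYTiles));
    float2 uv = UVW.xy / XYTiles;

    float a = Tex.SampleLevel(TexSampler, uv + tileA / XYTiles, 0).r;
    float b = Tex.SampleLevel(TexSampler, uv + tileB / XYTiles, 0).r;

    return lerp(a, b, frac(zSlice));
}
```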

I was thinking there should be a decent space-time tradeoff here. Your world texture shouldn’t require a very high resolution. As long as your level geometry isn’t overly complex and no two adjacent pixels in the mapping reference surfaces with large differences in world position, bilinear sampling should produce reasonably accurate results to construct your paint map. Also, splats with noisy edges should be better at hiding any inaccuracies than perfect spheres. In a game like Splatoon, with two teams, only two bitmasks are required to represent painted surfaces. A render target has 32 bits to work with.

I’m not sure what pixel format render targets use by default (I might have to look at the source code to find out), so it may be simple, or it could take some fiddling with binary casting in HLSL to retain full precision. Still, it should be possible to divide the world texture into 16 partitions, sample it at 16 times the resolution to construct a two-color paint mask, and pack it all into a render target with the same footprint as the world texture. Then, when rendering, you can apply a blur of a few pixels to soften the transition between paint and underlying surface.
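The packing itself could look something like this; every name here is illustrative, and whether the bits survive the round trip depends on that pixel format question:

```
// Illustrative only: 16 partitions x 2 teams = 32 bits per texel.
// Depending on the render target's pixel format, asuint/asfloat
// casting may be needed to round-trip the packed word losslessly.
uint SetPaint(uint packed, uint cell, uint team)   // cell: 0-15, team: 0-1
{
    return packed | (1u << (cell * 2u + team));
}

bool IsPainted(uint packed, uint cell, uint team)
{
    return (packed & (1u << (cell * 2u + team))) != 0u;
}
```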

Sorry for not responding sooner (my post didn’t submit), but the method I settled on was to just go with the original solution (draw material to render target). The reason I didn’t go with that all along was that I didn’t like the trade-off of keeping track of each mesh in a database and assigning a “scale by” variable, but I see now that that trade-off is far better than any alternative. I still have quite a bit to go, though, because OnParticleCollide doesn’t return a hit result (which I need in order to effectively use FindCollisionUV).

Thanks to everyone for responding and helping :)

Hey, it’s been a few years; I was curious how your project went and what other hurdles/solutions you encountered along the way.