imposterUV ideas for foliage

Hiya

Just posting in the hopes some people have some ideas on how this could be improved.

I have a thread over in the work in progress section to show how my testing for foliage is going…

https://forums.unrealengine.com/showthread.php?97657-Grass-instancing-using-imposter-sprites

I’ve had to do some hacking to get some things I wanted in there. For instance, I have only rendered one axis, but I have separated out the world position offset to give a fully camera-facing quad. I can also blend between a quad that faces the camera and a quad that faces the camera plane; somewhere in between gives a more natural look to me. A lot of the time it’s not too apparent that the apparent height of the grass isn’t changing when the camera rises (as long as you set a limit on the camera height). The next step is to try and introduce full 360 imposters, which leads me to the following ideas.

  1. Blending.

There is some work that has been done by RyanB on the phase blending between angles. On 360 imposters, I’m guessing this would need to blend between 4 UV sections. That might make things a bit blurry if there aren’t enough samples.

Here’s my idea. It gets a bit crazy and might not work, or might give strange artifacts. The basic idea is that when I render out the Maya images, I do an extra pass where I render a UV offset pass, which would be the UV offsets required to warp to the next angle. This could be done by using object position, but with the camera rotation applied and then a transform to view space? This would need to be done for both horizontal and vertical angles. Then, when the imposter rotates, we can apply a fraction of the UV offset to the texture lookup on both angles and blend between them to mitigate the blur.
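Roughly what I have in mind on the material side, just as a sketch (the texture names and the 0-1 offset encoding are all made up by me, this is not the in-engine function):

// Sketch only: FrameTex is the imposter sheet, OffsetTex the baked per-pixel UV
// offsets that warp one angle onto the next, Phase is how far we are between
// the two neighbouring angles, OffsetScale undoes whatever encoding was baked.
float4 WarpBlendFrames( Texture2D FrameTex, Texture2D OffsetTex, SamplerState Samp,
                        float2 UVA, float2 UVB, float Phase, float OffsetScale )
{
    float2 OffsetA = ( OffsetTex.Sample( Samp, UVA ).rg - 0.5 ) * OffsetScale;
    float2 OffsetB = ( OffsetTex.Sample( Samp, UVB ).rg - 0.5 ) * OffsetScale;

    // Warp frame A forward by Phase and frame B back by (1 - Phase) so the two
    // samples line up before the crossfade, which keeps the blend from blurring.
    float4 A = FrameTex.Sample( Samp, UVA + OffsetA * Phase );
    float4 B = FrameTex.Sample( Samp, UVB - OffsetB * ( 1.0 - Phase ) );
    return lerp( A, B, Phase );
}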

  2. Texture space for all angles.

One limitation is that texture space is limited if you want a lot of angles to reduce popping. Another thing is that if you are using a horizontal/vertical pre-rendering, you waste a lot of real estate on the top and bottom. One idea I had was to investigate a geodesic dome method of calculating the angles. This way the density of the images would be evenly distributed throughout all angles and you can maximize texture space. The downside is that the math is a bit hairier. One upside is that you would blend between 3 textures and not 4, as it would be like a barycentric-coordinate blend.
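The blend part of the dome idea would just be a weighted sum of the three nearest captured views, something like this sketch (it skips the hairy part, which is finding the triangle and the barycentric weights; all names are made up):

// Sketch only: UV0/UV1/UV2 are the sheet UVs of the three nearest captured
// angles and Bary are the barycentric weights of the current view direction
// inside that triangle (Bary.x + Bary.y + Bary.z == 1).
float4 BlendThreeViews( Texture2D FrameTex, SamplerState Samp,
                        float2 UV0, float2 UV1, float2 UV2, float3 Bary )
{
    return FrameTex.Sample( Samp, UV0 ) * Bary.x
         + FrameTex.Sample( Samp, UV1 ) * Bary.y
         + FrameTex.Sample( Samp, UV2 ) * Bary.z;
}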

Edit…
3) Forgot to mention that I am doing the thing from the Kite demo where you pull the quad forward and then push it back with pixel depth offset. This helps with Z-fighting of the quads and makes it seem that the blades intersect a bit…
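In material terms the trick is roughly this (a sketch with made-up names; the two outputs feed World Position Offset and Pixel Depth Offset):

// Sketch only: pull the quad's vertices toward the camera by PullDistance in
// the vertex stage, then push the written depth back by the same distance so
// the quads stop Z-fighting but still appear to sit where they were.
float3 PullTowardCamera( float3 WorldPos, float3 CameraPos, float PullDistance,
                         out float PixelDepthOffset )
{
    float3 ToCamera = normalize( CameraPos - WorldPos );
    PixelDepthOffset = PullDistance;          // wire into Pixel Depth Offset
    return ToCamera * PullDistance;           // wire into World Position Offset
}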

Is there anyone out there that wants to collaborate and make the best imposters we can?!

Your thoughts on blending sound like using motion vectors.

There is actually a version of the Imposter function in engine called “Imposter_MotionVectors”. It was checked in a few versions ago but it is still somewhat experimental, as I haven’t had proper time to make test content and put it through its paces. It is based off the implementation of Flipbook_MotionVectors, which had a bit more time spent on it.

The render to texture blueprint actually supports rendering the motion vectors as well but it is not well documented yet. Should work the same for Flipbooks and imposters.

Re: angles
Yes, the full 3d ones will waste half of the angles since you will likely never see grass from below. It is probably much easier to simply clamp the vertical angle response than to re-engineer a different format. Also, the space at the top is not quite as wasted as you think. From the top, it encodes rotations with the camera looking straight down. You may think of those as wasted since those frames are all simple 2d rotations and thus essentially copies, but with sprite mapping, the imposter would not rotate at all when rotating around that straight-down Z axis view without additional math that knows how many rows each Z angle has in the sheet.

That said, it is possible to remap them using an octahedron or something. I have all of the math for that (both in BP and materials) so it wouldn’t be too hard to do the mapping, just counteracting the unwanted rotations would be a challenge.

Using “Distance Field Alpha” alone may work great for grass, where 90% of the apparent look is coming from the opacity mask, not the base color contrast. This requires some Photoshop steps to convert your alpha to a distance field (I have a material for it as well, heh, but it’s slow as it’s realtime). Then it’s just like using SubUV blending for the opacity, which automatically creates morphing.

Finally, I would suggest thinking about restricting your imposters to only the Z axis rotations (called “Fixed Z” on the imposter function), and then having a separate texture sheet that is just a flat card of the grass as viewed from above. You can use UVs or vertex colors to blend to that other texture to avoid another material section/draw call.

This might not work as well as full 3d imposters, but based on some experimentation done on the Kite demo, using a grass mesh and LOD that has a top-down bake in it really helps the grass maintain its thickness from a distance, and might replace most angles from the imposter without much difference in appearance. Hopefully that made sense.
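On the material side that blend is just a lerp driven by the card’s vertex colour, roughly like this sketch (names are made up, not an engine function):

// Sketch only: the imposter cards and the flat top-down card live in one mesh,
// tagged by vertex colour, so a single material can pick the right sample.
float4 BlendSideAndTop( float4 ImposterSample, float4 TopDownSample, float VertexColorR )
{
    // VertexColorR is painted 0 on the imposter cards and 1 on the top card.
    return lerp( ImposterSample, TopDownSample, VertexColorR );
}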

Cheers Ryan.

As soon as you said motion vectors I had a duh moment, because I’ve actually coded that stuff before (using particles, but same idea). I’ll see what I can spit out of Maya. Did you do anything to render out accurate vectors in terms of world->UV space, or do you just have a mult factor and do it by eye? One thing I’m taking into account is that PerInstanceRandom is affecting the sprite size, and hence the camera pull and pixel depth offset multipliers.

Here’s an update video of where I’m up to…

I haven’t checked out the distance field alpha yet, so I’ll definitely give that a go. Is that something on the material function or is it a material property? (I’m at work at the moment so it will have to wait until tonight.)

Do you have any good resources on ray intersection with a geodesic dome and mapping that to an index space? I had a quick google last night but didn’t get that far.

I’m not too big a fan of the idea of a top sheet, as I’d like the transition to be seamless and I think some nice fidelity would be gained from having full 360 (top hemisphere only) grass imposters. If used with the motion vectors, we could bring the number of rendered angles down and use all the RGB channels to store greyscale images, and hopefully get it all into one 8k map (I’m doing the colour in the material from greyscale images and mapping the black/white to dark green/light green, which helps fake shadows).
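The colour remap itself is nothing fancy, just a lerp between two tints, something like this sketch (the colours are only examples):

// Sketch only: remap the stored greyscale into a dark-to-light green gradient,
// which reads as cheap baked shadowing on the grass.
float3 GreyToGrass( float Grey )
{
    float3 DarkGreen  = float3( 0.02, 0.08, 0.01 );
    float3 LightGreen = float3( 0.15, 0.45, 0.08 );
    return lerp( DarkGreen, LightGreen, Grey );
}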

I’m not sure if the following is what you are looking for, but give it a look. It’s a paper called “Real-time Realistic Rendering and Lighting of Forests”.

https://hal.inria.fr/hal-00650120/document

Especially look at the appendix to that paper, which can be found here; all the math should be there:

I’ve implemented this approach (not in UE4) and it worked quite well, even though the shader can get quite complicated.

Wow, that looks perfect, thanks for the link. The bonus is that it looks like one of the guys who wrote the paper works in another department here at my work. I’ll have to send him a quick link. :)

Those papers are really great, thanks for the link!

I should have probably also mentioned that I actually tried to use the technique from the linked papers for grass rendering, but at the end I gave up (or rather put it aside as it wasn’t that urgent for my project at the time). The main problem I had was that I couldn’t really produce nice continuous transitions between different imposters and the grass always looked kind of blobby. This may be solved by the pixel depth offset, but I haven’t tried that yet.

Would it be possible for you to share your progress? The code in the appendix seems straightforward enough to implement but having an established starting point always helps.

That second link looks very much like what you would get by using an octahedron, but octahedrons can be calculated without any arccosine, which is expensive.

Not sure if it was added for 4.11 or just after, but check if your material editor has a function called “UnitVectorToOctahedron” or “OctahedronToUnitVector”; they could at least be a starting point.

If not, they were just custom nodes ported from DeferredShadingCommon.usf:


// Maps a unit vector onto the 2d octahedron layout (result in -1..1).
float2 UnitVectorToOctahedron( float3 N )
{
    N.xy /= dot( 1, abs(N) );
    // Fold the lower hemisphere back over the upper one.
    if( N.z <= 0 )
    {
        N.xy = ( 1 - abs(N.yx) ) * ( N.xy >= 0 ? float2(1,1) : float2(-1,-1) );
    }
    return N.xy;
}

// Inverse mapping: takes a -1..1 octahedron coordinate back to a unit vector.
float3 OctahedronToUnitVector( float2 Oct )
{
    float3 N = float3( Oct, 1 - dot( 1, abs(Oct) ) );
    // Unfold the lower hemisphere.
    if( N.z < 0 )
    {
        N.xy = ( 1 - abs(N.yx) ) * ( N.xy >= 0 ? float2(1,1) : float2(-1,-1) );
    }
    return normalize(N);
}
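As an example of where that could lead, picking the nearest frame out of an N x N octahedral sheet could look something like this (just a sketch of the lookup, not the imposter function, and it ignores the blending and the unwanted-rotation problem):

// Sketch only: map the pivot-to-camera direction onto the octahedron, snap to
// the nearest captured frame, and build the sheet UV for that frame.
float2 OctahedralFrameUV( float3 PivotToCamera, float NumFrames, float2 LocalUV )
{
    float2 Oct = UnitVectorToOctahedron( normalize( PivotToCamera ) ) * 0.5 + 0.5;
    float2 Frame = floor( Oct * ( NumFrames - 1.0 ) + 0.5 );   // nearest frame index
    return ( Frame + LocalUV ) / NumFrames;                    // UV into the sheet
}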

The imposter function should probably be upgraded to something based on this at some point. Some of the resulting geometry orientation problems could be simplified by using only camera position and not direction to calculate the facing but I’m sure both could be supported with a bit more math.

Re: motion vector rendering. For flipbooks and imposters it was known to be accurate, since I was referencing the size of one flipbook ‘cell’ in worldspace as 0-1. Then, when reading the motion vectors, you simply scale the magnitude by 1/NumCells to get a value localized into one cell of UV space.

In practice you also want to boost your motion vector values to take full advantage of the texture range to maximize precision, since no single frame is ever going to traverse one whole cell. I just use a material function that renders solid red once any single channel exceeds 1, which makes it easy to sneak up on the ceiling. This function is in the engine and called ‘mark red above 1’ or something like that… just look for the word red.

If you do that (which you always should, unless you can afford HDR formats), you of course need to record the scale factor and also divide by it on the rendering side.
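The read side ends up being something like this simplified sketch (it assumes the vector was remapped to 0-1 and boosted at bake time; the names are made up):

// Sketch only: undo the 0-1 remap and the bake-time boost, then convert the
// cell-relative vector into an offset in the UV space of the whole sheet.
float2 DecodeMotionVector( float2 Encoded, float BoostScale, float NumCells )
{
    float2 CellRelative = ( Encoded * 2.0 - 1.0 ) / BoostScale;
    return CellRelative / NumCells;
}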

The distance field thing is a side effect of simply using frame-blended SubUV frames with a Masked material whose alpha channel is a distance field rather than hard edged. If you think about it, when you blend two gradients and then extract a hard edge from them, that edge moves.
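In shader terms the whole effect boils down to something like this sketch (the Masked material’s opacity clip is what does the thresholding for you; names are made up):

// Sketch only: each alpha is a soft distance-field gradient around the
// silhouette; lerping two gradients and re-thresholding makes the hard edge
// slide between the two shapes instead of ghosting.
float DistanceFieldMorph( float AlphaA, float AlphaB, float Phase, float Threshold )
{
    float Blended = lerp( AlphaA, AlphaB, Phase );
    return Blended >= Threshold ? 1.0 : 0.0;
}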

Currently the only material functions that output blended SubUV frames are:

SubUV_Function
Flipbook_MotionVectors
Imposter_MotionVectors

Here is how you make a layer into a distance field in Photoshop (this is the most super hidden yet useful feature ever):

Yeah, the pixel depth offset really helps mask the fact that they are planes, as it causes them to ‘mush’ together and mixes the Z-fighting of the individual planes. I think at some level my depth renders and depth offsets might be a bit off, as if you look carefully it’s not perfect. But you gotta try these things, otherwise you won’t know, will you? The temporal AA doesn’t help much as it introduces a bit of noise. I think it might have to do with accurate motion vectors from the world position offset; there’s not much I can do about that for now. The accurate motion vectors project setting doesn’t help in this case and actually makes it worse.

Cool, I think I know why that works. Just adding two distance fields will give you the in-between.

That Photoshop thing is super cool. I’m definitely going to think of ways to use that.

I am sorry for going slightly off-topic, but can anyone give me some reading material regarding motion vectors? I am having a hard time wrapping my head around them in the context of shaders and imposter sprites. Google isn’t being much help.

I hope that you understand perspective projection, because that helps in the explanation. If you project 3d coordinates onto an imaginary plane in front of the camera, you end up with your 2d coordinate on screen. Now, if you moved that 3d point a bit, then after the projection that point would end up at a slightly different position on the screen due to parallax. What you do is take those two 2d positions on the screen plane and calculate the vector between them. This new vector is the 2d vector on the screen that represents the change in 3d position. You can use that to do blurring in post to make it look like the object is moving in that direction.

In the case of imposters, you are calculating the motion vector that points towards where it is going to be next frame/angle (the 3d to 2d transformation). You could use that to blend one frame to the next by using the motion vectors as a UV offset.
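In code it’s basically this (a sketch, assuming row-vector matrix order and made-up names):

// Sketch only: project the same 3d point with the current and the next
// view-projection matrix and take the screen-space delta between the results.
float2 ScreenSpaceMotionVector( float3 WorldPos, float4x4 ViewProjNow, float4x4 ViewProjNext )
{
    float4 ClipNow  = mul( float4( WorldPos, 1.0 ), ViewProjNow );
    float4 ClipNext = mul( float4( WorldPos, 1.0 ), ViewProjNext );
    float2 ScreenNow  = ClipNow.xy  / ClipNow.w;   // -1..1 after the perspective divide
    float2 ScreenNext = ClipNext.xy / ClipNext.w;
    return ScreenNext - ScreenNow;
}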

Is that an ok explanation?

It is, that much I understood. I thought that you were referring to some texture that is being baked out and used in the blending. Thanks!

Well, yes, technically it is exactly that. I would render out a motion vector pass as a texture to use as the offsets. So you’re right. I’ll look at the imposter material function later.

Wouldn’t you need a ton of those to account for every direction in every angle? Apologies if I’m being an annoyance but this is a topic of great interest to me and I’m trying to fully grasp it.

For 3d imposters you actually need two sets of motion vectors. One set stores the motion vectors moving to the neighbor one frame to the right, another stores the vectors moving to the neighbor one frame below. Then the material has to actually crossfade between the two phases. If there are any bugs with the imposter motion vectors, it is probably related to that.

You can pack them into one RGB texture, since the vertical (Y axis) vectors only need one channel; the rotation axis is always perpendicular to the viewing angle when rotating around the Y axis. Of course that may not be true if you switch to a dome layout, so you would probably require two sets of two-channel data, i.e. either DXT5 or the uncompressed VectorDisplacement format.

The imposter function reads the motion vector texture as RG = X axis vectors, B = Y axis vector.

Think of it more like a series of instructions for how to morph from A to B. Then the next cell tells it how to blend from B to C. It is not storing all combinations, just a linear progression.
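As a sketch, the unpack side is just this (assuming the usual 0-1 encoding around 0.5; names are made up). The crossfade then warps each neighbour pair by its phase before blending, the same way as the flipbook version:

// Sketch only: RG holds the full 2d vector toward the right-hand neighbour,
// B holds the (vertical-only) vector toward the neighbour below.
void UnpackImposterMV( float3 Packed, out float2 ToRightNeighbor, out float2 ToLowerNeighbor )
{
    ToRightNeighbor = Packed.rg - 0.5;
    ToLowerNeighbor = float2( 0.0, Packed.b - 0.5 );
}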

I see, now it makes sense. Thanks for clarifying guys!

Super cool thread - Ryan, do you have any examples at all you could share of how to wire up the Imposter_MotionVectors function?

I’m working on a bare-bones imposter generator for Houdini, so I can render out volumetric objects (clouds) with lighting baked in. I’m counting on the motion vector stuff / blending to get me across the line, now that I’ve got the basics in place:

I’ve dug into the function and had a look around, but I won’t lie - it’s daunting :)
Any further examples would be a massive help! Awesome work.

Thanks.