The ParallaxOcclusionMapping node only receives a Texture Object into the Heightmap Texture input. I was trying to get a vertex-paintable mask of a moss material to add to the parallax/height value, but you cannot perform math functions on a Texture Object. Is there a way around this?
I also remember running into this issue when experimenting with POM. Don’t remember ever finding a solution, so I’d like to know as well.
I think the Content Examples do show an example of blending two textures with POM.
You need to blend the textures that are sampled after the POM node, so you would need to place two POM nodes for two layers and lerp the resulting textures.
http://image.prntscr.com/image/221060eed76446209e75154fbb0290b1.png
As you can see, it also doubles the POM performance cost.
There is a much more optimized way of blending up to 4 POM layers, but it requires a basic understanding of the code behind POM.
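In HLSL terms, that graph boils down to something like the following. This is a hypothetical sketch: PomOffsetMoss/PomOffsetRock stand in for whatever UV offset each POM node outputs, and the texture names are placeholders, not actual engine inputs.

// Hypothetical sketch of the two-POM-node setup; each POM node produces its own UV offset.
float2 uvMoss = UV + PomOffsetMoss; // parallaxed UVs from the first POM node
float2 uvRock = UV + PomOffsetRock; // parallaxed UVs from the second POM node
float3 moss = MossBase.Sample(MossBaseSampler, uvMoss).rgb;
float3 rock = RockBase.Sample(RockBaseSampler, uvRock).rgb;
// Vertex paint (red channel) lerps between the two parallaxed layers
float3 blended = lerp(rock, moss, VertexColor.r);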
Couldn’t you just pack POM heightmaps into the R, G, B, and A channels and then use lerp “masks”, based on whatever differentiating rules you want (i.e. angles/heights/surface types), to control which heightmap channel is referenced?
I.e. use a MakeFloat4 node and have the lerp masks determine the 1/0 values, with the texture-packed Texture2D channels as references, to essentially make a custom POM heightmap.
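In shader terms, the idea would be roughly this (a sketch; PackedHeights and LayerMask are placeholder names, not engine inputs):

// Four heightmaps packed into one RGBA texture, fetched in a single lookup
float4 packed = PackedHeights.SampleGrad(PackedHeightsSampler, UV, InDDX, InDDY);
// LayerMask is the MakeFloat4 of the 1/0 lerp masks; the dot product
// collapses the packed channels into one custom heightmap value
float customHeight = dot(packed, LayerMask);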
I don’t think I fully understand what you are talking about.
I think the theory is that per-vertex data could just move the vertex, rather than use the more expensive per-pixel movement.
Also, moving vertices means that you will cast better shadows.
Where can I read more about it? Can you please post a link?
You can do this if you also control the input going into “Heightmap Channel”. There will be some ‘floating’-type artifacts if you do it like this, though. The reason is that the blend point between the channels will only be evaluated once, at the surface level. To be correct, it needs to be re-evaluated at every iteration, and there is no easy way to do that without having the alpha itself as a Texture Object for multiple lookups.
The ‘floating’ artifacts can be minimized by using wider blend gradients between the different channels.
The ‘correct’ way to do this is by performing POM on each heightmap separately, and ALSO ray marching the blend alpha map rather than pre-applying it. That is not very easy to do without doing a double POM, which is expensive. And it’s not possible at all for vertex paint or landscape, where you don’t have Texture Objects.
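For reference, here is a rough sketch of what “ray marching the blend alpha too” means inside a simplified POM loop. BlendMap, HeightA, HeightB, NumSteps, and UVStep are assumed inputs; this is not the stock node’s code.

float rayHeight = 1.0;                  // ray starts at the top of the height volume
float stepSize = 1.0 / (float)NumSteps;
float2 uv = UV;
for (int i = 0; i < NumSteps; i++)
{
    // Re-evaluate the blend alpha at the CURRENT offset uv, not the surface uv
    float alpha = BlendMap.SampleGrad(BlendMapSampler, uv, InDDX, InDDY).r;
    float hA = HeightA.SampleGrad(HeightASampler, uv, InDDX, InDDY).r;
    float hB = HeightB.SampleGrad(HeightBSampler, uv, InDDX, InDDY).r;
    float surfaceHeight = lerp(hA, hB, alpha);
    if (rayHeight <= surfaceHeight)
        break;                          // ray has entered the blended surface
    rayHeight -= stepSize;
    uv += UVStep;                       // march along the view direction in texture space
}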
I ended up using a simple max blend on the channel-packed heightmaps. The blend map is also offset at a constant mean height value. Not completely drift-free, but it works acceptably at low height multipliers and a low step count.
// Sample all four channel-packed heightmaps in a single fetch
float4 texsamp = Tex.SampleGrad(TexSampler, UV + offset, InDDX, InDDY);
// Mask each channel by its layer blend weight
texsamp *= blendweights;
// Max blend: the ray tests against the tallest weighted layer
float texatray = max(texsamp.r, texsamp.g);
texatray = max(texatray, texsamp.b);
texatray = max(texatray, texsamp.a);
Quite suitable for terrain.
Eh, missed your post for some reason.
Pages 95-107.
The article is about QDM, but the blending part is fully applicable to conventional POM
As a side note, it would be neat to merge scattered threads like this one into the main parallax occlusion mapping thread.
That page is just talking about using HeightLerp to help make a nicer blend between the layers. It is not a 100% solution. Take a look at the material function “Texture_Bombing_POM” to see a similar example that uses HeightLerp to blend between offset samples. You can probably use a similar setup.
Your method of using “Max” should actually be pretty similar, but it will give a hard edge, whereas HeightLerp should let you control the contrast at the edges. So it trades some correctness for a softer blend, and can help mix the blend weights more nicely.
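For the curious, a height-weighted blend with contrast control looks roughly like this. It is a minimal sketch of the idea, not the actual HeightLerp material function graph.

float HeightBlend(float heightA, float heightB, float paintAlpha, float contrast)
{
    // Bias each layer's influence by its own height sample
    float wA = heightA * (1.0 - paintAlpha);
    float wB = heightB * paintAlpha;
    // Normalized transition, then sharpened around 0.5; high contrast
    // approaches the hard edge you get from a plain MAX
    float t = wB / max(wA + wB, 0.00001);
    return saturate((t - 0.5) * contrast + 0.5);
}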
The gist of the paper I’ve linked is probably in this phrase:
The whole point is doing one lookup.
In this case, the POM cost increase for 4 layers, compared to a single layer, comes to 9 pixel-shader arithmetic instructions per iteration.
My reasoning for going with a sharp blend instead of HeightLerp lies in the fact that with HeightLerp between layers, areas where several textures mix will have an unusual height that does not correspond to any of the textures. It looks slightly unnatural. A simple max blend is also cheaper.
But yep, there is no issue with using HeightLerp there instead of MAX.
It is not drift-free, but at a low depth scale it is pretty decent. It is possible to slightly reduce depth at blend regions to further reduce the drift caused by the blend map.
I don’t know of any realistic way to blend more than two layers seamlessly without multipass.
Right now this is what my setup gives me:
The visuals are far from desirable, but it is cheap enough to be used on terrain in a 4-layer setup.
MAX blend, 2 texture fetches per iteration:
With HeightLerp, 4 fetches per iteration:
https://youtube.com/watch?v=a_V6n8eZLH4
I’ve tried a few optimizations, including doing samples in groups of 4, like Ryan mentioned in his post, but the overall performance benefit was quite limited. It actually seemed that the benefit of vectorization was nullified by situations where 3 extra samples were taken instead of terminating the loop.
My next move, when I find free time, will be to think about a proper way to get vertex colors with offset.
That last version with height blending is looking pretty good!
Are you actually including the blend alpha Texture Object in your re-evaluation of the heights, or just using the surface version? It’s hard to tell in the video because the camera only pans sideways.
The use of ‘max’ blending in your first examples confuses me a bit, since it lets lower height values stomp higher areas, which is the opposite of what I’d expect. I guess that means you have pre-blended the heights, using the height itself, as well? When I was trying ‘max’ blending, it was basically multiplying Height 1 by alpha and layer 2 by 1-alpha and then using the max as the intersection value, and it did not have lower areas eroding the higher ones like that method seems to. I will try to get an example soon.
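In code form, that version of ‘max’ blending is just this (illustrative names):

// Pre-weight each layer by the paint alpha, then take the tallest as the intersection height
float h = max(Height1 * alpha, Height2 * (1.0 - alpha));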
One option I considered for this was something like a mini virtual texturing system, where you pre-blend and render the heightmaps and can then perform parallax on only one texture.
Yep, in the video, the weightmap is also sampled every iteration.
It confused me too. I have no scientific explanation for it at this point. It occurs on a particular heightmap for some reason.
That’s exactly what I am doing, but additionally I am doing a single offset on the alpha map by half the height to reduce the drift (there is quite a bit of it in motion). It is only applicable if the reference plane is something other than 0.5, though.
I just had an idea. What if I sample the alpha map twice before the loop, with some sort of known offset, and inside the loop interpolate between the first and second result based on the relation of the actual ray distance to the shift that was used earlier? The obvious issues are not knowing in advance where the ray ends and the loss of detail between the two samples. As an alternative, I’ve got to try something like sampling the blend map every fourth iteration.
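Roughly, the first variant would look like this (a sketch with made-up names; MaxUVOffset is the largest parallax offset the ray can reach):

// Two blend map fetches before the loop: at the surface UV and at the far end of the ray
float alphaNear = BlendMap.SampleGrad(BlendMapSampler, UV, InDDX, InDDY).r;
float alphaFar  = BlendMap.SampleGrad(BlendMapSampler, UV + MaxUVOffset, InDDX, InDDY).r;
for (int i = 0; i < NumSteps; i++)
{
    float t = i / (float)NumSteps;      // how far along the ray we are
    // Interpolate instead of refetching; assumes a linear slope between the two samples
    float alpha = lerp(alphaNear, alphaFar, t);
    // ...rest of the usual POM step, using 'alpha' as this step's blend weight
}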
That is interesting. It might also be the key to easy vertex color painting, where vertex colors would play the role of some sort of indirection texture.
That’s a neat idea. You’d be assuming a linear slope between the points, but that might be a ton better than the floating artifacts you get from a single sample, and much cheaper than checking every iteration. Give it a go!
Had some free time to give that approach a spin.
The performance gain is frankly lower than I expected.
At realistic settings (4 min / 32 max steps, 8 shadow steps, 512x512 blend map resolution), the performance gain from sampling the blend map twice before the loop, versus every iteration, is roughly 9%, measured as base pass time reduction. However, as the step count increases and, more importantly, the blend map resolution increases, the render time reduction becomes significant. With a 4K blend map I am already recording a 20% base pass render time improvement.
As for the floating artifacts, they are tenfold better than without sampling the weightmap, yet some drift is present compared to properly getting the blend weights every iteration. I’d even say it is production-acceptable in most cases.
Can you expand on your setup? I’d love to blend my POMs without floating artifacts!
You need to sample the heightmaps and weightmaps for all layers you want to blend, and perform the blend inside the POM loop at each step.
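Per step, that boils down to a fragment like this (names are illustrative; it mirrors the max blend snippet earlier in the thread):

// Inside the POM loop, at the current offset uv:
float4 weights = WeightMap.SampleGrad(WeightMapSampler, uv, InDDX, InDDY);
float4 heights = HeightMap.SampleGrad(HeightMapSampler, uv, InDDX, InDDY) * weights;
// Collapse the weighted channel-packed layers into the single height the ray tests against
float surfaceHeight = max(max(heights.r, heights.g), max(heights.b, heights.a));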