Render Targets - Correctly Capturing and Using Normal Maps

Hello,

I’m capturing normal information from a standard normal map (DXT5 normal compression) to a render target, but I’m unsure of the correct procedure.
Firstly, it seems like I can only use render targets as Linear Color inside a material. Secondly, if I set my normal map to the Normalmap compression setting, the capture I get seems to be post-compression, understandably. I’ve found a few topics about this and tried various suggested fixes (x*2-1, etc.). Alternatively, if I set my texture to HDR or Default compression I get a pre-compression capture, but I’m still limited to the Linear Color sampler type for the render target in the main material.

All avenues produce not-quite-right results, some closer than others, but I just don’t have the in-depth knowledge of normal maps and compression settings to truly debug this.

Is there a proven method of capturing normal map information from a render target accurately?

What if you made a material with a texture parameter and swapped only the emissive’s texture to the normal map? Meaning you have a new Normals Only material that lets you capture just the normal (see the sketch below).

Be aware that mips will still be used unless you specifically disable them on the texture sample.
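A minimal C++ sketch of that idea, assuming a parent material (here called M_NormalsOnly) whose emissive is driven by a texture parameter named NormalTex, with mips disabled on the sample as noted above; all names are placeholders:

```cpp
#include "Materials/MaterialInstanceDynamic.h"

// Assumes M_NormalsOnly routes a Texture Sample (parameter "NormalTex",
// sampler type Linear Color, mips disabled on the sample) straight into
// Emissive Color.
UMaterialInstanceDynamic* MakeNormalsOnlyMID(UMaterialInterface* NormalsOnlyParent,
                                              UTexture* NormalMap,
                                              UObject* Outer)
{
    UMaterialInstanceDynamic* MID = UMaterialInstanceDynamic::Create(NormalsOnlyParent, Outer);
    // Swap in whichever normal map should be captured.
    MID->SetTextureParameterValue(TEXT("NormalTex"), NormalMap);
    return MID;
}
```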

If you reference a render target within a material you should get the default RGB, R, G, B, A, and RGBA outputs on the sample. Alternatively, you can drag the render target onto a Texture Sample node’s Tex input; I don’t remember which.
Or you can probably use a Component Mask node to isolate the channels.

Anyway, if you can give examples of what you expect and what you are getting, that would be helpful.
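If it helps with that comparison, you can also read pixels back from the render target and log them next to the values you expect. A rough sketch using the Kismet rendering utilities (the coordinates and log category are placeholders):

```cpp
#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/TextureRenderTarget2D.h"

// Log one pixel of the captured render target so it can be compared
// against the value expected from the source normal map.
void LogCapturedPixel(UObject* WorldContext, UTextureRenderTarget2D* RT, int32 X, int32 Y)
{
    const FLinearColor Captured =
        UKismetRenderingLibrary::ReadRenderTargetRawPixel(WorldContext, RT, X, Y);
    UE_LOG(LogTemp, Log, TEXT("Captured (%d,%d): R=%f G=%f B=%f"),
        X, Y, Captured.R, Captured.G, Captured.B);
}
```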

Hey thanks for the reply.

The problem I’m having is less about capturing the normal map and more about the format in which the normal is captured.
So I have an actor that creates a render target and a MID, and draws that MID to the render target, using the emissive slot as you suggested to output the normal map I supply.
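In code terms, the capture step is roughly the following (a minimal sketch of that setup; the 512 size and RTF_RGBA16f format here are placeholders, not necessarily what I’m using):

```cpp
#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Materials/MaterialInstanceDynamic.h"

// Create a render target and draw the normals-only MID into it.
// Format and size are illustrative; a float format keeps the write linear.
UTextureRenderTarget2D* CaptureNormal(UObject* WorldContext,
                                      UMaterialInstanceDynamic* NormalsOnlyMID)
{
    UTextureRenderTarget2D* RT =
        UKismetRenderingLibrary::CreateRenderTarget2D(WorldContext, 512, 512, RTF_RGBA16f);
    UKismetRenderingLibrary::DrawMaterialToRenderTarget(WorldContext, RT, NormalsOnlyMID);
    return RT;
}
```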

If I set the base normal (the base normal being the texture assigned to the emissive in the MID) to the Normalmap compression setting, the capture I get is post-compression.

If I set the base normal map to HDR compression, the capture I get is pre-compression.
But there’s still the limitation that the render target’s compression setting cannot be changed (I thought you used to be able to), so in the material where I use the render target I only have the Linear Color sampler type for the render target that holds the normal map information.
Not that I know whether that makes a difference beyond mip and streaming behaviour.

I’m just not sure which is better, pre-compression or post-compression captures; neither is completely accurate at the moment, as I don’t know how to compensate for either. Obviously pre-compression would be the preferred option if I could set the render target’s compression settings to Normalmap; that way it would behave like any other normal map, at least visually.
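As far as I understand it, the step I’m having to compensate for by hand is the usual normal packing that the Normalmap sampler type would otherwise do, since the Linear Color sample of the render target just returns the stored 0-1 values. A standalone sketch of that math, for illustration only (not engine code):

```cpp
#include <cmath>

struct Vec3 { float X, Y, Z; };

// Pack a unit normal from [-1,1] into the [0,1] range a colour channel stores.
Vec3 PackNormal(const Vec3& N)
{
    return { N.X * 0.5f + 0.5f, N.Y * 0.5f + 0.5f, N.Z * 0.5f + 0.5f };
}

// Unpack a stored colour back to a normal: the x*2-1 step mentioned earlier,
// followed by a renormalise to absorb filtering/precision error.
Vec3 UnpackNormal(const Vec3& C)
{
    Vec3 N = { C.X * 2.0f - 1.0f, C.Y * 2.0f - 1.0f, C.Z * 2.0f - 1.0f };
    const float Len = std::sqrt(N.X * N.X + N.Y * N.Y + N.Z * N.Z);
    if (Len > 0.0f) { N.X /= Len; N.Y /= Len; N.Z /= Len; }
    return N;
}
```

In the material that reads the render target, the same unpack would just be a multiply by 2 and subtract 1 (plus a Normalize) applied to the Linear Color sample.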

Try subtracting a full blue from the render target. It’s just a guess based on the visual result, really.
Are you re-using this render target in real time, or are you just looking to create a texture from it?
In the second case, you can capture the render target pre-compression and then compress once you create the texture from it.
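For that create-a-texture-once case, there is an editor-only Kismet helper that bakes a render target into a static texture with whatever compression you want; a sketch (the asset name here is arbitrary):

```cpp
#if WITH_EDITOR
#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/TextureRenderTarget2D.h"

// Bake the render target into a static UTexture2D so that normal-map
// compression can be applied to the resulting asset (editor-only utility).
UTexture2D* BakeToNormalTexture(UTextureRenderTarget2D* RT)
{
    return UKismetRenderingLibrary::RenderTargetCreateStaticTexture2DEditorOnly(
        RT, TEXT("T_CapturedNormal"), TC_Normalmap, TMGS_FromTextureGroup);
}
#endif
```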

If instead it is used at runtime, I’m not exactly sure what to suggest. But the default reactive water uses and writes normal maps, so maybe you can check the tutorials and Content Examples to see how the normal is handled within it.

I will be using the render target at runtime.
Thanks for the link; I’ll take a look at that.