You would need to supply the basis vectors of the enemy's eye point (literally the forward, up, and right vectors in world space) and the projection parameters (horizontal and vertical field-of-view angles) to the receiving material (either the material of every mesh in the world that is to be affected, a decal projected over the whole screen, or a post-process material, whichever suits best).
In the receiving material, you need to perform a change-of-basis operation on the absolute world position of the fragment, converting its coordinates from world space to enemy eye space (it is as simple as using the transform and/or inverse-transform 3x3 matrix material functions; even if you are not familiar with matrix operations, poking around with the three basis vectors you supplied through parameters will get you there).
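The change of basis boils down to three dot products. Here is a minimal Python sketch of the math (the function name is mine, and it assumes the three basis vectors are orthonormal, which they are for an un-skewed eye):

```python
def world_to_eye(world_pos, eye_pos, right, up, forward):
    """Change of basis: project the offset from the eye onto each
    basis vector via dot products. Assumes an orthonormal basis.
    Returns (sideways, vertical, depth-along-view) coordinates."""
    dx = [w - e for w, e in zip(world_pos, eye_pos)]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(dx, right), dot(dx, up), dot(dx, forward))
```

For example, with the eye at the origin looking down +X, a fragment 5 units straight ahead ends up at eye-space depth 5 with zero sideways and vertical offset.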
Once you have the fragment's position in the enemy's eye space, you need to perspective-project it onto the near clipping plane of the scene capture, using the horizontal and vertical field of view of said capture, which you also supplied through parameters.
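The projection itself is a divide by depth, scaled by the tangent of each half field-of-view angle. A sketch under the same assumptions as above (names are illustrative):

```python
import math

def project(eye_space_pos, h_fov_deg, v_fov_deg):
    """Perspective-project an eye-space point onto the near plane.
    tan(fov / 2) is the half-extent of the view at unit depth, so
    points inside the frustum map to the [-1, 1] range on each axis."""
    x, y, z = eye_space_pos  # z = depth along the forward vector
    px = x / (z * math.tan(math.radians(h_fov_deg) / 2))
    py = y / (z * math.tan(math.radians(v_fov_deg) / 2))
    return (px, py)
```

A point on the eye axis projects to (0, 0), and with a 90-degree FOV a point as far sideways as it is deep lands exactly on the frustum edge at 1.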
Then you can sample the scene capture's render target with the projected coordinates, normalized and shifted from the -1 to 1 range into the 0 to 1 range.
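The -1..1 to 0..1 remap is a scale and offset; a tiny sketch (note that depending on your engine's render-target convention you may also need to flip the V axis, i.e. use `1 - v`):

```python
def ndc_to_uv(px, py):
    """Remap projected coordinates from [-1, 1] into [0, 1] UV range:
    halve, then shift by one half."""
    return (px * 0.5 + 0.5, py * 0.5 + 0.5)
```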
Afterwards, you would compare the Z coordinate of the fragment's position in enemy eye space with the depth sampled from the scene capture's render target.
If the fragment's Z coordinate is larger than the sampled depth, you'd color the fragment as hidden (something closer to the enemy's eye occludes it).
If the fragment's Z coordinate is smaller than the sampled depth, you'd color the fragment as visible.
If the coordinates you sampled the render target with fall outside the 0-1 range, or the Z coordinate is larger than the maximum range of the enemy's visibility, you'd leave the fragment's color untouched.
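The comparison rules above can be collected into one branch; a sketch (all names are mine, and `bias` anticipates the depth biasing mentioned below):

```python
def classify(frag_z, sampled_depth, uv, max_range, bias=0.0):
    """Classify a fragment against the enemy's depth capture:
    'untouched' outside the capture or beyond vision range,
    'hidden' when occluded, 'visible' otherwise."""
    u, v = uv
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0) or frag_z > max_range:
        return "untouched"
    if frag_z > sampled_depth + bias:
        return "hidden"
    return "visible"
```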
Furthermore, you will need to add filtering and depth biasing to the formula (to soften aliasing and avoid self-occlusion artifacts), but the basics are as simple as the above.
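One common way to do the filtering part is percentage-closer filtering: instead of one depth comparison, compare against several neighboring texels and average the results, giving a soft 0..1 visibility factor. A minimal sketch of that idea (not the only option, and the sample pattern is up to you):

```python
def pcf_visibility(frag_z, depth_samples, bias=0.05):
    """Percentage-closer filtering sketch: run the depth test against
    several neighboring depth samples and return the fraction that pass.
    The bias pushes comparisons away from self-occlusion ('acne')."""
    passes = sum(1 for d in depth_samples if frag_z <= d + bias)
    return passes / len(depth_samples)
```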