Transforming vertices into orthographic view space

Hiya.

I want to be able to render an object and use World Position Offset to effectively render it as a 2D orthographic object in screen space.

I’ve tried various combinations of transform nodes, but the fact that the transform always has to be a world position offset means it’s pretty much always going to end up in projection space. Is there a way to set up the matrices so that local space renders directly to the screen without a projection matrix?

Cheers
Dan


Here’s what I have tried. I plug WorldPosition into a transform from world to local, or from world to camera. Then I subtract that result from the world position to get the offset and plug that into World Position Offset. But I think I’m missing something. Is this possible with the current material capabilities? Surely it’s just a matrix transform to draw into normalized device coordinates? The fact that it’s an offset, though, makes me think it’s hard to do a post-projection transform.
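
Just to spell out the pattern I’m chasing (a rough custom-node sketch, the names are mine): work out where I want the vertex to end up in world space and output the difference, since World Position Offset is additive.

// Sketch only: TargetWorldPos is whatever the transform chain produces,
// WorldPos is the Absolute World Position input to the custom node.
return TargetWorldPos - WorldPos;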

Think I’ve found the solution.

I looked through the FViewUniformShaderParameters available and did the opposite of WorldToClip, which is ClipToTranslatedWorld.

So now I have the sphere aligned to the viewport in an orthographic way. It’s stretched to match the viewport, which I expected. Also, moving the sphere in X and Y in the world translates it vertically and horizontally in the viewport, which is what I want. These are going to be parented to the camera, so I’ll need to add in extra offsets once they get into NDC space, and account for the aspect ratio.
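
In custom-node HLSL it’s something along these lines (just a sketch; LocalPos and WorldPos are node inputs I added, and ClipZ is whatever depth I push the vertex to):

// Interpret the mesh's local XY as clip-space XY so it lands straight on the viewport.
float4 Clip = float4(LocalPos.xy, ClipZ, 1.0);
float4 TranslatedWorld = mul(Clip, View.ClipToTranslatedWorld);
TranslatedWorld /= TranslatedWorld.w;   // divide by w on the way out of clip space
return TranslatedWorld.xyz - WorldPos;  // result goes into World Position Offset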

The reason for all this is to investigate a new way of doing fog of war without having to do the whole SceneCapture2D stuff. I am going to render spheres that are parented to the camera and render them to custom depth. I’ll also push them far into the distance using the camera vector so that they render behind everything else. I can then use post processing to create the fog of war.

The post processing stuff works, as that’s just a world XY UV lookup.
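
By “world XY UV lookup” I just mean something like this in the post-process material (MapOrigin and MapSize are placeholder parameters describing the playable area):

// Turn the scene pixel's world position into a 0-1 UV across the map.
float2 FogUV = (WorldPos.xy - MapOrigin) / MapSize;
return FogUV;   // feed this into the fog mask texture sample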

I’ll start another thread once I’ve figured it all out.

So the issue I’m facing now is how to counter world space movement of the object. I’d like to have the object rendered in the middle of the viewport and control the horizontal and vertical offsets, as well as scale, separately. That suggests to me that the object should be in local space. However, the matrix goes from clip to ‘TranslatedWorld’, which has PreViewTranslation in it. (https://docs.unrealengine.com/latest/INT/Engine/Basics/CoordinateSpace/)

Subtracting PreViewTranslation from the position before ClipToTranslatedWorld does weird things, as does subtracting it after. I wish we could just have a matrix override slot and create our own from the View.Params!

Now I’m using PrevInvViewProj, which pretty much does what I want (some flickering here and there), but the issue is that it’s the previous frame’s inverse view projection. Is there a way to get the current frame’s inverse view projection?

Oh man, you won’t believe this. No one is listening but I don’t care, I figured out the weirdest fix. Unreal doesn’t expose the current inverse view projection matrix directly! I was about to go through and compile the engine to add it to FViewUniformShaderParameters in SceneRendering.cpp, but as I was about to add it I noticed that ClipToPrevClip is provided. Guess what that is?


ViewUniformShaderParameters.ClipToPrevClip = InvViewProj * PrevViewProj;


Well, since we have PrevInvViewProj exposed, we can just re-multiply to get the InvViewProj…


mul(View.ClipToPrevClip,View.PrevInvViewProj);


So now I can render meshes to the screen orthographically. The normals are messed up but I’m not interested in them as I am rendering to custom depth anyway.
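
For anyone following along, the custom node body ends up looking roughly like this (just a sketch; ClipPos is my name for where I want the vertex in clip space as a float3, and WorldPos is the Absolute World Position input):

// Recover the current frame's inverse view projection from what the View uniform buffer exposes.
float4x4 InvViewProj = mul(View.ClipToPrevClip, View.PrevInvViewProj);
// Take the desired clip-space position back to world space.
float4 World = mul(float4(ClipPos, 1.0), InvViewProj);
World /= World.w;                // homogeneous divide
return World.xyz - WorldPos;     // output as World Position Offset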

Hope this can be of use to some people.

I find it interesting. This could be THE way of creating a proper fog of war, actually. o.o

Proof of concept. It works.

This is a sphere mesh rendered into custom depth and read in post process. I can now add as many spheres as I like, and with a little bit of extra work I’ll be able to control the 2D screen position and scale (I have scale kind of working).

This is some really high end stuff that I probably wouldn’t understand without a tutorial. Well done! Maybe you would like to share it when it’s polished! Total respect. ^-^

Ooh, here I am using pixel depth offset with a radial gradient to push the pixels as they are written to the depth buffer. When this goes into the post process, you can use the depth gradient as a falloff. No manual blurs in HLSL. Nice and cheap, this is!
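
The gradient itself is nothing fancy; a rough sketch of what feeds the Pixel Depth Offset pin (SphereUV and MaxPush are placeholder inputs):

// Radial gradient: 0 at the centre of the sphere, 1 at the rim.
float Radial = saturate(length(SphereUV - 0.5) * 2.0);
// Push the written depth further back towards the edge so the post process sees a soft falloff.
return Radial * MaxPush;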

(screenshot attached)

This is really something you could sell on the Marketplace. I am sure lots of RTS creators would be interested in this, as it’s the one thing that is not easy to do. :)

I’ll think about the marketplace but I’d be happy to do a tutorial too.

One small issue right now is that there seems to be some motion blur or a slight lag on the world position offsets, which causes the sphere to wobble a bit when the camera moves. I might have to try the custom compile option to see if that fixes it.

So the only stumbling block now seems to be that the inverse view projection exposed to the material through the custom node has the temporal AA offsets baked in. For this effect to work, the raw inverse view projection matrix needs to be passed through.

I’ve submitted a pull request with that added in. It was only a 2-file, 2-line modification to pass it through.
https://github.com/EpicGames/UnrealEngine/pull/2450

I’d be grateful if one of the devs would review and accept it (pretty please).

Very nice indeed!

This is probably a much nicer method than my volume-texture idea. One thing I would comment on: I know so many games that do a great fog of war and then forget to mask out the audio. It usually becomes a game of echolocation to find the enemy base…

I assume the data for the fog is stored somewhere, so the CPU can access it? Or is this all GPU side?

Yeah, this is all completely on the GPU. I’ll have to mirror the functionality on the CPU; I’ll get to that when I get to the networking side of things. This way there’s no data going back and forth between CPU and GPU, which might be a good thing, although I’ll still need to send across material parameters to control the placement of the spheres by modifying the world position offset.

Yeah, sound is gonna be fun to get into (I did music tech at Thames Valley Uni). Love sound design.

This is gold. I’ve just stumbled on the issue that we don’t have an actual orthographic projection in the engine. The one you can choose on the camera seems to be just a perspective with a very narrow FOV.

So how do you render the sphere itself? SceneCapture2D?

There is no SceneCapture2D! :) That’s the reason I did this; I didn’t want to use SceneCapture2D. I render the sphere anywhere within the camera’s view so that it doesn’t get culled. I’m working on a way to parent the spheres to the camera. Then I turn off ‘render in main pass’ but turn on ‘custom depth’. I use the sphere rendered into custom depth and then remap the depth values to get a 0-1 mask that I can use in post process. I project the mask using world XY.
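
The remap itself is just a linear one in the post-process material, something like this (CustomDepth comes in from a SceneTexture:CustomDepth node; Near and Far are distances I pick):

// Remap custom depth into a 0-1 fog mask: 1 inside the revealed circle, fading to 0 with distance.
float Mask = saturate((CustomDepth - Near) / (Far - Near));
return 1.0 - Mask;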

I’d like to measure the difference between rendering to a SceneCapture2D vs custom depth. The docs say that rendering custom depth is more expensive than not using it, but since it uses no materials, just depth, the only cost is draw calls. I’m hoping a few draw calls of spheres isn’t going to cost much. I’ll also experiment with having them all as components under one actor to see if that means fewer draw calls.

The only drawback I can see with this method is that the resolution of the custom depth buffer is the same as your render resolution. Also, it’s not square, so the resolution in one dimension is higher than the other. But I’m not too fussed really, as the goal here is a soft sphere falloff that doesn’t need high res.

Thank you for the explanation! I’ll give this a try. The reason why I was asking about SceneCapture2D is because I want to render a number of masks around the camera position into a texture and then project it onto the landscape. I still need to figure out how to get the proper size of the projection, but doing it in a custom HLSL node, like you show, looks like a much saner solution than building a custom RHI pipeline just to get a proper orthographic camera.

Btw, you could add your sphere to the same actor as the camera, as a component, and place it in front of the camera to be sure it’s not culled. But I guess I’m missing something, as this seems too obvious.