FOVO: A new 3D rendering technique based on human vision

Gamasutra: Robert Pepperell's Blog - FOVO: A new 3D rendering technique based on human vision (Gamasutra 2020)

[images: Nordic2, Unity-FPS-2]

Does anyone have experience with this tech, or something similar? Looks outstanding for wide field of view rendering, and distinctly different/better than a simple postprocess lens distortion.

I’m reluctant to reach out to the company (Fovotec) since I’m just an indie experimenting :wink:

I’m working on a game that suits ultrawide (or ultra-tall) displays, but I either end up with a highly cropped look at normal field of view, or I end up with the falling-into-a-black-hole look at high field of view:

This does the exact same thing and is already built into UE4 for free: Panini projection. Just don't crank it up; use a very low value, something like 0.2 for 160 FOV, not 1 for 90 FOV like the wiki shows, or else it looks awful.
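For anyone curious what that built-in effect is actually doing, the underlying Panini mapping is simple enough to sketch. A minimal Python version of the standard Panini cylindrical formula (my own sketch, not engine code; d is the blend amount, with d = 0 reducing to an ordinary rectilinear projection):

```python
import math

def panini_x(azimuth_rad, d):
    """Horizontal screen coordinate for a ray at the given azimuth.

    Standard Panini cylindrical projection: d = 0 reduces to the usual
    rectilinear (pinhole) projection; larger d compresses the edges.
    """
    s = (d + 1.0) / (d + math.cos(azimuth_rad))
    return s * math.sin(azimuth_rad)

# At 80 degrees off-axis (the edge of a 160-degree horizontal FOV) the
# rectilinear projection blows up, while a small Panini amount tames it:
phi = math.radians(80)
print(round(panini_x(phi, 0.0), 2))  # 5.67 (same as tan(80 deg))
print(round(panini_x(phi, 0.2), 2))  # 3.16
```

This is why a low value like 0.2 is enough at wide FOVs: even a small d pulls the edge stretching way down.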


Outstanding, I had no idea! Thanks so much for the tip :wink:

Self-correction: FOVO is slightly higher quality and faster, but the difference is so small it just doesn't matter except at extreme Panini projection levels.

How can you even tell? This article has like… no information about how it works. It reads like an advertisement.

It's also filled with flat-out BS.

Either way…
You could potentially achieve the same thing by re-writing over half the engine.

I mean, if you get to that point… maybe creating your own camera primitive with its own pre-render custom pipeline would be the best / more sound solution.

The rendering side has quite a lot to it.

Increasing the FOV past what is natively possible means changing nearly everything about how the engine works, frustum culling included, since it is linked to FOV.
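To illustrate the coupling (a toy sketch with made-up numbers, not how the engine actually implements it): the half-angle a plane-based frustum cull tests against comes straight from the FOV, so widening the FOV widens every cull plane and lets far more objects survive culling.

```python
import math

def inside_horizontal_frustum(x, z, fov_deg):
    """Crude test against the two side planes of a horizontal frustum.

    Camera looks down +z; a point is kept if it lies within the FOV
    half-angle on either side. Real engines test bounding volumes
    against four planes, but the FOV dependence is the same.
    """
    if z <= 0:
        return False
    return abs(x) <= z * math.tan(math.radians(fov_deg) / 2.0)

# The same point is culled at 90 FOV but survives at 160:
print(inside_horizontal_frustum(3.0, 1.0, 90))   # False
print(inside_horizontal_frustum(3.0, 1.0, 160))  # True
```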

But all of this for what?
To make the end users throw up and get motion sick?

In VR, maybe, this could make sense. Eventually the Head Gear will get to a point where the screens aren’t just in front of your eye in order to allow some peripheral vision.

On the other hand, things are already prone enough to motion sickness as it is. Adding more stimuli is probably counterproductive.
If it wasn’t, you can totally bet that Oculus, Sony, everyone and their grandma would have jumped in with both feet on that bandwagon…

Also consider the rendering cost increase with something like foliage.

You go from 90 FOV to 160/170. That's nearly double.
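And with a linear projection it's arguably worse than double, because the view-plane half-width grows with tan(FOV/2), not with the angle itself. A quick sanity check (plain trigonometry, not measured engine numbers):

```python
import math

def halfwidth(fov_deg):
    """Half-width of the rectilinear view plane at unit distance."""
    return math.tan(math.radians(fov_deg) / 2.0)

# Going from 90 to 170 degrees doubles the angle, but the linear
# projection plane gets roughly 11x wider:
print(round(halfwidth(90), 2))   # 1.0
print(round(halfwidth(170), 2))  # 11.43
```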

Currently, on a 1080 Ti you can barely get 16k tris to run at over 60 fps at 4K with vsync.

Performance already drops quite significantly just increasing the FOV to 110 (and let's blame the card rather than the engine for that; it's essentially a dinosaur by now).

If you also add in the fact that you have to re-do all the math in order to render the scene "right" (which is probably the only realistic point the article brings to light), who knows where performance may end up…

Could also very well be that after implementing their plugin you have to revert just because of the performance drop…
It would have been nice to see a demo Unreal level in the article to spin up and get some stats from.

Still, notice how they didn't exactly release any stats at all, and rather warn the reader that (heavily paraphrased) "cost is unknown"…

I am not a graphics programmer so feel free to treat this as the ravings of a madman: But I think it might be possible to achieve at least the same visual effect without fundamentally changing everything about the engine.

All you need is the ability to render multiple views (in linear projection) and then stitch them together in a single view that can be mapped to any projection you like.

I made a "fisheye" "camera" a while back that did this, using render targets for the views and a spherized cube mesh to merge them. I got the idea from Fisheye Quake, which does something similar (except they render a full panorama, and they don't use a mesh because they're smarter than I am).
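The stitching step in approaches like that boils down to: for each output pixel, turn it into a view direction, then pick which of the six linear sub-views that direction lands on. A minimal lookup in one common cubemap convention (my own sketch, not the Fisheye Quake or UE4 code; face names and UV signs are an arbitrary choice):

```python
def cube_face(dx, dy, dz):
    """Pick which of six linear sub-views a direction lands on.

    Returns the face name and a 2D UV within that face, both in
    [-1, 1]. The dominant axis of the direction selects the face;
    the other two components, divided by it, give the UV.
    """
    ax, ay, az = abs(dx), abs(dy), abs(dz)
    if ax >= ay and ax >= az:
        face = "+x" if dx > 0 else "-x"
        return face, (-dz / ax if dx > 0 else dz / ax, dy / ax)
    if ay >= az:
        face = "+y" if dy > 0 else "-y"
        return face, (dx / ay, -dz / ay if dy > 0 else dz / ay)
    face = "+z" if dz > 0 else "-z"
    return face, (dx / az if dz > 0 else -dx / az, dy / az)

print(cube_face(0.0, 0.0, 1.0))  # ('+z', (0.0, 0.0))
```

Once you have (face, uv) you sample that face's render target, and the final output mapping (fisheye, Panini, whatever) is just a choice of how pixels map to directions.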

The main issue with my approach is that I’m only able to seamlessly merge the final color, not the individual gbuffers. So any screenspace effect that relies on information from other parts of the screen will not work correctly. However… If the engine had an innate ability to merge multiple render views then I think this could be done.

Conveniently… I think the engine can already do this. It looks like that is how the CaptureCube actor works (rendering 6 views with linear projection and stitching them together). The screenspace effects from CaptureCube behave as if it were a single image. Which suggests to me that the gbuffers are being merged.

Edit: I was wrong :frowning: Capture cube shows individual seams. Could have sworn it didn’t but I was wrong.

It's an interesting approach, but on top of the merge issues, the cost of that is also pretty high depending on the scene.

The reason I’m suggesting re-writing half the engine is because here performance would really matter. Even more than the regular system given the extra amount of rendered objects.

A post process can actually filter a linear 160 FOV to look half decent, but the cost is high.

Forcing the issue, you can probably use the same formulas camera lens filters use to correct pictures. It's essentially pincushion distortion…
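Those lens-correction formulas are usually the Brown-Conrady radial model. A minimal sketch (hypothetical coefficients for illustration; positive k1 gives pincushion, negative gives barrel):

```python
def radial_distort(x, y, k1, k2=0.0):
    """Brown-Conrady radial distortion of a normalized image point.

    Points are scaled by a polynomial in the squared radius:
    k1 > 0 pushes points outward (pincushion), k1 < 0 pulls them
    inward (barrel). Correction runs the same model in reverse.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# A corner point pushed outward (pincushion) vs pulled inward (barrel):
print(radial_distort(1.0, 1.0, 0.1))   # (1.2, 1.2)
print(radial_distort(1.0, 1.0, -0.1))  # (0.8, 0.8)
```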

It would be interesting to see how it works out given the downsides of Panini in UE4.

I did have a laugh at this line though.

"It started as a research project at Cardiff School of Art in the UK about 10 years ago when we began to realise that conventional images created with cameras and computer graphics engines weren't doing a very good job of representing how we see."

Seeing how everything turned toward the film industry and the fixation on how cameras work.

You’re right but at the same time, the engine has lots of effects that run poorly on 4k/VR/mobile. If the effect is important then you will budget for it, if you can’t budget for it then… well… you just can’t use it.

Just depends on how much you’re willing to pay for a wide FOV with more aesthetically pleasing distortion.
