NVIDIA GameWorks Integration

https://developer.nvidia.com/gameworksdownload#?dn=nvidia-hairworks-1-1-1
NVIDIA HairWorks 1.1.1

Maxwell and Pascal run the tech great, and yes, that's what it's built for, but there must be some way to offer a cut-down, less accurate version for slower hardware. Lionhead's LPVs were very performance-heavy at first as well; now they're more or less free in comparison, without any change in the hardware running them. Even if the lower-spec settings for VXGI are as problematic as LPVs are when it comes to issues like light leaking, the option to do something like that at least opens up the VXGI workflow a bit more. Right now, if you make a scene focused around VXGI, even if it runs great on high-end machines, without doubling the workload the scene flat out does not run well enough on something like a GTX 680 to call it playable. Setting a GTX 780 as a game's minimum spec in 2015/2016 is unreasonable to ask of consumers.

Even making a change like lowering the minimum MapSize from 32 to maybe 16 or even lower probably wouldn't need a significant rewrite of the system, while still lowering the performance hit. The Sci-Fi Hallway example included with the build is a great demonstration of why disabling VXGI in a scene built for it isn't going to work out too well.

I'm all for future-proofing the visuals in games; that's part of why I'm trying to make the tech work in the first place. But it is very difficult to convince non-artists on the team that the system is a good idea when their own computers can't comfortably play the game they're working on. Not to mention, as you said, VR, which when combined with VXGI will ensure nothing until 2020 can even consider running at the 90 fps that something like that requires (unless Nvidia has a trick up their sleeve to avoid a performance hit when using the tech in VR). I do want the tech to work; I've been messing with the settings for over a week to try to get it going well enough on lower-end systems, now that I have a few scenes that can use it well. It's just a challenge to get it into a state where it runs well. For now at least, my solution is just to fill the scene with fill lights that only turn on when VXGI is disabled and try to make it look as close as it can.
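
For what it's worth, here's a minimal sketch of how that fill-light fallback could be wired up in code (it could just as easily be done in Blueprint). The console variable name r.VXGI.DiffuseTracingEnable is an assumption about how the NVIDIA branch exposes its toggle, and AFillLightManager is just a hypothetical helper actor, so treat this as a rough outline rather than the actual setup:

```cpp
// Hypothetical helper: flips hand-placed fill lights on only when VXGI is unavailable.
// "r.VXGI.DiffuseTracingEnable" is an assumed cvar name; check the NVIDIA branch for the real one.
#include "GameFramework/Actor.h"
#include "Engine/Light.h"
#include "HAL/IConsoleManager.h"
#include "FillLightManager.generated.h"

UCLASS()
class AFillLightManager : public AActor
{
    GENERATED_BODY()

public:
    // The "fake bounce" lights placed by hand to approximate the VXGI look.
    UPROPERTY(EditAnywhere, Category = "Lighting")
    TArray<ALight*> FillLights;

    virtual void BeginPlay() override
    {
        Super::BeginPlay();

        // Assume VXGI is off unless the cvar exists and says otherwise.
        bool bVXGIEnabled = false;
        if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(TEXT("r.VXGI.DiffuseTracingEnable")))
        {
            bVXGIEnabled = CVar->GetInt() != 0;
        }

        // Only light the scene with the fakes when the real GI is off.
        for (ALight* Light : FillLights)
        {
            if (Light)
            {
                Light->SetEnabled(!bVXGIEnabled);
            }
        }
    }
};
```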

I reckon that at the current polycount of today's games, an Nvidia Pascal by the end of 2016, say the GTX 1080 Ti, will be able to run 4K at 90fps in VR with VXGI. At least the Nvidia Pascal Titan Y, or whatever they might call it, will.

So the very high-end Nvidia cards of 2016-2017, I believe, will handle 4K VXGI VR at high frame rates, particularly with improvements to UE4, VXGI and the drivers. Not to mention tons of optimisations being done for VR right now, including Nvidia's GameWorks VR efforts like "crushing" the edges of a scene (https://www.youtube.com/watch?v=1Dn2JKfje2o).

You rightly point out the hardware realities of the next five years for mainstream PC gamers. I'm still learning UE4, so from a layperson PC gamer's perspective it's indeed great if developers can "fill in the gaps".

What’s been brilliant about PC game developers is the scalability they’ve (almost) always implemented in games. I’ve always enjoyed how you could play through a game at low, low settings and still get a feel for it, while gamers with the top-notch equipment enjoy all the graphics possible (“really bad console ports” not included, of course).

Can you share how you are working with the scenes with VXGI on and off? I think that's a very noble approach to the whole matter. What's the method to toggle "force no precomputed lighting" on and off at runtime? I think that's a better approach than trying to use really low VXGI settings for low-to-mid GPUs.

So to sum up, at the end of the day the effort of making a game that has VXGI and non-VXGI all-in-one can pay off in terms of advertising and marketing opportunities. The game could (rightfully) be shown as: hey, this is how it looks on mainstream graphics cards (and it should still look good), but then, BOOM! with an Nvidia Pascal or better, check out full dynamic GI with VXGI. Yes, it goes back to the controversial "The Way It's Meant To Be Played"… but if you're a game developer trying to deliver the best experience for gamers, as long as the experience on mainstream Nvidia and AMD is good and not purposefully crippled, why not? I don't know the industry, but I suspect incorporating UE4, VR and VXGI in any game in the next five years will get a lot of support (and, let's be honest, free marketing) from Epic, Nvidia and Oculus.

So all the best, you’re at the very cutting edge and potentially looking at good rewards for your efforts, Lord willing! (That said, there is a very dark side of people being lost in photorealistic VR and not being able to adapt back to the real world… but that is something I suppose an individual developer has to consider themselves).

PS. As for VR, Oculus has already specified (IIRC) the Nvidia GTX 970 as the minimum… so for PC VR we're already in high-end territory. Obviously at this point VXGI is not suitable for VR, but high-end Pascals should be capable, as I estimate above.

Unless they completely break Moore's Law with Pascal, I doubt it'll be more than the standard 25% gap between the 900 and 1000 series cards. A GTX 1080 Ti (or whatever they choose to call it; I think it's still unannounced what numbering scheme they're going to use, and they might reset it like they did after the 9000 series) will not be able to run 4K at 90 fps in VR with VXGI in anything but a test scene. Even the 980 Ti right now struggles to run 4K in a fully populated scene at 60 fps; it'll be a few generations of cards before we reach that level. VXGI might be cheaper on Pascal and later cards, but even at the lowest settings it certainly isn't free, and a lot of games in VR will likely still continue relying on double rendering like 3D Vision does for quite some time, doubling the performance hit.

I do double the lighting work for every single scene. For VXGI I get it looking good with that first, then I turn off VXGI and use as many point lights as I need in order to simulate the look as closely as I can, completely defeating the point of dynamic lighting. Nothing that I’m doing, and nothing I can do, will let me use baked lighting as a fallback. The only solution I see so far is to make a second set of lights with shadows disabled to manually simulate the light bounces, and at that point if I’m going through so much effort, there’s little reason to continue using VXGI at all. By manually placing lights, you’re getting rid of the biggest advantage that it offers, the ability to change the content in the scene in a major way without breaking the lighting setup.

Debatable if it is or not without having a lower setting option to scale down to. It's a lot of extra work doing the lighting for every single room twice, and you always run the risk of having to make compromises in one lighting setup to better complement the other. It's not like particles or WaveWorks, where turning it off just turns off a few visual effects; using realtime GI fundamentally changes how an artist will go about lighting levels. It can be a great change too, letting level designers have more freedom with how the environments in the game react to the events going on in the scene. I see a lot of great things coming out of systems like VXGI in the future, it just needs to be a bit more adaptable to current hardware to be a viable option to develop a game around today.

Thanks for the insight into the on-the-ground reality… If it is difficult to “switch” between VXGI and Lightmass then VXGI and similar for the mainstream is still a few years away, but no more than 5 years I reckon.

One point with Nvidia Pascal is that it’s not really following Moore’s law anyway because they have a lot of architectural changes and are “jumping” from 28nm to 16nm.

Fair enough on those points, and Volta is only scheduled for 2018. But I do believe that by the end of 2017 the Nvidia Pascal "Titan Y" will be able to do 4K VXGI at 90FPS in real-world scenes. That's two years of quite a lot of optimisations with UE4, VXGI, GameWorks (including GameWorks VR), etc. Edit: Don't forget DirectX 12; in two years' time the speed optimisations will be quite significant.

So, time will tell, I guess it’s good that everyone is putting in the R&D right now!


Edit: Here's a nice (optimistic) post about what Pascal could be:
http://forums.anandtech.com/showthread.php?t=2436009
"GP100: 550 sq. mm. die, 13.75 billion transistors, 6144 CUDA cores… In terms of performance, we'll be looking at no less than a full doubling of the Titan X's power."

I'm personally guessing a 30%-50% improvement of the Titan Y Pascal over the Titan X. If Nvidia totally hits it out of the park, then 50%-100%, but that's very optimistic. The 50%-100% improvements would be more in the "advertised" areas like deep learning, etc.

I'm thinking back to the statements the CEO of Nvidia made: http://www.pcworld.com/article/2898175/nvidias-next-gen-pascal-gpu-will-offer-10x-the-performance-of-titan-x-8-way-sli.html
Not sure what that means for performance in gaming… they said that they're trying to make Pascal ten times more powerful than Titan X in compute tasks, but I can't translate that into anything useful for me. Maybe because I'm not good at speculating?

Yeah, the 10x (1000% increase) in speed is for specific "deep learning" type stuff which I barely understand. The 30%-50% increase is my personal guess based on the stuff I detailed above… most notably going from 28nm to 16nm, along with Nvidia's overall architectural change track record (post-Fermi). There's also the "CUDA cores" kind of stuff: if you're talking about the Pascal Titan Y having 4000 CUDA cores and their new 3072-bit (that's not a typo, AMD is shipping 4096-bit cards now) HBM2 memory implementation… certainly for gaming there should be big increases.

There's also all the GPU compute stuff, which will impact developers and media producers. Encoding and rendering are being revolutionised by GPU compute. Here's a render from Daz3D with Nvidia Iray on my GTX 660M (Kepler). For argument's sake I set the time limit to 5 minutes, at 1920x1080 (the only post-processing was a grey background and an increase in brightness with curves):

This is of course nothing compared to the stuff coming out of people with Octane and, say, two Titan Xs, let alone a GPU cloud rendering cluster (it's considered "realtime" in the latter case).

Besides hardware, as mentioned there's GameWorks VR and so on, so yes, I am very optimistic about Nvidia Pascal hardware and the related software. Let's take Lightmass, for example. Sure, the current quality when you crank up the settings is phenomenal. But if you took Nvidia Iray or VXGI or OpenCL in general and made a Lightmass-type baking system, you could get much faster baking.

To sum up, anything less than a 30% increase of the Titan Y over the Titan X in gaming (UE4, 4K, VR, etc.) and mainstream GPU rendering (Iray, Octane) would be an Nvidia misstep, in my opinion.

Edit: On the AMD side, the dual-Fiji card should be out soon and that should do 4K VR at 90FPS, etc. It will have to be liquid-cooled though, I think: http://wccftech.com/amd-dual-gpu-fiji-gemini-spotted-shipping-manifest-codenamed-radeon-r9-gemini/

If Lightmass is an option, you'll always get higher quality out of baked lightmaps than realtime GI. I have high hopes for VXGI, but unless you put it up at 64 cones or higher it can't match Lightmass, and it's unreasonable to expect it to. VXGI renders once per frame, whereas Lightmass has hours to process a scene at the highest quality possible. Realtime GI only makes sense in a scene that couldn't normally be done with prebaked lighting for some reason, such as placing down entire buildings at runtime, or large maps populated with hundreds of objects (like a forest).

Switching between them is impossible as well, since Lightmass uses static/stationary lights and VXGI only works with dynamic lights. The only real way to switch is to change all the dynamic lights to static ones and rebake the entire scene, which isn't really an option at runtime.
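
Just to illustrate why that is strictly an editor-time operation, here's a rough sketch (a hypothetical editor utility, not anything from the VXGI branch itself) of what "turn the dynamic lights static and rebake" amounts to; the mobility change only pays off after a full lighting build in the editor:

```cpp
// Editor-time sketch only: switching a VXGI-lit level over to Lightmass means changing
// every light's mobility and then running "Build Lighting", which is why it can't be
// done at runtime in a shipped game.
#include "EngineUtils.h"
#include "Engine/Light.h"
#include "Components/LightComponent.h"

void MakeLightsStaticForLightmass(UWorld* World)
{
    for (TActorIterator<ALight> It(World); It; ++It)
    {
        // Static mobility makes the light eligible for Lightmass baking...
        (*It)->GetLightComponent()->SetMobility(EComponentMobility::Static);
        // ...but the lightmaps are only valid after a full lighting rebuild in the editor.
    }
}
```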

LPVs are a good alternative with a lower performance hit right now, but they don't support spotlights or point lights, making them only really work in outdoor scenes.

If either LPVs added support for the other kinds of lights, or VXGI offered a low-performance-impact option, then dynamic GI would suddenly become viable for full projects overnight. The only reason someone wouldn't use it right now for a dynamically lit scene is that it alienates a lot of the PC gaming audience (and the entire console audience, if you're targeting that).

Hi all, just trying to get familiar with these Nvidia specific branches of UE4, particularly for ArchViz work and making things that much more realistic.

Is there a way to combine the features of the various Nvidia branches of UE4 or are we expected to compile each branch as we need those features? Really curious!

Thanks!

A.

I feel like I'm beating a dead horse at this point, but one thing I would like to mention is that a lot of the performance improvements are for individual features, not the system itself. There are certain features of VXGI that only run well on Maxwell but don't improve quality much. That being said, if you have already lowered the StackLevels to 3 (less isn't viable), disabled storeEmittanceInHDRFormat, lowered the MapSize, AND set the voxelization to use the lowest LOD on your meshes without reaching optimal performance, then you need a second solution. VXGI is intensive, but there are a lot of optimizations that can be made to make it run faster, including some within your assets, such as disabling it on certain materials or even reworking your meshes a bit to assist the voxelization process. I guess what I'm saying is that taking full advantage of VXGI is like taking full advantage of the hardware: you can do it, but it's a pain in the ***.
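
As a concrete (and hedged) illustration, the cvar-based part of that checklist might look something like the snippet below. The exact r.VXGI.* names are assumptions modelled on the settings named above (StackLevels, MapSize, storeEmittanceInHDRFormat), so the real spellings should be checked against the NVIDIA branch, and the asset-side work (material and mesh changes) obviously can't be expressed as console variables at all:

```cpp
// Sketch of the "turn everything down" pass described above, applied through console variables.
// The r.VXGI.* names are assumed from the settings mentioned in the post; verify them
// against the NVIDIA branch before relying on this.
#include "HAL/IConsoleManager.h"

static void SetCVarIntIfFound(const TCHAR* Name, int32 Value)
{
    if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(Name))
    {
        CVar->Set(Value);
    }
}

void ApplyAggressiveVXGISettings()
{
    SetCVarIntIfFound(TEXT("r.VXGI.StackLevels"), 3);               // fewer stack levels; below 3 isn't viable
    SetCVarIntIfFound(TEXT("r.VXGI.MapSize"), 32);                  // currently the minimum the branch allows
    SetCVarIntIfFound(TEXT("r.VXGI.StoreEmittanceInHdrFormat"), 0); // cheaper emittance storage
}
```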

Hi,
at the moment I am trying to create my first VXGI build after a long testing period, but unfortunately the build always fails. Are there any known problems with the VXGI branch? At the moment I am using Unreal Engine 4.8.

Best regards,

Check out: https://github//UnrealEngine/tree/4.9.1_NVIDIA_Techs. That's all the GameWorks techs compiled into one.

None that I am aware of; I haven't touched the 4.8 build in god knows how long. I would try the 4.9.1 build if I were you. I use that same one and I am unaware of any issues.

Thank you very much for your answer. I will try the 4.9.1 build. :slight_smile:

That is exactly what I was looking for.

Thanks so much!

You are awesome!!! Is there a way to support you?

Thanks, and yes. Check this post for details: https://forums.unrealengine.com/showthread.php?53735-NVIDIA-GameWorks-Integration&p=391404&viewfull=1#post391404

I have tested the 4.9.1 build. Unfortunately I am not able to build my VXGI projects; I always get this error:

[attached screenshot of the build error]

At the moment I don't know what I can do to solve the problem. I have no experience with the UnrealBuildTool…

Is that with NVIDIA's VXGI branch? I have tested packaging with my branch (the all-merged GameWorks branch) and it works fine (I did have to make a couple of tweaks, which I will commit soon, but none for VXGI as far as I can remember). I have been working on a BP-only project all week using the engine, packaging each night just to make sure. No issues so far.

Strange… I tested the 4.9.1 version with two of my VXGI projects and I always get the error. I will try to compile the build again. Perhaps that will help…

Epic, dude! I'm working on a project and I wanted to do something just like that. Mind giving me a few pointers on how you got it to work? (I'm relatively new to game dev, and even more of a noob at UE4.)