I think I heard Scorpio is holiday 2017; it’s likely Sony would release around the same time. Either way, neither is coming very soon. The NX might not even compete with them; I heard it’s likely coming early next year. If they threw in hardware equivalent to Scorpio, it would cost a fortune, which goes against Nintendo’s philosophy so far.
Dynamic GI like VXGI already brings my 980 to its knees in small scenes. I don’t think we’re at the point where game-ready dynamic GI for large worlds at 60+ FPS is an option (even 30 FPS is a stretch). There will likely be more methods of faking it before we see it happen, which probably won’t be until a 1070/1080 is considered an average gaming card.
Nvidia’s tech is always heavily unoptimized (sometimes on purpose), so I’m not surprised. You can’t use it as a standard or a barometer.
Quantum Break is one of the only games with TRUE dynamic GI, and it runs on consoles, which have roughly $80 GPUs.
Also, the AMD 480 is only $199 and outperforms the Nvidia 980.
Epic’s SVOGI ran on a single GTX 680, which is 3.1 TFLOPs. And now we have consoles coming out with 12 GB of RAM and 6 TFLOPs, and the 5.5-TFLOP AMD 480 costing $199.
There’s literally no excuse. It has to be done.
I used VXGI as an example because of its level of quality. There are cheaper solutions out there with fewer light bounces, and alternative techniques, but they are still very performance-heavy regardless of what GPU you have. That’s my main point: an average gaming GPU is somewhere between a 560 and a 770 (even that is a high estimate), or the equivalent AMD card. Adding this feature to your game is a lot of work for a very limited user base.
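To put a rough number on why voxel-based GI is so heavy, here is a back-of-envelope memory estimate for a VXGI-style voxel volume. The resolution, bytes-per-voxel, and cascade count below are illustrative assumptions, not NVIDIA’s actual figures:

```python
# Rough memory estimate for a voxelized GI volume (illustrative assumptions:
# a clipmap of `cascades` levels, each resolution^3 voxels, with
# `bytes_per_voxel` for packed radiance + opacity; real implementations vary).
def voxel_volume_mb(resolution=128, bytes_per_voxel=16, cascades=5):
    return resolution ** 3 * bytes_per_voxel * cascades / (1024 ** 2)

print(voxel_volume_mb())    # 128^3 * 16 B * 5 cascades -> 160.0 MB
print(voxel_volume_mb(64))  # halving resolution cuts memory 8x -> 20.0 MB
```

The cubic scaling is the point: doubling voxel resolution multiplies memory (and roughly the cone-tracing cost) by eight, which is why even modest quality bumps hit mid-range cards so hard.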
I want it to happen, don’t get me wrong. Everything I create uses dynamic lighting, so I’m not arguing against it at all, and I really hope to see DFGI (or a similar feature) become standard soon. I just don’t think it’s feasible right now for the type of games that would benefit from it the most (dynamic open-world games).
It’s totally possible, but even with the new consoles it’s just not a priority for Epic right now, since none of their projects would benefit from it. Of course, you’re free to use LPVs, but they’re hardly an ideal solution now that Lionhead is no longer working on them.
If you want to see an example of how good realtime GI can potentially be, though, look at what CRYENGINE V has done with SVOTI (which is basically SVOGI renamed). It runs great, with hardly any notable performance impact, and it more or less works out of the box with any scene you throw at it. But then, that’s an engine built for dynamic lighting in the first place.
VXGI, all things considered, does actually run well, assuming you’re running it at the bare minimum settings it offers. On a 900-series or newer card, the performance hit is about the same as LPVs, just at much higher quality. However, VXGI’s biggest drawback, if you’re targeting Neo and Scorpio, is that it doesn’t run on consoles at all. It’s a PC-only effect, so you can’t build a multiplatform game that relies on it in some way. In 5-10 years or so, I would expect things like VXGI to become industry standard, as VXGI cranked all the way up does actually rival Lightmass in terms of quality.
At the moment, at least for my game, dynamic lighting without any GI at all already makes it difficult to hit 60 FPS in a scene with a lot of vegetation. Lightmaps and prebaked lighting in Unreal right now are pretty much second to none, nobody can argue that, but the engine does need a bit more attention when it comes to realtime lighting as a whole, not just realtime GI.
First, the doubling of performance on high-end cards is not actually enough to compensate for the roughly tenfold increase in complexity that volumetric GI needs. A few more generations will let you do it, assuming we don’t also have to render 120 Hz VR scenes in dual 8K stereo…
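As a quick sanity check on that arithmetic, assuming each GPU generation roughly doubles performance (an optimistic rule of thumb, not a guarantee), a tenfold complexity gap takes about four generations to close:

```python
import math

# Generations needed to close a cost gap, assuming each GPU generation
# multiplies performance by `gain_per_gen` (2.0 = optimistic doubling).
def generations_needed(cost_multiplier, gain_per_gen=2.0):
    return math.ceil(math.log(cost_multiplier, gain_per_gen))

print(generations_needed(10))       # 10x gap at 2x/gen -> 4 generations
print(generations_needed(10, 1.5))  # at a tamer 1.5x/gen -> 6 generations
```

And that budget assumes all of the gain goes to GI; if resolution and framerate targets rise at the same time, the gap closes even more slowly.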
Second, even if the highest end can do it, the majority of game players have significantly lower-spec hardware. In fact, the majority of people in the world who pay for games use only built-in “processor graphics” (Intel or AMD integrated GPUs).
Third, even the console refreshes aren’t all that fast or big, and they don’t have the capacity required for large GI levels. And any game released will also need to work on the pre-refresh hardware, in order not to cut off its own sales channel.
I agree that real-time GI will be fantastic, and I love all the research and experimentation happening around it. But I’m a realist, and I believe we’re still a few hardware generations away from the mainstream GI breakthrough.
I wouldn’t really say that. There’s a lot the engine is lacking, such as the ability to bake lights down when you want to. It’s very rare that you’ll come across a scene that needs 100% dynamic lighting; often you can get away with baking 90% of the lights and going dynamic only on the things that absolutely need to move or change. Things like UE4’s stationary lights are great for dynamic scenes where you want to save a few frames, provided your lights never need to move and rarely overlap. When I used CryENGINE for a lot of environment art, the one feature I wanted most was the ability to bake down lights, specifically for interior lights that would stay constant in the scene.
In a perfect world, we would have a system where we could have a higher amount of stationary lights and maybe a handful (3-6) of realtime GI lights to handle anything they can’t cover. Whether or not that’s possible, I can’t say, but one can always hope.
Quantum Break does not have dynamic GI. It has dynamic effects that work in screen space, like screen-space local GI, SSAO (line sweep), SSR, and screen-space raycast occlusion. These effects just complement a low-resolution, probe-based static GI.
Like jwatte, I also disagree.
You have more powerful cards now, but people will want to render 4K and/or VR at 90 FPS.
Devs will also want to use that power for higher-polygon meshes, higher-resolution textures, and better quality for pretty much everything.
You can’t just take the performance increase of a new GPU generation and put it all into one feature.
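To put rough numbers on the 4K/VR point, compare the shaded-pixel throughput each target demands. The headset figure uses the per-eye panel resolution of the 2016 Rift/Vive; real VR renderers supersample above panel resolution, which this sketch ignores:

```python
# Pixels shaded per second for different render targets
# (ignores VR supersampling and lens-distortion overdraw).
def pixels_per_sec(width, height, fps, views=1):
    return width * height * fps * views

p1080 = pixels_per_sec(1920, 1080, 60)           # ~124M px/s baseline
p4k   = pixels_per_sec(3840, 2160, 60)           # ~498M px/s
pvr   = pixels_per_sec(1080, 1200, 90, views=2)  # two eyes at 90 Hz, ~233M px/s

print(p4k / p1080)  # 4.0  -> 4K alone eats a full generation's doubling twice over
print(pvr / p1080)  # 1.875, before supersampling
```

So a card that is “twice as fast” doesn’t even cover the jump to 4K60, let alone leave headroom to spend on dynamic GI.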
The issue is that there is still no good, robust, scalable, and flexible solution that can be applied to an array of game types (as UE4 is meant to be) without a significant performance cost.
And IMO the limiting factor is research and man-hours. The feature is still considered a luxury rather than make-or-break, so I doubt it will get significant effort assigned to it.
VR is a gimmick. Barely 5% of games will be VR, so I don’t get where you’re coming from.
Less than 1% of games today are VR. Again, I don’t get where you’re coming from.
Secondly, hardly anyone has a 4K TV or monitor today. Lastly, have you seen games today? They have high enough polygon counts and textures. It’s lighting that is holding everything back.
The Division shows that you can throw a ton of objects on screen today. Look at the character models in games like Tomb Raider, QB, Ryse, Uncharted, and The Order: 1886; they are crazy impressive.
The unbelievable part is that we are still early in the generation.
But the culprit is lighting. No one has yet matched the Watch Dogs 2012 E3 demo. It’s by far the most impressive video game demo ever showcased, and the reason is the lighting.
You can’t be serious. VR might be a gimmick or it might not; time will tell. What’s true is that companies are pushing for it: several are spending big money on consumer versions of the hardware, and several more on mobile. Epic has already made two tech demos for VR and made the UE4 editor work in VR.
In my eyes VR will most likely be a gimmick, but I have tried it and I understand why people like it. If there’s a market, it will keep existing; just because you ignore it doesn’t mean it won’t exist.
What’s your source for saying barely 5% of games will be VR? Please enlighten me, because I don’t think Facebook, Google, Sony, Samsung, HTC, and LG are spending millions to fight over a mere 5% of the gaming population.
The new “PS4.5” and the X1’s Scorpio are coming to support VR and 4K this generation, not just to make games prettier.
The GI solution has to work in a ton of situations, which is why a solution a specific game uses is not necessarily a good one for UE4 to add; most of the time those solutions only work for the kind of situation that game uses them in.
Just want to chime in here and say that VXGI is essentially the same as SVOGI. It’s usable on a modern platform as long as you’re not artistically greedy with it. If you start getting ridiculous with the resolution, or start using multi-bounce and translucency, then you are going to pay a heavy cost. VXGI is really good; it’s also relatively well optimized and essentially open source. So basically, we have SVOGI already.
Aside from that, just because the 1080 exists doesn’t mean the margins have suddenly changed. Publishers are still listing 600-series cards as minimum specs even on titles released this year. The problem is you can’t just say “well, those users won’t get DGI then”. You either build a game around DGI or you don’t; there’s no in-between.
Epic was working on surfel-based GI, but they had to stop because other things required more attention. I’m sure they’ll come back to it. And sorry, but screen-space GI is not a solution, so Quantum Break doesn’t count IMO, especially as an engine-level solution.
As DV says above, the challenge of making a GI solution for a single game is vastly different from making one for an engine. An engine needs to be reused across thousands of different projects, and the tools need to work on a big chunk of those platforms.