A question about real time lighting vs CG lighting.

So video games have come a long way since the PS1/N64 days. We can generally get graphics that are acceptable looking while running at 30 or 60fps.

However, there’s still something about the lighting in games that looks way behind even old CG movies. I’m not even going to touch upon texturing, polycounts, anti-aliasing, etc., since those all have an easy answer.


I know CG lighting takes a lot longer to calculate than real-time lighting. But I want to know: what exactly is happening during that calculation process that games obviously have a harder time matching?

I was thinking, could it be tied to rays? Basically, is a light source in CG sending rays to each pixel on an object and shading them much more appropriately? But if that were the case, how come per-pixel lighting in games still doesn’t look as good? Is there more accuracy involved? I imagine there are more scientific/complex techniques involved. For example, CG lighting calculates the decay of light as it travels over a long distance. But then it’s possible to have something like that in games too (inverse-square falloff).
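For what it’s worth, the inverse-square falloff I mean is nothing fancier than this; just a toy sketch of my own, not anyone’s engine code:

```cpp
// Toy sketch of inverse-square light falloff: intensity divided by the squared
// distance between the light and the surface point. Not engine code.
#include <algorithm>

struct Vec3 { float x, y, z; };

float InverseSquareAttenuation(const Vec3& lightPos, const Vec3& surfacePos, float intensity)
{
    const float dx = lightPos.x - surfacePos.x;
    const float dy = lightPos.y - surfacePos.y;
    const float dz = lightPos.z - surfacePos.z;
    const float distSq = dx * dx + dy * dy + dz * dz;
    // Clamp to avoid dividing by zero when the surface point sits on the light.
    return intensity / std::max(distSq, 1e-4f);
}
```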

By the way, the posted picture of Toy Story, while it looks ugly today, still took time for Pixar to re-render on a massive renderfarm.

http://watchingapple.com/2009/09/pixars-blistering-rendering-speed-for-toy-story-3d/

(I’m not a 3D programmer or anything like that, I’m just expressing guesses)

Well, I would say that performance is a big problem. You are still bound by hardware limits, and scenes within a movie have far fewer changing parts than a game does, plus pre-defined scenery.
Reflections in offline CG are actually accurate, while in modern engines you have to use SSR or reflection captures, which both work only more or less well. In UE4 the reflection captures are static and only work with pre-baked lighting.
Another issue is global illumination; we don’t have a proper real-time solution yet. Lightmass is OK, but has its limitations: no shaped lights (i.e. no rectangular lights, as far as I know), emissive materials only get one bounce, and the lightmap resolution is limited - cranking it up causes a massive spike in VRAM usage.
There are still issues with shadows, as shadow maps also have limitations and ray-traced shadows only work with static meshes (I wonder why BSPs don’t work).
Particles are still more or less textures, and volumetric particles are still nowhere near as good as the ones in 3D applications like Houdini.

I wish I could tell you how it’s actually tied to the technical side of light rendering, but I would be the wrong person for that. I still hope this is at least a bit informative.

Greetz,
Dakraid

You can use per-pixel lighting/shadows in games today, just like you said, but unlike in precomputed CG you can’t bounce and trace rays. So it’s impossible to create realistic-looking reflections, refractions, and caustic effects, and GI is much more approximated than in movies.

However, until recently Pixar didn’t use GI in their lighting; they would do really complex lighting setups with many, many lights to get the look they were going for. That’s something Unreal can actually do: many lights, as long as no more than 4 affect a surface. The main difficulty is shadows: getting high-detail shadows, and shadows from more than one light source, takes a lot of computing power, which is why there’s light baking.

With the deferred renderer, you can have as many lights as you want affect a particular surface (screen pixel).
However, shadowing is harder – you can’t have as many shadow-casting lights as you want.
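To make that concrete, here is a rough toy version of the deferred idea (my own sketch, not UE4’s actual shading code, and the names are made up): each screen pixel’s surface data is read back from the G-buffer once, and every light that touches it is accumulated in a loop, so there is no hard per-surface light cap. Shadowing is deliberately missing here, and it’s exactly the part that stays expensive.

```cpp
// Toy deferred-shading loop: read one G-buffer sample, accumulate all lights.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  Normalize(const Vec3& v) { const float l = std::sqrt(Dot(v, v)); return {v.x / l, v.y / l, v.z / l}; }

struct GBufferSample { Vec3 position; Vec3 normal; Vec3 albedo; };
struct PointLight    { Vec3 position; Vec3 color;  float intensity; };

Vec3 ShadePixel(const GBufferSample& g, const std::vector<PointLight>& lights)
{
    Vec3 result{0.0f, 0.0f, 0.0f};
    for (const PointLight& l : lights)                      // as many lights as you like
    {
        const Vec3  toLight = Sub(l.position, g.position);
        const float distSq  = std::max(Dot(toLight, toLight), 1e-4f);
        const float nDotL   = std::max(Dot(g.normal, Normalize(toLight)), 0.0f);
        const float atten   = l.intensity / distSq;         // inverse-square falloff
        result.x += g.albedo.x * l.color.x * nDotL * atten; // simple diffuse term;
        result.y += g.albedo.y * l.color.y * nDotL * atten; // no shadow test here,
        result.z += g.albedo.z * l.color.z * nDotL * atten; // that is the expensive part
    }
    return result;
}
```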

Old-school CG used ray tracing, so it got nice round shadows, penumbras, and reasonably accurate reflections (including caustics, specular, etc.)
Re-evaluating the scene from the point of view of the bounce is, relatively speaking, cheaper in a ray tracer than in a projective renderer like typical game renderers (like UE).
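Roughly what that looks like inside a ray tracer, as a hedged toy sketch (IntersectScene is a made-up stub standing in for a real scene query; this is not UE or any particular renderer’s code): when a ray hits a reflective surface you mirror the direction around the normal and trace again from the hit point, so bounces fall out of the structure naturally, even though each one costs another scene traversal.

```cpp
// Toy recursive trace showing how mirror reflections come from re-tracing
// the scene from the hit point.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  Sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  Scale(const Vec3& v, float s)     { return {v.x * s, v.y * s, v.z * s}; }
static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Hit { bool valid; Vec3 position; Vec3 normal; Vec3 surfaceColor; float reflectivity; };

// Stub: a real ray tracer would test the ray against the whole scene here.
static Hit IntersectScene(const Vec3& /*origin*/, const Vec3& /*dir*/)
{
    return Hit{false, {}, {}, {}, 0.0f};
}

Vec3 Trace(const Vec3& origin, const Vec3& dir, int depth)
{
    const Hit hit = IntersectScene(origin, dir);
    if (!hit.valid || depth <= 0)
        return Vec3{0.0f, 0.0f, 0.0f};                      // background / recursion limit

    Vec3 color = hit.surfaceColor;                          // local shading would go here
    if (hit.reflectivity > 0.0f)
    {
        // Mirror reflection: r = d - 2(d.n)n, then recurse from the hit point.
        const Vec3 reflected = Sub(dir, Scale(hit.normal, 2.0f * Dot(dir, hit.normal)));
        const Vec3 bounce = Trace(hit.position, reflected, depth - 1);
        color.x += hit.reflectivity * bounce.x;
        color.y += hit.reflectivity * bounce.y;
        color.z += hit.reflectivity * bounce.z;
    }
    return color;
}
```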

Thanks for some of the answers, guys.

I actually wanted to avoid bringing up Global Illumination, because as someone mentioned, that was used long after the first Toy Story.

As for a large number of lights giving you CG quality? I think that’s both a yes and a no. One of the first things I did in UE4 was import a character and try to light it using 3 dynamic lights. It definitely gave nicer results than using 1 ordinary light. But then I remember Epic made the Samaritan demo and claimed it had 123 lights in it, and it still doesn’t look up to Toy Story quality.

Regarding ray tracing, I’m guessing that’s a better culprit. Lightmass proves you get better lighting with just one light source than with multiple real-time ones. However, Toy Story again didn’t use that. It was actually based on scanline rendering (because ray tracing the movie would have taken forever on those computers back then). So there’s something there that games still have to learn from.

It’s hard to compare that stuff to Pixar movies, since they are completely different visual styles. The main differences compared to a game would be things like anti-aliasing and depth of field, which aren’t up to the same quality level.

But there’s also the sheer polycount.
I remember having seen a 3D model of Buzz Lightyear which had close to a million triangles. One frame of Toy Story took 1 GB of storage space for the geometry (crazy back then :wink: ).
Although it was only rendered at 1536×922, it used 48-bit color depth.
In terms of lighting they could use exhaustive raytracing:

Another source says:

After all, there were/are some approaches for real-time raytracing: http://www.wolfrt.de/
However, they require off-the-shelf but specialized hardware…

It would already be a nice option to have UE4 build the static lighting with just one CPU core, instead of multi-threaded…

Phew, I think I got quite a few things wrong. ^^ But it’s nice to learn something new.

I have a hard time believing that they had 1 GB of assets in a frame; rendered uncompressed files can be pretty large, but geometry wouldn’t take much space, and they didn’t have nearly the texture complexity that we have these days.
Poly count in games is pretty high; depending on what you’re doing, most things can look smooth.

Hmm, don’t know. I guess it depends on the content, and 1 GB was maybe a rough figure… I remember a report in a (serious) magazine saying “1 frame would fill your entire 1 GB HDD”. And with a 1 GB hard disk you were the king of the hill back in 1995 :slight_smile:
A lot of the geometry data was spent on the grass and other foliage, modeling every single blade, which was a painstaking process as tools like SpeedTree were still science fiction…
Of course they had “simpler” textures back then, but what they couldn’t achieve with textures and shaders they had to build in geometry, hence the high polycount…
I’m curious, however (but could not find information about it), what the 117 Sun workstations had under the hood and how fast 117 current workstations would complete the job…

Maybe they meant 1 GB of RAM, which would have been a lot back then.

Hey, let me try to explain this.

Darthviper is right up to a point: in the past, the offline renderer of choice for VFX (film and animation) has been PRMan (better known as RenderMan, although the RenderMan name really refers to the specification rather than the actual program) because of its quality in rendering shadows, subpixel displacements, motion blur, and antialiasing.
Everyone is wrong in inferring that the benefit came from rays, though. PRMan, until “recent” versions, was a scanline rasterizer, much like game technology nowadays.
What’s different is that offline we have more time to compute for quality. DOF was not based on depth algorithms but on jittered samples. Motion blur was calculated in pretty much the same way, but using time samples instead of simulating the aperture. Antialiasing could be performed really well using rasterizing technology and clever algorithms.
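A loose illustration of what “jittered samples” means, as a toy sketch of my own (nothing to do with PRMan’s internals; RenderSample is a made-up stub): you average many samples per pixel, each with a random lens offset for DOF and a random shutter time for motion blur.

```cpp
// Average many jittered samples per pixel: lens jitter gives depth of field,
// time jitter gives motion blur.
#include <random>

struct Color { float r, g, b; };

// Stub standing in for whatever actually shades one sample of the scene.
static Color RenderSample(int /*px*/, int /*py*/, float /*lensU*/, float /*lensV*/, float /*time*/)
{
    return Color{0.5f, 0.5f, 0.5f};
}

Color RenderPixel(int px, int py, int samples, std::mt19937& rng)
{
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    Color sum{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < samples; ++i)
    {
        const float lensU = uni(rng);   // jittered position on the lens aperture
        const float lensV = uni(rng);
        const float time  = uni(rng);   // jittered time within the shutter interval
        const Color c = RenderSample(px, py, lensU, lensV, time);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    return Color{sum.r / samples, sum.g / samples, sum.b / samples};
}
```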
Also, a real winner was that you could program any shader in RSL (the RenderMan Shading Language), thus offloading particular light-interaction algorithms onto the shader and using the rasterizer to accelerate that. Exactly what game engines do now.
Deep shadow maps provided penumbras using clever rasterizing schemes that made every CPU cycle count.
GI was pretty much faked with lights. I remember when I first saw an ambient occlusion render and felt like it was some alien technology.

Raytracing was not really widespread until almost a decade ago, when computers became able to calculate the complex ray interactions that come with that methodology.
Right now raytracing has become somewhat the norm, along with RenderMan-compliant renderers that integrate the best of both worlds, albeit with more difficulty in using them.

Another key factor is compositing. After rendering, there are an infinite number of tweaks and fixes performed on a render, even on an animated feature. Stuff like glows, haze, blooms, and balancing the various properties of the render itself is done in 2D with a compositing app (over time comp has become somewhat 3D as well, so the line is blurring).
And let’s also mention the fact that every time you know what you will see on screen, you optimize just for that and use all sorts of tricks to fake it. A game is a lot less forgiving.

But I think most of all it’s presentation. If you see a game, you see it from a player’s perspective. It behaves like a game, hence losing all of its photoreal appeal just by not simulating camera and movie language. Try watching a trailer of “The Order” on any games site. That game looks almost like a movie, because it’s strictly tailored to behave like one. Camera angles, light, mood, and post-processes are there just to emulate that. Of course the player loses a lot of their power and freedom, but the result is indeed stunning, and I can surely say that 7 years ago we would have really struggled to make a thing like that render in less than 2 hrs per frame offline, and with a lesser result, and then it would have had to be fine-tuned in comp for a lot of iterations and a lot more time.
But now we get it at 30 fps on a PS4.

Bottom line, at least for me: UE4 is pretty capable of doing CGI right now. You just need to compromise and work with the technology to understand its limitations and exploit its good points. No, you can’t do The Avengers with it. Yes, you can do a very pretty CGI cutscene or a short movie.
The main differences would be antialiasing, DOF, motion blur, shadows, and reflections. It’s a lot if you just read the list, but we in VFX faked those features a lot in the past in a similar way, so I reckon it would be doable to make something very good-looking now.
Maybe not at 30 fps, though. More like 2-3 fps…

Well, are we actually talking about in-engine rendered movies etc., or scenes where the player has control?
If we are talking about the first case, then take CryEngine for example. If you go for high-end hardware, you certainly can end up with amazing results.
Now, trailers are known for not portraying reality (Watch Dogs, for example), so if we are talking about the latter, then trailers are out of the picture, including pre-recorded gameplay demos from the publisher.
Also, I wouldn’t say that “The Order” is almost up to movie level. Ryse is closer to being there, but “The Order” has to follow behind (though that might just be my impression :)).

Creating a convincing presentation in cases where the player has the greatest freedom is the hardest. The artists have to make models and animation look good from any angle, while with limited player input the artists have a bit more control.
This also raises the question of how much freedom we grant the player. The less freedom the player has, the more control we have, and the more control we have, the better we can tailor the presentation.
Maybe in the future we will end up with games that have as much freedom as GTA5 (for example) but are indistinguishable from movies; that would require much more work and more advances in technology before we can really achieve those results with this much player freedom.

Greetz,
Dakraid

PS: Feel free to directly tell me if I’m wrong, so I don’t make false assumptions over and over again ^^

For the achievable quality of static lighting, this should make no difference.
I guess a lot of the “lesser” quality of game engine visuals compared to offline-rendered CGI comes down to the optimizations. For example: irradiance caching.
It speeds up the lighting build process by interpolation. But interpolation gives you a kind of eye-balled result.
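My loose mental model of that tradeoff, as a toy sketch (this is not Lightmass’s actual algorithm, and the distance weighting is just an assumption for illustration): indirect lighting gets evaluated at a sparse set of cached points, and every other shading point blends the nearby cache entries instead of doing the full computation.

```cpp
// Blend nearby cached irradiance samples instead of recomputing at every point.
#include <cmath>
#include <vector>

struct CachedSample { float position[3]; float irradiance; };

float InterpolateIrradiance(const float point[3], const std::vector<CachedSample>& cache)
{
    float weightedSum = 0.0f;
    float weightTotal = 0.0f;
    for (const CachedSample& s : cache)
    {
        const float dx = point[0] - s.position[0];
        const float dy = point[1] - s.position[1];
        const float dz = point[2] - s.position[2];
        const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        const float w = 1.0f / (dist + 1e-3f);   // closer cache entries count for more
        weightedSum += w * s.irradiance;
        weightTotal += w;
    }
    // Fast, but an estimate - which is where the slightly "eye-balled" look comes from.
    return (weightTotal > 0.0f) ? weightedSum / weightTotal : 0.0f;
}
```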

Take an offline renderer like POVray for example. Since the geometry is defined functionally, there is no such thing as polycount.
The quality is exclusively related to resolution and iteration depth. You just reach a point where further improvements become imperceptible.
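As a rough illustration of the “no polycount” point (toy code of my own, not how POV-ray is actually written): a sphere is just a centre and a radius, and a ray hits it exactly by solving a quadratic, with no triangles anywhere.

```cpp
// Analytic ray-sphere intersection: exact hit, independent of any mesh resolution.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  Sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns the distance along the ray to the nearest hit, or -1 if the ray misses.
// rayDir is assumed to be normalized.
float IntersectSphere(const Vec3& rayOrigin, const Vec3& rayDir, const Vec3& centre, float radius)
{
    const Vec3  oc = Sub(rayOrigin, centre);
    const float b  = Dot(oc, rayDir);
    const float c  = Dot(oc, oc) - radius * radius;
    const float discriminant = b * b - c;
    if (discriminant < 0.0f)
        return -1.0f;                              // no real roots: the ray misses
    const float t = -b - std::sqrt(discriminant);  // nearer of the two roots
    return (t > 0.0f) ? t : -1.0f;                 // only count hits in front of the origin
}
```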
I once rendered a sequence with a few simple geometry objects and materials but a complex light setup. With 60+ hours per frame, it felt like it took forever, but the result was amazing :slight_smile:

A good example of that is the Rebel Assault games from LucasArts. Since you had very little control over player movement (exactly none, in fact), they could get away with faking everything with sprites (which were superior to polygons at the time).

I did a little more research and I found something that could explain some of what Toy Story is doing.

For shadows, games could get close by rendering very high resolution shadow maps. Examples:

[screenshot: shadow map rendered at 256×256]

[screenshot: shadow map rendered at 1024×1024]

[screenshot: shadow map rendered at 15350×15350]

“Blocky shadows” is definitely a trait that gives off a “video game look”, whereas CG from the 90’s always had high-quality shadows. Even throwing a simple object into 3DS and rendering it with a simple point light will still yield very high-quality shadows. You can see the huge difference as you increase the shadow resolution, until further increases stop making a palpable difference. However, the highest-quality shadows eat up a ton of memory and greatly impact frame rate.

This was done on an HD 5770 at a screen resolution of 800×600. Now imagine doing this for just 10 objects. At 15350×15350, that’s about 4.4 GB of memory spent on shadows alone! :eek:
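The back-of-the-envelope math behind that figure, assuming 16-bit depth texels and one full-resolution shadow map per object (both assumptions on my part, not measured numbers):

```cpp
// Rough shadow-map memory estimate: resolution^2 * bytes per texel * number of maps.
#include <cstdint>
#include <cstdio>

int main()
{
    const std::uint64_t resolution    = 15350; // shadow map width and height
    const std::uint64_t bytesPerTexel = 2;     // assuming a 16-bit depth format
    const std::uint64_t shadowMaps    = 10;    // assuming one map per object
    const std::uint64_t totalBytes = resolution * resolution * bytesPerTexel * shadowMaps;
    std::printf("%.2f GiB\n", totalBytes / (1024.0 * 1024.0 * 1024.0)); // prints ~4.39 GiB
    return 0;
}
```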

That doesn’t mean no game could do this, though. I think if you made a cutscene with maybe just 2 people in a very small room and few objects, it could be done. Also, because shadow quality depends on how close the camera is to what it’s viewing, far-away shadows could technically “cheat” and be rendered at a much lower resolution to save video memory.
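That “cheat” is roughly what cascaded shadow maps already do. A hedged toy sketch of the idea, where the distance thresholds and resolutions are made-up numbers rather than anything a real engine uses:

```cpp
// Pick a lower shadow-map resolution for geometry farther from the camera,
// since blocky shadows are far less noticeable in the distance.
int ChooseShadowMapResolution(float distanceToCamera)
{
    if (distanceToCamera < 500.0f)  return 4096; // near the camera: full detail
    if (distanceToCamera < 2000.0f) return 2048; // mid range
    return 512;                                  // far away: blockiness barely visible
}
```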

At least this answers the “shadow” component of Toy Story. The actual lighting involved still seems like a mystery to me. Is it possible the texture, shadowing, and anti-aliasing quality is just really, really high? Or are the shaders responsible for each toy incredibly long and complex?

Yes, it does. Kind of.
Another thing is the “lossless” geometry description. A sphere is a perfect sphere, defined by a radius, not a set of triangles.
This image was created with POV-ray and took about 5 minutes to render. Peak memory usage was around 300 Mbyte.
[attached image: POV-ray render]

In a raytracer you have a lot more control over light propagation. Plus, raytracers really bite the bullet and calculate each shading point.
UE4 does irradiance caching, which involves some interpolation.
From that point of view, the “lesser” look of UE4 compared to Pixar stuff could be described as optimization artefacts, due to the unwillingness to wait weeks/months for a static light build.

Well, as stated above, they are 800,000 machine-hours complex.