Hey, let me try to explain this.
Darthviper is right up to a point: in the past the offline renderer of choice for VFX (film and animation) was PRMan (better known as RenderMan, although the RenderMan name really refers to the interface specification rather than the actual program), thanks to its quality in rendering shadows, sub-pixel displacement, motion blur and antialiasing.
Everyone’s wrong in inferring that the benefit came from rays, though. PRMan, until relatively recent versions, was a scanline rasterizer (the REYES micropolygon architecture), much like game technology nowadays.
What’s different is that offline we have more time to compute for quality. DOF was not based on depth-buffer tricks but on jittered lens samples. Motion blur was calculated in much the same way, using jittered time samples instead of simulating an aperture. Antialiasing could be performed really well using rasterizing technology and clever sampling algorithms.
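To make that concrete, here’s a minimal Python sketch of the stochastic sampling idea (not PRMan’s actual code; the `shade` callback is a hypothetical stand-in for a scene query): every sample gets a jittered lens position for DOF and a jittered shutter time for motion blur, and the pixel is just the average.

```python
import random
import math

def jittered_samples(n):
    """Stratified 1D samples in [0, 1): one jittered sample per stratum."""
    return [(i + random.random()) / n for i in range(n)]

def lens_sample(u, v, aperture_radius):
    """Map two uniform samples to a point on a circular lens."""
    r = aperture_radius * math.sqrt(u)     # sqrt gives uniform area density
    theta = 2.0 * math.pi * v
    return (r * math.cos(theta), r * math.sin(theta))

def render_pixel(shade, n=16, aperture_radius=0.5, shutter=(0.0, 1.0)):
    """Average n samples; shade(lens_xy, t) is a hypothetical scene query."""
    t0, t1 = shutter
    us, vs, ts = jittered_samples(n), jittered_samples(n), jittered_samples(n)
    random.shuffle(vs)  # decorrelate the strata from one another
    random.shuffle(ts)
    total = 0.0
    for u, v, t in zip(us, vs, ts):
        lens_xy = lens_sample(u, v, aperture_radius)
        total += shade(lens_xy, t0 + t * (t1 - t0))
    return total / n

# Toy usage: a "scene" whose brightness depends on lens offset and time,
# so the averaged result shows both blur effects at once.
print(render_pixel(lambda xy, t: 0.5 + 0.5 * math.sin(10 * (xy[0] + t))))
```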
Also, a real winner was that you could program any shader in RSL (the RenderMan Shading Language), offloading the specific light-interaction algorithms onto the shader and using the rasterizer to accelerate the rest. Exactly how game engines do it now.
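RSL itself looks like C; this Python sketch is only a rough analogue of what a simple diffuse surface shader computes per sample, to show why keeping the light loop in the shader and leaving visibility to the rasterizer worked so well. Names and values here are invented for illustration.

```python
def lambert_surface(N, light_dirs, light_cols, Kd=0.8, base=(1.0, 1.0, 1.0)):
    """Diffuse shader: accumulate N.L contributions over the light list."""
    r = g = b = 0.0
    for L, (lr, lg, lb) in zip(light_dirs, light_cols):
        ndotl = max(0.0, N[0]*L[0] + N[1]*L[1] + N[2]*L[2])
        r += lr * ndotl
        g += lg * ndotl
        b += lb * ndotl
    return (Kd * base[0] * r, Kd * base[1] * g, Kd * base[2] * b)

# One shading sample lit by a single white light from straight above.
print(lambert_surface((0.0, 1.0, 0.0),
                      [(0.0, 1.0, 0.0)],
                      [(1.0, 1.0, 1.0)]))
```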
Deep shadow maps provided soft penumbras using clever rasterizing schemes that made every CPU cycle count.
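For the curious, the trick (from Lokovic and Veach’s 2000 paper) is that each shadow-map texel stores a transmittance-versus-depth function instead of a single depth, so hair, smoke and soft edges filter correctly. A minimal sketch of the lookup, with invented data:

```python
import bisect

def transmittance(deep_texel, depth):
    """deep_texel: sorted list of (depth, transmittance) vertices."""
    depths = [d for d, _ in deep_texel]
    i = bisect.bisect_right(depths, depth)
    if i == 0:
        return deep_texel[0][1]       # in front of everything: fully lit
    if i == len(deep_texel):
        return deep_texel[-1][1]      # behind the last stored vertex
    (d0, t0), (d1, t1) = deep_texel[i - 1], deep_texel[i]
    w = (depth - d0) / (d1 - d0)      # linear interpolation between vertices
    return t0 + w * (t1 - t0)

# A texel that darkens gradually through a hair clump between depth 1 and 2.
texel = [(0.0, 1.0), (1.0, 1.0), (1.5, 0.4), (2.0, 0.1)]
print(transmittance(texel, 1.25))    # partial shadow -> soft penumbra
```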
GI was pretty much faked with lights. I remember when I first saw an ambient occlusion render and felt like it was some alien technology.
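The “alien technology” is actually a very simple Monte Carlo estimate: fire random rays over the hemisphere above a point and measure how many escape to the sky. A toy sketch, with `occluded` as a hypothetical scene callback rather than any real API:

```python
import random
import math

def sample_hemisphere(N):
    """Uniform direction in the hemisphere around normal N."""
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        len2 = d[0]**2 + d[1]**2 + d[2]**2
        if 0.0 < len2 <= 1.0:                 # rejection-sample the unit sphere
            inv = 1.0 / math.sqrt(len2)
            d = (d[0]*inv, d[1]*inv, d[2]*inv)
            if d[0]*N[0] + d[1]*N[1] + d[2]*N[2] < 0:
                d = (-d[0], -d[1], -d[2])     # flip into N's hemisphere
            return d

def ambient_occlusion(P, N, occluded, n=64):
    """Fraction of hemisphere rays from P that reach the sky."""
    hits = sum(1 for _ in range(n) if occluded(P, sample_hemisphere(N)))
    return 1.0 - hits / n

# Toy scene: a wall blocks every ray pointing in -x.
ao = ambient_occlusion((0, 0, 0), (0, 1, 0), lambda P, d: d[0] < 0)
print(ao)   # ~0.5: half the sky is blocked
```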
Raytracing was not really widespread until almost a decade ago, when computers became able to calculate the complex ray interactions that come with that methodology.
Right now raytracing has become somewhat the norm, alongside RenderMan-compliant renderers that integrate the best of both worlds, albeit with more difficulty in using them.
Another key factor is compositing. After rendering there are countless tweaks and fixes performed on every shot, even on an animated feature. Stuff like glows, haze, blooms and balancing the various properties of the render itself is done in 2D with a compositing app (over time comp has become somewhat 3D as well, so the line is blurring).
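As an example of the kind of 2D fix comp does all day, here’s a toy bloom pass (a NumPy sketch, not any real compositor’s node): isolate the highlights, blur them, add them back on top.

```python
import numpy as np

def bloom(img, threshold=0.8, radius=5, strength=0.6):
    """img: float array (H, W). Returns img plus blurred highlights."""
    bright = np.where(img > threshold, img, 0.0)       # keep only highlights
    # Separable box blur as a stand-in for a proper Gaussian.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, bright)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
    return img + strength * blurred

# Toy frame: one hot pixel bleeds a soft glow over its neighbours.
frame = np.zeros((32, 32))
frame[16, 16] = 4.0
print(bloom(frame)[16, 12])   # non-zero: the glow has spread
```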
And let’s also mention the fact that whenever you know exactly what will be on screen, you optimize just for that and use all sorts of tricks to fake the rest. A game is a lot less forgiving.
But I think most of all it’s presentation. When you see a game, you see it from a player’s perspective. It behaves like a game, and it loses much of its photoreal appeal simply by not simulating camera and movie language. Try watching a trailer for “The Order” on any games site. That game looks almost like a movie, because it’s strictly tailored to behave like one. Camera angles, light, mood and post-processing are there precisely to emulate that. Of course the player loses a lot of their power and freedom, but the result is indeed stunning, and I can safely say that 7 years ago we would have really struggled to render a thing like that offline in less than 2 hours per frame, with a lesser result, and then it would have had to be fine-tuned in comp over a lot of iterations and a lot more time.
But now we get it at 30fps on a PS4.
Bottom line, at least for me: UE4 is pretty capable of doing CGI right now. You just need to compromise and learn the technology well enough to understand its limitations and exploit its strong points. No, you can’t do Avengers with it. Yes, you can do a very pretty CGI cutscene or a short movie.
The main differences would be antialiasing, DOF, motion blur, shadows and reflections. It’s a lot if you just read the list, but in VFX we faked those features in much the same way in the past, so I reckon it would be doable to make something very good-looking now.
Maybe not at 30fps, though. More like 2-3fps…