Yes, but then global illumination is my favorite area of research. And lighting is really the missing component from modern games. Sure, animation could be better, but simulation tools that just require more brute force are already out there, nothing to do about that. Materials are already as good as or better than what offline CG had a decade ago; there are quite good realtime approximations of everything except anisotropic materials. Texture resolution and mesh quality are already beyond what was available a decade ago in CG: Gollum had 5k polygons in The Two Towers, while main characters in games can reach over 100k polys now.
Even image quality, e.g. anti-aliasing and the like, is getting pretty good. But lighting is awful, and remains so. For example, this was created years ago:
Half-Life 2 models, textures, and animations, but all composited correctly with nice offline lighting. Global illumination is hard, very hard. It's easy to say "oh, you won't notice this artefact", or the shadow acne, or the overly hard shadow edges, or to claim that diffuse lighting is too low frequency to require a high-frequency solution. But heck, just look at what path tracing can do for Minecraft of all things:
The truth is, the biggest noticeable difference between Hollywood CG and games today is lighting. Film can use many lights, or path tracing, and so on, while games get to research, for example, a hybrid path-tracer/many-lights solution (DFGI) and hope it will eventually look good enough and run fast enough for a single short-range bounce, only usable in the right environment.
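To see why the gap exists, consider what a single diffuse bounce costs. Here's a minimal sketch (hypothetical scene, not any engine's actual code) of the Monte Carlo estimate at the heart of path tracing: error only falls off as 1/sqrt(N), so quadrupling the sample count merely halves the noise, which is what film render farms pay for and realtime budgets can't:

```python
import math
import random

def cosine_sample_hemisphere(rng):
    # Cosine-weighted direction about a +Z surface normal (Malley's method).
    u1, u2 = rng.random(), rng.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def incoming_radiance(direction):
    # Hypothetical environment: a bright patch near the zenith, dim elsewhere.
    return 5.0 if direction[2] > 0.9 else 0.2

def estimate_diffuse_bounce(albedo, n_samples, seed=0):
    # Monte Carlo estimate of one diffuse bounce at a surface point.
    # With cosine-weighted sampling, the cosine and pdf terms cancel,
    # leaving albedo * mean(incoming radiance over sampled directions).
    rng = random.Random(seed)
    total = sum(incoming_radiance(cosine_sample_hemisphere(rng))
                for _ in range(n_samples))
    return albedo * total / n_samples

# Noise shrinks as 1/sqrt(N): 16 samples per pixel is visibly noisy,
# 65536 converges, and that's one bounce at one point, per pixel, per frame.
noisy     = estimate_diffuse_bounce(0.7, 16)
converged = estimate_diffuse_bounce(0.7, 65536)
```

The convergence rate is the whole problem: offline renderers throw thousands of samples per pixel at this integral, while a game has a handful of milliseconds for the entire frame, which is why realtime GI research keeps reaching for caches, probes, and hybrid approximations instead.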
Unfortunately, for that reason, most R&D people I'm acquainted with don't work on GI much. Not that I can blame them: rewriting motion blur for the fifth time is straightforward and provides improved, usable results. Trying out a new advanced lighting/GI method takes a long time and usually ends up with a result that's too slow, or shows too many ugly artefacts, or has too many restrictions on use, or a combination of those and more.