UVs for light maps

It’s good to hear everyone’s thoughts on this. I’m starting to think some of the more complexly dressed sets we’ve seen on YouTube may have had a lot of non-lightmapped furniture that was simply left dynamic. I could be wrong, but that would at least work for generating rendered videos. It wouldn’t be a good option for VR walkthroughs or for building levels aimed at a wide audience, though. I’m sure many of you, like me, have built scenes that will only run on a really beefy machine just to see where that barrier is. I spent about a week and a half unwrapping various pieces of furniture I might need, and while I got a lot done, I didn’t feel like I got very far. The speed gains in render time and in tweaking lighting and materials certainly get beaten down by the process of generating GOOD UVs for lightmapping. What’s worse is that many people would most likely be performing this intensive process on commercial assets they’ve purchased from a stock site or a vendor, so that work isn’t really re-sellable on an asset market.
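For anyone facing the same batch-unwrapping grind, here is a minimal sketch of how a second lightmap UV channel could be generated for a batch of selected meshes using Blender's bpy API. The channel name, angle limit, and island margin below are assumptions for illustration, not a tuned setup, and an automatic unwrap like this won't match hand-made lightmap UVs in quality.

```python
# Minimal sketch, assuming Blender's bpy API: add a dedicated lightmap UV
# channel to every selected mesh and auto-unwrap into it, leaving the
# original texture UVs untouched.
import math
import bpy

LIGHTMAP_UV_NAME = "LightmapUV"  # hypothetical channel name

for obj in bpy.context.selected_objects:
    if obj.type != 'MESH':
        continue

    # Create the second UV layer if it doesn't exist and make it active,
    # so the unwrap below lands in the lightmap channel.
    uv_layers = obj.data.uv_layers
    if LIGHTMAP_UV_NAME not in uv_layers:
        uv_layers.new(name=LIGHTMAP_UV_NAME)
    uv_layers.active = uv_layers[LIGHTMAP_UV_NAME]

    # Unwrap the whole mesh with a generous island margin so lightmap
    # texels don't bleed between islands at low lightmap resolutions.
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.uv.smart_project(angle_limit=math.radians(66.0), island_margin=0.02)
    bpy.ops.object.mode_set(mode='OBJECT')
```

You'd still want to eyeball the results and hand-unwrap hero assets, but something like this can at least get a whole scene of purchased furniture into a buildable state before you decide where the manual effort is worth it.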

The conversation about capitalizing on real-time interactive walkthroughs keeps coming up, and it’s a tough one. The toolset is very close to being a good match for the work, but the devil is in the details. It seems like there is a lot of room for misconceptions when taking on one of these jobs: how easily the assets can be re-purposed and for what types of use cases, what kinds of systems can run what you build, whether it will actually run decently in VR, and whether you can actually import a supplied model or will have to rebuild it in modular form.

I do think there is a huge application for building these interactive demos at high visual fidelity. An architectural firm might not be as interested in the ability to turn lights or running water on and off, but interactive firms that are developing space interactions could really take advantage of such offerings.