I was just curious how everyone here is approaching generating proper UVs for all the furniture needed to put together a decent archvis environment. That seems to be the real time sink in this work. Some collections out there come with UVs generated, but a lot of common furniture packs don’t. I know I’ve spent a good amount of time preparing different assets, but I’ve seen posts on here with way more furniture populating a set than I’d attempt. I’m just curious if the bulk of what we’re seeing is being unwrapped by hand or if I’m just missing something.
For me, it’s making arch-viz in UE4 almost impossible. It’s just too hard and time-consuming to UVW unwrap our high-poly models (especially when we buy pre-made assets, which happens all the time). I’m going to stick to V-Ray. The major appeal of UE4 to me was cutting render times/costs, having good visual effects, and making movies with Matinee, but I don’t think it compensates for the time sink of the lightmapping. I’d rather let my computer render a scene overnight by itself than spend 10 hours UVW unwrapping everything manually.
I’ll keep an eye open for UE4 though, because I think it’s fabulous software. I really like the interface, the blazing-fast viewport, the material editor, the tutorials, the support, etc. I know Otoy (Octane Render) is making a plug-in for real-time path tracing in Unreal Engine 4. That could be interesting if we can use it to make Matinees or something.
The automatic lightmap UV generation on import in UE 4.5 works pretty **** well. The whole lightmap business was a turn-off for me arch-viz-wise prior to 4.5 too, but it works 90% of the time now.
If it doesn’t work, though, just bring it into 3ds Max and auto flatten it. That also works about 80% of the time, even with complex geometry like high-poly blankets, and it doesn’t take long at all.
If that still doesn’t work, then you’ll either have to use a different mesh or switch that particular piece of furniture to movable instead of static (then tick the ‘light as static’ option in its properties so it still casts shadows from static lights).
I personally don’t notice a huge difference in light quality between a movable vs static object after building.
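For anyone curious what the “auto flatten” step a couple of posts up actually does, here is a minimal Python sketch of the idea: project each triangle onto the axis-aligned plane closest to its face normal, then squeeze everything into the 0-1 UV square. This is only the projection step; real tools (Max’s Flatten Mapping, UE4’s generator) also split, rotate, and pack islands with padding, and all function names here are my own invention, not any tool’s API.

```python
from typing import List, Tuple

Vec3 = Tuple[float, float, float]
Tri = Tuple[Vec3, Vec3, Vec3]

def face_normal(a: Vec3, b: Vec3, c: Vec3) -> Vec3:
    # Cross product of two edge vectors gives the (unnormalized) face normal.
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def flatten(tris: List[Tri]):
    """Planar-project each triangle along its dominant normal axis, then
    fit everything into the 0-1 UV square with one uniform scale."""
    raw = []
    for a, b, c in tris:
        n = face_normal(a, b, c)
        axis = max(range(3), key=lambda i: abs(n[i]))  # drop this coordinate
        raw.append(tuple(tuple(p[i] for i in range(3) if i != axis)
                         for p in (a, b, c)))
    xs = [u for tri in raw for u, _ in tri]
    ys = [v for tri in raw for _, v in tri]
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    x0, y0 = min(xs), min(ys)
    return [tuple(((u - x0) / scale, (v - y0) / scale) for u, v in tri)
            for tri in raw]
```

Because every face gets its own planar projection, hard-angled furniture flattens cleanly, while smooth curved surfaces end up as many tiny islands, which matches the mixed results people report.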
I definitely feel that UE4 is getting more efficient than static V-Ray renders for a bunch of reasons, primarily that you don’t have to click render every time you need a new ‘picture’. And unlike V-Ray, you can apply post-process effects like DOF blurring on the fly without needing to rebuild anything!
E.g., my WIP:
Nice scene! I’ll probably give it another try eventually! I think people are overusing the DoF effect in most UE4 scenes though… is it to hide some artifacts/light bleeding, or purely aesthetic? Probably both hehe!
It is possible, and it works pretty well, I must say. Of course V-Ray is better when it comes to rendering quality, but UE4 gives you something different: the interactivity. Take this example: https://www.youtube.com/watch?v=eTt7AGIpV2I You can also use an Oculus Rift for an even better impression. I think it’s worth spending time on unwrapping.
I haven’t tested the automatic lightmap UV generation in UE 4.5 yet, but if it works as JLO describes, I need to check it out ASAP.
Simply flatten mapping my UVs never gave me good results, though. I’m not sure how you end up with clean, accurate shadows that way. Usually we would need as few UV islands as possible, placed carefully, etc. That’s what is time-consuming to me… Maybe I’m trying too hard to get perfect shadows (or I’m utterly bad, it’s quite possible hehe), but when you come from offline rendering it’s hard to adapt. I tried many times to unwrap a high-poly chair, and there was ALWAYS something wrong once it was imported into UE4.
Have you tried Unwrella? http://www.unwrella.com/
Definitely try it out! It’s actually quite intelligent now (most of the time)
Strange, whenever I tried to flatten my walls or ceilings or floors, it never really worked properly. I flattened some furniture and it generally works out pretty well, but for the most part I’ve relied on the auto UV’s lately unless it stuffs up.
Can anyone speak to how 3ds Max and something like Unwrella compare to Modo? I do the bulk of my work in C4D, but I have slowly been making a shift to Modo for its great modeling tools, and also because it’s much more capable at UVs than, say, Cinema. I know there isn’t a magic bullet out there, but I still feel there is probably a king of the hill.
I’m quoting Tim Hobson (Unreal Engine Support): “If you have a specific need to have your LMs set up a certain way it’s probably best to manually create those. It’s good to remember that the LM is being generated based off the UV islands of the texture UV. So the layout will be organized based on those UV islands.”
You see, this is my problem currently. It’s not the lightmap itself, it’s the texture UV. Since the auto lightmap is based on the texture UV, you’ve got to have a good one to start with. For arch viz we can’t use low-poly models; they look bad. Say I have a 12-story building with many, many parts. Should I unwrap every part individually? That would take forever, and I can’t use flatten mapping because there would be a million little islands. Unwrap it all at once? Even worse. How can we possibly do that? I’m talking about buildings with 300-400k polygons, if not more. If there are too many islands, the lightmap UV ends up with bad shadows. I also have furniture with 80k polygons, TurboSmooth, many small parts like screws, etc. A pain in the *** to unwrap, literally. I could use “game” assets, but they will never look realistic enough IMO.
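To put rough numbers on the “too many islands means bad shadows” problem, here is a back-of-envelope Python sketch. The grid-with-padding model is my own simplification for illustration, not how Lightmass actually packs islands; the 2-texel padding default is just an assumed figure.

```python
import math

def usable_fraction(resolution: int, islands: int, padding: int = 2) -> float:
    """Rough fraction of a resolution x resolution lightmap left for actual
    shading detail, assuming islands pack into a near-square grid and each
    island needs `padding` texels of border so baked shadows don't bleed
    between neighbouring islands."""
    per_row = math.ceil(math.sqrt(islands))
    cell = resolution / per_row          # texels available per island cell
    inner = max(cell - 2 * padding, 0)   # texels left once the border is paid
    return (inner * inner * islands) / (resolution * resolution)
```

With 16 islands on a 64-texel lightmap, a bit over half the texels carry shading; with 400 islands on the same map, essentially none do. That is roughly why an auto-flattened 80k-poly chair, which fragments into hundreds of islands, bakes blotchy shadows unless the lightmap resolution is cranked way up.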
We’ve only seen quite simple geometry so far in arch viz projects. It’s mostly small interiors with some (sometimes complex, I guess) furniture.
If we had a solid dynamic GI solution, my god, it would be nice!!! Still gonna keep an eye open for this plug-in.
I’m not sure if we can sacrifice a lot of visual fidelity for some interactivity. Sure, interactivity is cool, but do you think there are many potential clients for this?
For your type of setup I can see where the problem would be for sure.
To discuss this a little further, as I want to be clear: when you’re making your texture UV, it does not have to be confined to the 0-1 space. You can have your UV islands exist outside this area. There are some caveats to consider with this UV, though, if you plan to use it for your lightmap generation within UE4.
Let’s take a spherical or cylindrical object, for instance. You would want to split some edges to get a nice pelt or flat mapping for these objects to be usable for generated lightmaps. If you don’t split the edge on a cylinder, the lightmap generation feature will still report overlapping UVs for that object, because it will not split those edges for your mesh (at least not yet; hopefully something will be added in the future to alleviate this).
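The cylinder seam case can be sketched in a few lines of Python: cylindrical mapping derives U from the angle around the axis, so any face straddling the seam gets U values that wrap from near 1 back to near 0 and fold over the rest of the shell. The wrap test below is a stand-in I made up for the kind of check an overlap warning performs, not UE4’s actual code.

```python
import math

def cylinder_uv(p):
    """Map a point on a cylinder (axis along z) to UV: U is the angle around
    the axis scaled into 0..1, V is simply the height."""
    x, y, z = p
    u = (math.atan2(y, x) / (2 * math.pi)) % 1.0
    return (u, z)

def wraps_seam(tri_uvs, threshold=0.5):
    """A UV triangle whose U span exceeds `threshold` almost certainly
    straddles the angular seam and will overlap other faces unless its
    edges are split into a separate island first."""
    us = [u for u, _ in tri_uvs]
    return max(us) - min(us) > threshold
```

Splitting the seam edge before generating lightmaps gives those faces their own island, which is exactly the manual fix suggested above.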
For something like a building you may want to consider a modular workflow for the structure. It can take a little more time to work out, but the gains can outweigh the negatives if you plan to use static lighting and want the best LM resolution without having to pump up the resolution in UE4.
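To illustrate the modular point with some assumed numbers (surface area, resolutions, and module count here are all made up): splitting one big facade into modules that each carry their own lightmap multiplies the texel density you can reach, because each module’s lightmap covers far less surface.

```python
def texels_per_sq_meter(surface_m2: float, lightmap_res: int,
                        pieces: int = 1) -> float:
    """Texels of lightmap available per square meter of surface, assuming
    the surface is split into `pieces` equal modules and each module gets
    its own lightmap of lightmap_res x lightmap_res texels."""
    per_piece_area = surface_m2 / pieces
    return (lightmap_res ** 2) / per_piece_area

# One monolithic 2000 m^2 facade on a single 1024px lightmap...
mono = texels_per_sq_meter(2000, 1024)
# ...versus the same facade as 40 modules at a modest 256px each.
modular = texels_per_sq_meter(2000, 256, 40)
```

Under these assumed numbers the modular version ends up with roughly 2.5x the shading detail per square meter, even though each individual lightmap is small, which is the “best LM resolution without pumping up the resolution in UE4” trade-off described above.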
If you or anyone else have any issues/concerns feel free to ask and I’ll do my best to help out!
It’s good to hear everyone’s thoughts on this. I’m starting to think some of the more complexly dressed sets we’ve seen on YouTube may have had a lot of non-lightmapped furniture left dynamic. I could be wrong, but that would at least work for generating rendered videos. It wouldn’t be a good option for VR walkthroughs or for building levels for a wide audience. I’m sure many of you, like me, have built scenes that will only run on a really beefy machine just to see where that barrier is. I spent about a week and a half unwrapping various furniture I might need, and while I got a lot done, I didn’t feel like I got very far. The speed gains in rendering time and in tweaking lighting and materials certainly get beaten down by the process of generating GOOD UVs for lightmapping. What’s worse is that many people are most likely performing this intensive process on commercial assets they’ve purchased from a stock site or a vendor, so that work isn’t really re-sellable on an asset market.
The conversation about capitalizing on real-time interactive walkthroughs keeps coming up, and it’s a tough one. The toolset is very close to being a good match for the work, but the devil is in the details. There’s room for a lot of misconceptions in taking on one of these jobs: how easily the assets can be repurposed and for what use cases, what kinds of systems can run what you build, whether it will actually run decently in VR, and whether you can import a supplied model or will have to rebuild it in modular form.
I do think there is a huge application for building these interactive demos in high visual fidelity. An architectural firm might not be as interested in the ability to turn lights or running water on and off, but interactive firms who are developing space interactions could really take advantage of such offerings.
From what I’ve seen, only a few arch firms are interested in moving forwards with technology in terms of visualisations. The industry just seems to be a bit stubborn and a lot of them are quite fine with just using Vray and conventional static renders. Hell, some of them are still resistant to using Autodesk Revit.
I’m working on my graduation project with an architectural firm and UE4, and they were more than happy to supply the CAD model of one of their current projects. Unfortunately it came from Vectorworks, which I had no familiarity with, so it took a bit of shuffling around to get it to work. The model wasn’t very complete either; they don’t particularly pay attention to interior details like skirting boards, door frames, doors, etc., so all of that had to be modelled by hand (as per the customer’s finishing schedule). I had to extrapolate and model the terrain data from the surveyor’s CAD file as well, which was a whole other issue.
But yeah, my firm/client wasn’t too fussed with the interactive/blueprint component of UE4 like switching materials or turning on lights or closing doors. They’re much more interested in:
- How real and accurate you can make the environment
- How efficient it is in terms of workflow
- How fast can you create an environment
Basically, they’re not interested in UE4 lol…
I think real-time viz is more interesting for big projects, like a museum, an airport, a stadium… where there’s a sense of “grandeur”. Not too sure the 800 sq ft condo is THAT interesting to explore in 3D. Also, exterior scenes are more impressive in real time than interiors, I think.
You’re right, it might not be exciting to explore a random residence to most people. But for a client who hired an architecture firm to design their dream home it probably is. Having the ability to walk through your new home before it’s even built is really appealing to a few people. Especially since it can also help them see things about the design which might not be so easy to see in normal static Vray renders.
To me I’d probably be more interested in exploring an apartment/condo/mansion than an airport or stadium because they’re both kind of repetitive buildings? A museum/zoo/aquarium would be amazing though.
The killer feature that will make UE4 really suitable for archvis without the lightmap mess is a good real time GI. I hope this will get the job done soon: http://www.geforce.com/whats-new/articles/maxwells-voxel-global-illumination-technology-introduces-gamers-to-the-next-generation-of-graphics
Amen to that. Realtime GI could potentially open UE4 up to more archviz people.
I think the real potential of UE4 for archvis is the interactivity. But I’m not talking about walking around and observing the place from different angles; I’m talking about moving furniture, drastically changing colors and textures, adding and removing objects. If you set all your meshes to movable in order to get that kind of interactivity, the quality drops a lot. It needs a good realtime GI system to deliver good results.
Let’s hope that VXGI will bring a nice future for us
I would like to know how Lumion handles imported geometry. In Max we don’t need to add a second UV channel when exporting to Lumion.
For real-time brute-force GI you need four Titan GPUs, and for a complex scene you have to wait several minutes for each frame.
I work every day with Octane.
I don’t know if Otoy will develop an automatic baking pass; that’s the only way to get fast interaction. Forget about noise-free spectral lighting at 60 fps.