I boosted the indirect lighting and the general "feel" of the scene quite a bit using post process volumes, and the results are very nice (and tweakable in real time, which is really handy once you've solved the light leak problem).
I would rather use tricks than chase a result as close to 100% V-Ray-like as possible that takes 50 hours to render.
I would also like to include something from RealtimeVR:Biotic explaining the lighting process they use… nothing out of this world, but it could help.
I am sure someone will come up with a solution for more "optimized" settings in the near future, so we can get decent quality with decent build times. I suspect that boosted settings like a 0.15 static lighting level scale, which increase build time a lot, also spend extra calculation time on things that make no noticeable difference. In other words, we are probably wasting a lot of build time without even knowing it. But then again, I might be wrong.
However, if this is how build times are going to be, I will probably have to start looking into building a home render farm like the one mentioned earlier. It seems kind of expensive, but it would probably be worth it. I am curious: if with that horsepower it took him 40-50 minutes to build that apartment, how long would it take with a single i7-4790K?
I have a dual Xeon E5-2670 v3 with 32 GB of RAM. My latest work, an apartment, took about 20-25 minutes to render, but the Riviera House scene took 5 hours. On the PassMark "Intel vs AMD CPU Benchmarks - High End" page you can compare your CPU speed with mine.
The NumHemisphereSamplesScale parameter alone is not enough to avoid lighting problems; in some cases you also have to play with the photon search radius. With a good balance of these two parameters you can speed up your build time.
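To illustrate that balance, a sketch only: I'm assuming the "photon search radius" maps to the *SearchDistanceScale settings in BaseLightmass.ini, and the values below are placeholders to experiment with, not recommendations.

[DevOptions.StaticLightingProductionQuality]
; More hemisphere (final gather) samples = smoother indirect lighting, longer builds
NumHemisphereSamplesScale=8
; Smaller photon search distance = tighter, more accurate lookups;
; too small causes splotches, too large blurs lighting across walls
DirectPhotonSearchDistanceScale=.5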
As I said, I'm going to publish a practical guide for this process in the next few days.
Thanks, I went over that list last night comparing performance and prices. Before actually deciding to buy, I would make a new topic and get more details on the best option for UE4's Lightmass: does the number of cores matter more than GHz, would 10 older CPUs be faster than 2 new ones, that kind of stuff. From what I know so far, RAM has no impact on speed; you just need enough to load all the data.
But before that I want to see what comes out of this thread: conclusions on how to tackle Lightmass in different environments, build time optimizations, etc.
Many thanks for taking the time on the upcoming guide!
If we could make the Swarm Agent able to use cloud computing (Amazon, etc.), that could solve a lot of problems. It would be very effective because, unlike rendering a movie, we don't need to render 36,000 HQ frames, which would cost serious money; we just need a quick boost in computing for a couple of hours. What takes 22+ hours on our PCs could be done very fast on a huge farm full of Xeons. We wouldn't even need to optimize or change Lightmass: just crank everything up, spend a couple of dollars, and get a quick build. Time is money!
This must be possible because…
- We will need static lighting for a while because of VR; forget dynamic GI.
- We will need to produce large-scale projects to be competitive in this market, with the same quality as small interiors.
I don't want to be negative, but a lot of the time I don't think Unreal is going to be the long-term solution for archviz unless they can get away from lightmaps. In a real-world scenario it's just too much labor for the fees; I sometimes think I'm wasting my time with the software. Just my gut-level thoughts.
Currently the archviz "queen" is Lumion… it's so easy to use that it's almost embarrassing, and rightfully so, because it does what it does and the results are very, very nice. You also get a viewer that can be sent to the customer so they can check everything by themselves, which is a pretty good solution.
No VR for now, but I think they'll add that feature in the near future.
UE4 has stunning features and can be adapted to do anything, but it is far from being a 100% bulletproof solution for archviz, mainly because of the time it takes to build lighting, which is the critical point 99% of the time.
There are realtime GI solutions, but they are far from usable on a daily basis, especially for showing to customers, and of course it takes a beast of a rig to run something like that…
I wonder whether Enlighten's realtime GI is well implemented or still in its early stages…
I guess it really depends on the project. By "too much labor" do you mean unwrapping lightmap UVs?
For my projects I tend to just use automatically generated lightmap UVs, which work really well about 90% of the time.
After that it's just a matter of making the materials (for which you can build libraries for future projects), setting up the lighting, and adding some interactivity (both of which you can store in a template).
But maybe we should discuss this in another thread, because it's getting a bit off topic and we should save this one for awesome Lightmass explanations.
If you look at my previous post (I think it's #66 on the previous pages), there is nothing stopping you from doing cloud rendering with Lightmass right now. Of course, you have to know how to set up the virtual machines and install the required programs, much like you would when setting up a LAN render farm. There is also a bit of networking involved, setting up VPNs and so on, so all the remote machines can see each other, or at least the coordinator Swarm machine. The only issue, as you can see from my screenshot, is that the cloud swarm machines (Cloudswarm1-4) actually ended up doing very little work compared to the local machines. It does work better for bigger scenes, but we are still seeing the cloud computers finish their workload very quickly compared to our "slower" local machines here. It would be really nice if they could re-engage and take some of the remaining work from the machines that are still processing; essentially auto-balancing.
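For anyone attempting the same setup, the relevant knobs live in the Swarm Agent's Settings tab (saved to SwarmAgent.Options.xml). A rough sketch; the group name and address here are made up:

AgentGroupName = CloudFarm              ; group this machine joins
AllowedRemoteAgentGroup = CloudFarm     ; group this machine will accept work from
AllowedRemoteAgentNames = *             ; or an explicit list of machine names
CoordinatorRemotingHost = 10.8.0.1      ; the coordinator's address over the VPN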
In summary, it still seems you may be better off with very high CPU density in one or a few machines, similar to a dual-Xeon setup. Still testing though… will post more results soon.
Those leaking artifacts look like insufficient photons. Try jacking up DirectPhotonDensity, IndirectPhotonPathDensity, IndirectPhotonDensity, and IndirectIrradiancePhotonDensity in BaseLightmass.ini by a factor of 10 or so.
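For reference, a sketch of that edit; the multiplied values assume the stock densities in my copy of BaseLightmass.ini (350 / 5 / 600 / 300), so check your own defaults first:

[DevOptions.PhotonMapping]
; ~10x the stock photon densities to fight leaks caused by undersampling
DirectPhotonDensity=3500
IndirectPhotonPathDensity=50
IndirectPhotonDensity=6000
IndirectIrradiancePhotonDensity=3000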
I am still looking into the Skylight-only case. I found a bug in the adaptive sampling such that it was not doing much with IndirectLightingQuality > 1. I have also implemented portals and am testing those out. These things take time though; it will be some weeks before I have good results, mostly because I have to make sure no quality or build time regressions occur across many maps.
IndirectPhotonEmitConeAngle is used in combination with IndirectPhotonPathDensity to channel indirect photons into the small windows and doors where they are needed most. In a scene like that (where the indirect lighting from local lights isn't doing much), I don't know why it would matter.
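If I'm reading that right (my interpretation of the above, not a confirmed recipe), the pairing for an interior lit through small openings would look something like this; the values are illustrative only:

[DevOptions.PhotonMapping]
; More indirect photon paths so the emitter can find the small openings
IndirectPhotonPathDensity=12
; Narrower emission cone concentrates photons around the paths that were found
IndirectPhotonEmitConeAngle=15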
The portals don’t change the brightness, they just tell Lightmass where to look closely for lighting. In this case with all the skylight coming in small holes they make a big difference. In a more open scene they wouldn’t be any good because the openings to the sky are large.
Do you guys not want to use a directional light at all? Is there a reason for that?
Also, do uncompressed lightmaps make a noticeable difference? I have not seen much difference, but I wonder whether, with cranked-up settings, it's better not to compress them…
[DevOptions.PhotonMapping]
; The stock 400 gives a smooth enough result without requiring a very large search; doubled here for extra smoothness
NumIrradianceCalculationPhotons=800
; Allocating most final gather samples towards importance samples gives a good result as long as there are enough first bounce photons
FinalGatherImportanceSampleFraction=.8
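; Higher density of direct photons for more accurate direct shadow transitions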
DirectPhotonDensity=700
; Same as DirectPhotonDensity, since currently direct photons are only used to create irradiance photons
DirectIrradiancePhotonDensity=700
[DevOptions.StaticLightingProductionQuality]
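; Emit many more direct photons; the smaller search distance below requires a higher density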
NumDirectPhotonsScale=10
; Decrease direct photon search distance so that we will have more accurate shadow transitions. This requires a higher density of direct photons.
DirectPhotonSearchDistanceScale=.2
; Need a lot of indirect photons since we have increased the number of first bounce photons to use for final gathering with NumImportanceSearchPhotonsScale
NumIndirectPhotonsScale=32
NumIndirectIrradiancePhotonsScale=4
My scene has a Directional Light in addition to the Skylight:
I have cranked up Indirect Lighting Intensity to 5 for both.
Most of the artifacts, including light leaks, are removed with these settings, and light build times are also at acceptable levels (my scene has a lot of complex objects and it still took only around 50 minutes to 1 hour).
Since I did these experiments on a commercial project I am currently working on, I am not posting screenshots. I will try to post screenshots of the test level everyone here is using.
P.S.: These are not final values; I am still dialing them in.