
Raytracing & Light Baking?

**Step 1:**
Ray Tracing is turned on in the PostProcessVolume for ALL features (GI, Reflections, Translucency)

**Step 2:**
I click “Build All”
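For reference, the project-side prerequisites behind Step 1 can be sketched like this (settings as documented for UE 4.22+; verify against your engine version — the per-feature toggles themselves live on the PostProcessVolume):

```ini
; DefaultEngine.ini — hedged sketch of the ray tracing prerequisites (UE 4.22+)
[/Script/Engine.RendererSettings]
r.RayTracing=1               ; enable the ray tracing pipeline (needs DX12 and an RTX-capable GPU)
r.SkinCache.CompileShaders=1 ; skin cache shaders are required when ray tracing is enabled
```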

Question:

  • After the build is done, can you turn off Ray Tracing, because all the lighting information is baked for the final game?
  • Or do you still need to keep it on within the PostProcessVolume for GI, Reflections, Translucency, etc.?

Thanks for clarification & further explanation!

Lightmass doesn’t use any of the new ray tracing features (yet). So what you see with ray tracing isn’t going to be accurate to what you’ll get after a Lightmass bake. It might be close enough to get a rough idea, but they are completely separate systems. Reflections don’t get baked into Lightmass.

Sooo… can you say… when using Raytracing… there is actually no more need for any light baking, right?

Raytracing is entirely real-time; right now that’s the main advantage of it: not having to use lightmaps while still getting good lighting.

But some people will still want to use lightmaps, and the raytracing tech can be used to speed that up, so that will come at some point.

So that means… when you turn on Raytracing, you don’t need to bake anymore?
Is this a clear yes? :rolleyes::eek:

Not a clear yes, although that is the goal eventually. At the moment, the quality vs. performance is not quite there, so some pre-computed things may still be in order, like baking the Sky Light for occlusion but using RTGI or SSGI for the sun.

But depending on what you’re doing, you may be able to do just dynamic lighting with raytracing features and not need to bake at all. Just depends on your use case.

The use case:
I would like to use Unreal to create animations (offline). If my frame rate goes down to 1 FPS, that’s fine.
I prefer maximum quality, and I would like to raytrace everything. I don’t need game-engine speed or VR speed.

**For that goal and use:**

  • Do I still have to consider Lightmass?
  • Do I still need to bake?

Thanks a lot!

Well if you don’t care about performance, then by all means use raytracing :slight_smile:

Thank you, appreciate it. But one thing is still unclear:

Why does the engine keep telling me that I need to build objects, and that I have to build lighting?

If I am going to raytrace everything, is this still something you have to do?

Or does this become obsolete because I am raytracing everything?

we will see raytraced baking soooooooooooooooon

What does that mean?

new build lighting system

@BernhardRieder Currently, when building lighting for static or stationary lights, the CPU cores are used for the calculations. The new system they mention on the slide will use the GPU instead, so the process will be faster, and it will use the same algorithm as the realtime raytracing, which will make it look like VRay and similar renderers. I didn’t see an ETA for it, though.

More specifically, it’s about using the raytracing cores to accelerate light building. They’ve already got a GPU light baker, but it’s a lot faster with the raytracing cores.

Do you know whether this DXR baking depends on lightmaps or not?
I hope this new technology works without lightmaps! :frowning:

@NilsonLima
@darthviper107

DXR and light baking are not directly related. You can use DXR with light baking, or you can use it without.

DXR light baking is not integrated into the engine yet. DXR light baking will likely depend on UV maps, but we’ll see when it actually comes out.

Baking lighting will always be the better option when it comes to performance, with the downsides of having to bake, using extra storage for lightmaps, etc. But you can always check the “Force No Precomputed Lighting” checkbox in World Settings and not use baked lighting.
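Alongside that per-map World Settings checkbox, there is also a project-wide switch (a hedged sketch; this is the standard UE4 “Allow Static Lighting” rendering setting, which removes lightmap data from shaders entirely):

```ini
; DefaultEngine.ini — disable static lighting project-wide, so no lightmaps
; are built or stored (Project Settings > Rendering > Allow Static Lighting)
[/Script/Engine.RendererSettings]
r.AllowStaticLighting=False
```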

@Farshid I will try to explain, with my best English (which is not my native language):

There is a fundamental difference between game engines and renderers like VRay, Arnold, and the like. All game engines (lightmaps are not a UE4 exclusivity) must aim for the best real-time processing times for gaming. A complex lighting scenario with different types of lights, and with both static and movable objects, can be a challenge to render into a single frame at a speed useful for gaming. People like to game at 100 frames per second (FPS), which means that all lighting calculations are done 100+ times per second and output to the screen. Static renderers and interactive renderers don’t need that kind of throughput.

So, in order for game engines to be that fast, they resort to tricks and tweaks in the scene. Some of these tricks are more suitable for particular applications on desktop, mobile, or VR, because these platforms differ in processing power or have a specific performance constraint to meet (VR needs 90 FPS per eye to be considered a good application - google for the reasons).

So, for gaming, we have different light types to be considered in shape and behavior (static, stationary, and dynamic lights), and also objects that will never move as well as objects that not only move but also deform. In the case of non-movable objects, the trick for quickly computing shadows from static and stationary lights is called lightmaps. The lightmap is a hint that lets the engine quickly compute the shadows, and the lightmap resolution controls how smooth or sharp those shadows can be. For movable objects, static lights do not produce any effect, but stationary lights will affect them.

What remains is dynamic lights, where the light source changes position over time and can change its intensity as well, producing a completely different shadow. Dynamic lights need to use only the geometry to create the shadows, which is expensive, so a game engine needs lots of optimizations to extract performance in these situations: only objects near the source are affected; far non-movable objects will still rely on the lightmap for their shadows, since the light source is far; etc.

So, in games you will often see scenes with a mixed approach to lighting, purely for performance reasons. Depending on the processing power this can look good, but it will hardly be as realistic an image as a static scene produced with a static renderer. With static renderers like VRay or Arnold, you need to re-render the frame whenever you change the camera angle; some renderers have an interactive mode, meaning that while you move the camera they keep computing the frame data until the movement stops, up to a predetermined maximum time or number of samples.

Now, with realtime raytracing (possible because of new NVIDIA hardware), we are getting the ability to compute not only shadows but also reflections fast enough for games, and these cases will still use a mix of old and new techniques to extract performance. The new interactive GPU light baking will behave according to the settings you define and use the realtime raytracing hardware to do it faster; by its nature, it can rely only on the geometry and materials in the scene to produce the shadows and reflections, meaning no need for lightmaps. Even with strong hardware, the time needed to process each frame will increase, but since it is done on the GPU, it will be fast enough to allow similar interactive cycles while keeping fidelity. So, this feature is more for archviz than gaming, and you can expect the pipeline to be similar to what VRay and others require.
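The per-feature mix of old and new techniques described above can be switched via console variables. As a hedged sketch (variable names as they appeared around UE 4.22–4.25; list the current ones by typing `r.RayTracing` in the in-editor console), the raytraced effects can be forced on from ConsoleVariables.ini:

```ini
; Engine/Config/ConsoleVariables.ini — per-feature ray tracing overrides
; (names from the UE 4.2x era; verify against your engine version)
r.RayTracing.GlobalIllumination=1 ; raytraced GI instead of baked GI / SSGI
r.RayTracing.Reflections=1        ; raytraced reflections instead of captures / SSR
r.RayTracing.Translucency=1       ; raytraced translucency / refraction
```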

So, the conclusion is that lightmaps will be required for all techniques where you need to save time, and you are responsible for choosing where they are used for that purpose. Otherwise, the engine will rely on the geometry and materials and spend more time per frame to compute the scene.

PS: with the increased power we will see in graphics cards over the coming years, I can’t see a reason for someone who models and textures objects for archviz not to make the lightmaps properly, because those objects can become a source of income if sold for game projects as well. Just a thought.

Archviz & 100% Raytracing:

I am also interested in using Unreal with the Raytracing options. I mean, Raytracing everything (GI, AO, Reflections, Refractions/Translucency, etc.)
For that case, I don’t need 100 FPS - I am already super happy if I am able to render ONE Full HD frame within 100 seconds. :smiley:

I am also confused about raytracing & the need for baking.

If I am going to use Raytracing for everything, why would I need to bake something if I am not interested in 100fps and if I am not interested in creating a performance optimized level that can be used for gaming or VR?

And one more topic:

  • If you are using 100% raytracing, do you still need to unwrap everything?
  • And does the engine still require uvw’s for every single object?
  • Or does this become obsolete when using Raytracing?

Is there any simple Video Tutorial from Unreal, that shows currently a workflow to use Raytracing on everything?

Something simple, like a Studio Setup that shows a product render and covers the Project Settings, the Lighting, Shading and Rendering fully Raytraced.

That’s the real deal & the core topic when it comes to Arch Viz & using the Unreal Studio Version.

Thank you so much! :slight_smile:

If you’re doing real-time raytracing then you aren’t baking to lightmaps. Realtime raytracing does not use lightmap UV’s since it’s not baking the lighting.

@remozseo I know people come from different backgrounds (industries), and sometimes there can be some confusion regarding the terms in use. I do my best to keep their use correct, and sometimes it takes more words to explain things, since we are not able to just show it in the form of a how-to. Let’s go:

  • regarding the “need of Baking” -> The term is sometimes used just to describe the process of rendering a scene snapshot. In a gamedev scenario it is used for building static shadows from static and stationary lights, which in the end will “bake”, or produce, textures representing those shadows and also reflections. In realtime raytracing, consider it simply the whole process of producing the scene snapshot, like in VRay or Arnold.

  • regarding the “need of unwrap everything” -> There are two different processes involved in mesh creation after you model it. One is important and is called the Texture UV Unwrap; it tells the engine and the renderer how the texture is laid out on top of the mesh. It helps avoid texture stretching when you have a complex mesh shape with many textures applied to its surface. The better you do this job, the better the final appearance, avoiding visible seams, etc. The other important one, only useful for gaming workflows (for the optimization purposes I mentioned in my post above), is the Lightmap UV Unwrap, which is sometimes just a copy of the Texture UV Unwrap that the engine can make by itself, though that won’t work for all cases. The Lightmap UV Unwrap is not necessary for raytracing, since raytracing only cares about the geometries and materials in the scene.

  • regarding “does the engine still require uvw’s for every single object” -> If it is for the texture layout, then the answer is yes, even when using raytracing. If it is for the lightmap layout, then the answer is yes if the purpose is gaming, and no if you are raytracing everything.

Dealing with the Texture and Lightmap UV Unwrap process is not complicated for simple shapes, but for things like characters or machinery the effort can become quite daunting. Luckily there are tools that simplify the process; others are still experimental, and I know some people using this: (check the videos)

twitch.tv videos and/or

https://youtube.com/watch?v=0aGXmyt1mRE