McLaren Car Configurator Rendering Techniques - June 2nd, Live From Epic HQ

WHAT
Ryan, Zabir and Jeremy will join us to talk about the rendering techniques used in the creation of the McLaren Car Configurator shown at GDC 2016. High-precision vertex normals, custom cubemaps and clear coat normal map examples are just a few of the exciting features they’ll be exploring and explaining. Make sure your eyes are ready for this level of graphical intensity before viewing!

WHEN
Thursday, June 2nd @ 2:00PM ET

WHERE

WHO

  • Community Manager
  • RyanB - Sr. Technical Artist
  • Zabir Hoque - Sr. Rendering Programmer
  • Jeremy Ernst - Lead Technical Animator

Archive:

Omg, this is so exciting.

I am very interested in the workflow and rendering techniques needed for photorealistic car rendering. Clear coat reflection is actually what I feel makes or breaks the car material.

Great to see the Unreal team is able to share this with the community.

Awesome!

This is great! Will this be uploaded anywhere after streaming? - It’ll run at midnight for us here in BKK and I’ve got work the next day unfortunately :frowning:

Awesome!

I’d love to learn more about how you managed to switch between certain materials. Have you already released information about that in the past, or is there something to look out for in the future?

Ace :slight_smile: I’ll be tuning in.

All streams are available as VOD immediately afterwards in the Twitch Archive, and a few days later (after editing) on YouTube.

Question for Zabir, are there any plans for occlusion culling of dense meshes such as the car? If so, what are the possible approaches that would be tried?

Question about the second clear coat normal that maybe @Zabir.Hoque or @RyanB can answer, as I’m not at my Unreal computer at the moment…

Is the second normal available as a texture in post-processing? If it’s a GBuffer pass, could we use it to render custom data to that pass? My idea is to be able to use it like a ‘custom depth’ sort of thing for special uses. If that is the case, does the material have to be set to clear coat mode to use it, or can all materials access the second normal?

It is possible. Any material that is translucent should be able to access it.

The one catch is that it is always going to normalize and convert your input V3 into an octahedron V2. You would need to perform your own “Octahedron to Unit Vector” conversion, but thankfully there is a function for that already. You should get back your original value, but only normalized vectors will work.

To access the GBuffer inside of the custom node is a bit clunky right now, as you have to actually have a ‘SceneTexture’ node of some kind hooked up to the material graph in order for the shader to have access to the screen-space data variable type. You can just clamp it to 0 and add it to something so it won’t have any visual impact (though it will still cost you, sadly).

Once you have that you can use code like this:


// Inside a custom node; requires a SceneTexture node hooked up somewhere in the graph.
FScreenSpaceData ScreenSpaceData = GetScreenSpaceData(UV, false);
return ScreenSpaceData.GBuffer.CustomData.a;

Where UV is an input with a “ScreenPosition” node hooked up. That would return just the custom data alpha.

The octahedron gets stored as:

GBuffer.CustomData.a = oct3.x;
GBuffer.CustomData.z = oct3.y;

I just realized I also complicated the issue by storing it as a delta octahedron. That means it converts the ‘regular’ normal to an octahedron and stores only the difference between the two. It was done that way because otherwise, storing very smooth normals in the custom data resulted in significant faceting, since it is only 8-bit.

Here is how it is converting:
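A minimal sketch of the encode, reconstructed by inverting the decode code below (ClearCoatBottomNormal here stands in for the material’s bottom-normal input; the exact engine variable names may differ):

// Delta octahedron encode (sketch): store only the difference between the
// bottom normal and the regular normal, scaled and biased toward 8-bit range.
// Large deltas can fall outside that range, hence the wrapping issue
// mentioned further down.
float2 octBase   = UnitVectorToOctahedron(GBuffer.WorldNormal);
float2 octBottom = UnitVectorToOctahedron(ClearCoatBottomNormal);
float2 oct3 = ((octBottom - octBase) * 0.5) + (128.0/255.0);
GBuffer.CustomData.a = oct3.x;
GBuffer.CustomData.z = oct3.y;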

You can just use the actual 2 lines of code that the deferred pass uses to reverse those operations:


// Undo the scale/bias, then add back the base normal's octahedron coordinates.
const float2 oct1 = ((float2(GBuffer.CustomData.a, GBuffer.CustomData.z) * 2) - (256.0/255.0)) + UnitVectorToOctahedron(GBuffer.WorldNormal);
const float3 ClearCoatUnderNormal = OctahedronToUnitVector(oct1);

That should give you back the original input, assuming the input was a normalized vector. Note that this code won’t work in the custom node, though; you can reverse engineer it and use the material function for the octahedron conversions.
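For reference, the standard octahedron mapping those material functions implement looks roughly like this (a sketch, not necessarily the engine’s exact source):

float2 UnitVectorToOctahedron(float3 N)
{
	// Project the unit sphere onto the octahedron, then fold the lower
	// hemisphere onto the upper one.
	N.xy /= dot(1, abs(N));
	if (N.z <= 0)
	{
		N.xy = (1 - abs(N.yx)) * (N.xy >= 0 ? float2(1, 1) : float2(-1, -1));
	}
	return N.xy;
}

float3 OctahedronToUnitVector(float2 Oct)
{
	// Reconstruct z, reverse the fold, and renormalize.
	float3 N = float3(Oct, 1 - dot(1, abs(Oct)));
	if (N.z < 0)
	{
		N.xy = (1 - abs(N.yx)) * (N.xy >= 0 ? float2(1, 1) : float2(-1, -1));
	}
	return normalize(N);
}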

There are some issues when extreme normals are used in the underlying normal, as the delta octahedron does not wrap 100% correctly. It was determined to be barely noticeable with the type of assets this was made for, but it might be an issue if you are passing data. The wrapping is probably fixable but would add some cost (i.e., it probably needs to flip the Y if X > 1, or something like that). Most likely it needs the same checks that occur inside of the octahedron functions themselves. If that turns out to be important, we can fix it.

If only normalized vectors get passed through, then to pass RGB through we could encode it into a cubemap acting like an inverse LUT and look it up in the post process.
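A rough sketch of that idea inside a post-process custom node (ColorLUTCube and ColorLUTCubeSampler are hypothetical names for illustration, and oct1 is assumed to have been recovered as in the decode snippet above):

// Decode the unit vector that survived the GBuffer round trip...
float3 Dir = OctahedronToUnitVector(oct1);
// ...then use it as a direction into a cubemap whose texels map directions
// back to the RGB values they encode.
float3 RecoveredRGB = ColorLUTCube.Sample(ColorLUTCubeSampler, Dir).rgb;
return RecoveredRGB;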

The ideal thing would be to have custom buffers to render whatever we want not in the pass, much like the clear coat output.

What do you mean by ‘clear coat output’? Clear coat just uses the GBuffer custom data; it doesn’t have its own output path. If you just mean the ‘clear coat bottom normal’ node, that is just passing a new input to the material system, which doesn’t have anything to do with the GBuffer itself. The base pass still has to write that input data into the limited number of GBuffer channels.

Yes you could encode your colors into a cubemap, but it does seem like at that point you have quite a few hoops to jump through.

The only reason there are not more buffers is that each new one adds significant rendering cost, so they would be ‘ultra deluxe’ buffers that have to be disabled most of the time. It is hard to get rendering people to want to complicate the whole pipeline to support code that won’t be running in our games (and hard for them to argue for working on a task like that even if they wanted to).

Sorry, yes, I meant clear coat bottom normal. And yes, I see how it’s passing into the material system through the custom data. The bottom normal is still a render buffer on the GPU though, right? Not sure on the implementation. But they are accessible in post process, which is what I’m getting at. It would just be nice to have control over those buffers, their bit depth, resolution and what goes into them, even if it’s just called customGenericOutput that can be accessed in post processing.

Cheers

The problem is there are no free places to plug in any of that. All of the channels are spoken for in some way.

“The bottom normal is still a render buffer on the GPU though, right?”

No, not its own buffer. They are being written to GBuffer.CustomData. Custom data is a channel that is used for stuff like this; hair uses it as well. You can only use it if you know you are only going to write to it in one place. I.e., you could write your own input node and write stuff to custom data, but you might break hair, clear coat and a few other things.

Here’s how it could work from looking at the code…

Create an output node similar to…
UnrealEngine/Engine/Source/Runtime/Engine/Classes/Materials/MaterialExpressionClearCoatNormalCustomOutput.h

…and call it MaterialExpressionCustomOutput.h (as an example)

In /Engine/Source/Runtime/Engine/Private/Materials/MaterialExpressions.cpp, implement the simple Constructor, Compile, GetCaption and GetInput functions.

In /Engine/Shaders/BasePassPixelShader.usf… call that function (only for lit and unlit, as other shading models use the custom data)

line 840…

from this…


#if MATERIAL_SHADINGMODEL_UNLIT
	GBuffer.ShadingModelID = SHADINGMODELID_UNLIT;
#elif MATERIAL_SHADINGMODEL_DEFAULT_LIT
	GBuffer.ShadingModelID = SHADINGMODELID_DEFAULT_LIT;

to this…


#if MATERIAL_SHADINGMODEL_UNLIT
	GBuffer.ShadingModelID = SHADINGMODELID_UNLIT;
	GBuffer.CustomData.rgb = CustomOutput0(MaterialParameters);
#elif MATERIAL_SHADINGMODEL_DEFAULT_LIT
	GBuffer.ShadingModelID = SHADINGMODELID_DEFAULT_LIT;
	GBuffer.CustomData.rgb = CustomOutput0(MaterialParameters);

Ah yes, I didn’t know you meant with the shading model restricted to default lit. For some reason I thought you wanted to write to the custom data outside of the base pass itself (you mentioned “not in the pass” above at some point), which is a different usage case. Doing it like that is just like storing subsurface color.

In fact, you could prototype this for now in content by simply using a subsurface shading model, darkening the color, and then reboosting it in your material, at the cost of some precision. That way you wouldn’t be affecting the appearance much while using it.
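A minimal sketch of the read-back side of that prototype in a post-process custom node (the 0.25 darkening factor is an arbitrary example; GetScreenSpaceData usage follows the earlier snippet):

// With a Subsurface material, CustomData.rgb carries the subsurface color,
// so it can smuggle arbitrary RGB through the base pass.
FScreenSpaceData ScreenSpaceData = GetScreenSpaceData(UV, false);
// The source material stored Color * 0.25 to keep the visual impact low;
// reboost it here, at the cost of some 8-bit precision.
return ScreenSpaceData.GBuffer.CustomData.rgb / 0.25;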

Oh, and FWIW, “MaterialExpressionCustomOutput.h” is actually the container class that makes it possible to easily create these nodes without writing every single function out (written by Jack Porter for the landscape grass system).

I couldn’t watch the stream live yesterday, but just saw the recording of it. RyanB mentioned they used a vertex shader on the tires to get the bulge at the ground contact. Any chance you could give out more details on how you ended up doing that?

Thank you!

Just wondering, does anyone know when and where the development team will release the carbon fibre and car paint materials mentioned in the stream?

So on the stream I didn’t give a good answer, so here it is a little more coherently:

The occlusion systems that are traditionally used are still a benefit here, primarily GPU occlusion queries of the bounding boxes. This gives us a conservative list of the static meshes that are actually visible, and it was primarily what we relied on for the demo. This approach obviously leads to false positives, so to improve it we could expose proxy meshes that better approximate the mesh bounds. Ideally we’d auto-generate these meshes so as not to increase the artist workload.

Alternatively, it might be worth looking at building a BSP/KDTree just on the vehicle model itself, since it is nearly as complex as a level itself (however, since we’ll mostly be viewing it from the fringes, different optimization/pessimism trade-offs would be warranted).

The last approach that comes to mind is to let the GPU generate the draw lists itself and just have the CPU issue the occlusion queries and a single MultiDrawIndirect() (i.e. Vulkan/D3D12). This would eliminate the usual latency burden associated with occlusion queries and allow the GPU command processor to drop the meshes that fail the occlusion tests.
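As a very rough illustration of that last option, a compute shader could do the visibility test and compact the draw arguments on the GPU (every name here is hypothetical, the indirect argument layout is simplified, and bounds checks and HZB mip selection are omitted):

// Hypothetical GPU-driven culling pass: one thread per mesh tests a bounding
// sphere against a hierarchical Z buffer (assumed max-filtered, storing linear
// view-space depth) and appends the draw arguments of survivors, which the
// CPU then submits blindly via a single MultiDrawIndirect().
StructuredBuffer<float4> MeshBoundsSpheres;    // xyz = world center, w = radius
StructuredBuffer<uint4> SourceDrawArgs;        // per-mesh indirect draw arguments
AppendStructuredBuffer<uint4> VisibleDrawArgs; // compacted survivors

Texture2D<float> HZB;
SamplerState HZBSampler;
float4x4 ViewProjection;

[numthreads(64, 1, 1)]
void CullCS(uint ThreadId : SV_DispatchThreadID)
{
	float4 Sphere = MeshBoundsSpheres[ThreadId];
	float4 Clip = mul(float4(Sphere.xyz, 1), ViewProjection);
	float2 UV = (Clip.xy / Clip.w) * float2(0.5, -0.5) + 0.5;

	// Conservative test: compare the sphere's nearest view depth against a
	// coarse HZB mip covering its screen footprint.
	float SceneDepth = HZB.SampleLevel(HZBSampler, UV, 4);
	if (Clip.w - Sphere.w <= SceneDepth)
	{
		VisibleDrawArgs.Append(SourceDrawArgs[ThreadId]);
	}
}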

So, all that being said, we don’t have any active plans to implement these. As the Vulkan/D3D12 RHI implementations become more baked, we’ll likely be looking at the last option more.

I am also interested in those materials … when do you plan to release them?