I’ve got some questions about getting assets ready for Unreal.
Building Models
In general, what are some things I need to consider when building an asset?
----- Build using Real World Scale?
----- Building Collision Geometry in 3D Max vs. using Unreal Collision?
----- Exporting Quads, or Triangulated Mesh?
----- Second UV Channel for Lightmap?
----- Does the FBX size matter? Does the FBX get packed in the final build of the game, or does Unreal repack it as a UAsset or something? (An example, exporting a FBX with only the final model, or a standard FBX with Material Information, Groups, Shading Groups, Helper Objects, Camera, Animation)
----- Are there things I need to consider when making a model that will be interacted with in Unreal by a Player or AI?
----- What Texture Passes should I use for Unreal, and what’s the best way to pack them? (Two textures: RGB + Opacity (A), Normal (RGB) + Roughness?) Do I really need spec and ambient occlusion?
----- Are Tileable Textures more efficient?
----- Is using a Displacement Map OK? Can mid-level PCs, PS4, and Xbox comfortably run displacement?
----- Does Texture Encoding matter? (jpeg, png, tif, targa, psd)
----- What causes FPS to drop? I know it’s the rendering, but what specifically? A complex model? Shader? Large Texture?
-Always build in real-world scale
-you can create more accurate collision meshes in your 3D program, but it’s annoying to do
-everything gets imported as tris, so quads don’t matter
-first UV channel for your material, second for lightmaps; the lightmap UVs can’t overlap and must fit within the 0-1 UV space
-the FBX size matters for importing into UE4: if you have many meshes or a large mesh, it will take longer to import. When you import, turn off any automatic options you don’t want to use (automatic collision, automatic lightmap UVs), since those are part of what makes imports slow; there’s a small import script sketch at the end of this post. The data in the FBX gets packed into the UAsset file.
-for textures, you can use a texture’s color channels however you want, though within a project it helps to pack everything the same way.
You’ll need your diffuse, roughness, normal, and possibly metallic textures.
-yes, if you can tile a texture rather than using one very large texture, it will work better. You can use a mask in your material to restrict a tiled texture to a specific part of the UVs if you want. Using more materials will slow things down, so try to use just a single material per object.
-you can use displacement; you would then control how many subdivisions you get through the graphics settings
-all textures get converted to a compressed format that the engine uses. However, some formats are interpreted differently: PNG stores transparency differently than TGA. The empty (fully transparent) space still has to have a color, and PNG fills it in automatically, which can cause a fringe around the texture when you use the alpha. With TGA you can define the color in the empty space yourself so that doesn’t happen; there’s a padding sketch at the end of this post as well.
-the biggest thing that impacts performance is draw calls: every separate object is a draw call, and every material on an object is an additional draw call. To avoid that, try to combine simple meshes that use the same material, and combine objects that are in the same area. Remember that each object uses a lightmap, so if you combine too many things, the lighting won’t get enough detail even if you use a large lightmap. Also use systems like the Foliage system, which handles many small objects (like grass/rocks) and combines them automatically so that thousands of objects become one object. Usually it’s better to take a hit to memory than it is to increase the number of draw calls.
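To make the FBX point above concrete, here is a minimal sketch using the Unreal Editor’s Python scripting, with the automatic collision and lightmap UV generation switched off. The file paths are placeholders and the property names reflect my reading of the editor scripting API, so treat it as a sketch rather than a drop-in script.

```python
import unreal

# Sketch: import a static mesh FBX with the slow automatic steps disabled.
# Paths are placeholders; property names assume the editor Python API.
task = unreal.AssetImportTask()
task.filename = "C:/Export/SM_Crate.fbx"   # hypothetical export path
task.destination_path = "/Game/Props"      # where the UAsset will be created
task.automated = True                      # no import dialog
task.save = True

options = unreal.FbxImportUI()
options.import_mesh = True
options.import_materials = False           # skip material/texture creation
options.import_textures = False
options.import_animations = False
# The two automatic steps that tend to slow large imports down:
options.static_mesh_import_data.set_editor_property("auto_generate_collision", False)
options.static_mesh_import_data.set_editor_property("generate_lightmap_u_vs", False)
task.options = options

unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
```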
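On the PNG fringe point: a common fix is to bleed the color of opaque pixels out into the fully transparent area before import, so filtering across the alpha edge never mixes in the exporter’s fill color. A rough numpy/Pillow sketch of that padding pass (file names are placeholders; np.roll wraps at the image border, which is usually acceptable for this purpose):

```python
import numpy as np
from PIL import Image

def bleed_edges(src_path, dst_path, passes=16):
    """Flood the RGB of fully transparent pixels with the color of nearby
    opaque pixels, so filtering across the alpha edge doesn't pick up the
    exporter's fill color (the fringe mentioned above)."""
    img = np.array(Image.open(src_path).convert("RGBA"))
    rgb = img[..., :3].astype(np.float32)
    filled = img[..., 3] > 0                      # opaque pixels start as "filled"
    for _ in range(passes):
        if filled.all():
            break
        acc = np.zeros_like(rgb)
        cnt = np.zeros(filled.shape, dtype=np.float32)
        # pull color from the four axis neighbours that are already filled
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            take = np.roll(filled, (dy, dx), axis=(0, 1)) & ~filled
            acc[take] += np.roll(rgb, (dy, dx), axis=(0, 1))[take]
            cnt[take] += 1
        grow = cnt > 0
        rgb[grow] = acc[grow] / cnt[grow][:, None]
        filled |= grow
    img[..., :3] = rgb.astype(np.uint8)
    Image.fromarray(img, "RGBA").save(dst_path)
```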
For most models, auto-generated collision is fine. Things that the player will frequently bump up against may need custom collision.
If you are baking normal maps, you want to triangulate your meshes first. If you are using tiling normal maps, it doesn’t matter.
I use the same packing as Substance: an RGB texture for the Base_Color, and a packed RGB with R = Ambient Occlusion, G = Roughness, B = Metallic. If you don’t need metallic for a particular model, you can use that channel for anything else (generally masks).
Texture (channel)                          | Import setting | Material input
------------------------------------------ | -------------- | -----------------
Base Color                                 | sRGB           | Base Color
OcclusionRoughnessMetallic (Red Channel)   | Linear         | Ambient Occlusion
OcclusionRoughnessMetallic (Green Channel) | Linear         | Roughness
OcclusionRoughnessMetallic (Blue Channel)  | Linear         | Metallic
Normal                                     | Normal         | Normal
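As a quick illustration of the packing in the table, here’s a small Pillow sketch that merges separate greyscale maps into one OcclusionRoughnessMetallic texture (file names are placeholders). In the material you’d then feed the R, G, and B outputs of that single texture sample into the AO, Roughness, and Metallic inputs.

```python
from PIL import Image

# Pack AO, roughness and metallic greyscale maps into one RGB texture,
# following the layout in the table (R = AO, G = Roughness, B = Metallic).
# File names are placeholders.
ao        = Image.open("T_Crate_AO.png").convert("L")
roughness = Image.open("T_Crate_Roughness.png").convert("L")
metallic  = Image.open("T_Crate_Metallic.png").convert("L")

orm = Image.merge("RGB", (ao, roughness, metallic))
orm.save("T_Crate_ORM.tga")   # import as Linear (sRGB off)
```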
Making smart use of tiling textures can save memory and increase the texture resolution the player sees. Generally you’ll use both unique and tiling, sometimes both on one model.
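As a back-of-the-envelope sketch of the memory side, assuming BC1-style block compression at roughly half a byte per texel plus about a third extra for mips (numbers are illustrative only):

```python
# Rough texture memory comparison: one unique 4K map vs. a tiling 1K map.
# Assumes BC1/DXT1 compression (~0.5 bytes per texel) and ~33% extra for mips.
def texture_mb(size, bytes_per_texel=0.5, mips=True):
    total = size * size * bytes_per_texel
    if mips:
        total *= 4 / 3
    return total / (1024 * 1024)

print(f"Unique 4096x4096: {texture_mb(4096):.1f} MB")   # ~10.7 MB
print(f"Tiling 1024x1024: {texture_mb(1024):.1f} MB")   # ~0.7 MB
```

The tiling map also repeats across the surface, so the texel density the player actually sees can end up higher than with the single unique map.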
Thank you very much DarthViper107 and ZacD. I have a few follow-up questions if I may.
1- You made reference to “Combine objects in the same area”. Do you mean combine in 3DS Max, or combine into one blueprint?
2- As I understand it, the Ambient Occlusion can be baked into the Diffuse RGB by multiplying them together. Is there a reason I would choose to keep it separate and plug it into the Ambient Occlusion slot in the Material? My guess is that the Material does the same thing internally (multiplies the AO by the Diffuse).
3- Do Textures take up a Draw Call, or is it just the Material? Do slightly complex Material networks affect FPS?
4- Say I have 1 Material that’s being used by 4 models, each sourcing a different part of the texture. Does this get loaded once into memory, or multiple times depending on how many models use it?
1-Usually you’d combine them in 3ds Max. It depends on what the objects are: for example, you’d want all of the walls of a room to be one object, but many of the props would stay individual objects.
2-I don’t use the AO pass; I’d just ignore it or bake it into the diffuse map
3-The material on each object is a draw call; the textures just use up memory. A more complex shader does have a bigger hit on the GPU; you can check the Shader Complexity view mode in the viewport
4-Each object would have a draw call for the material, but the textures would be loaded once into memory.
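A toy sketch of the bookkeeping in answers 1, 3, and 4: draw calls scale with objects times material slots, while a texture shared by several meshes only counts once toward memory (names and numbers are made up):

```python
# Toy model only: one draw call per material slot per object,
# texture memory counted once per unique texture.
scene = [
    # (object, material slots, textures used by those materials)
    ("Wall_Merged", 1, {"T_Brick_D", "T_Brick_N"}),
    ("Crate_01",    1, {"T_Crate_D", "T_Crate_N", "T_Crate_ORM"}),
    ("Crate_02",    1, {"T_Crate_D", "T_Crate_N", "T_Crate_ORM"}),
    ("Crate_03",    1, {"T_Crate_D", "T_Crate_N", "T_Crate_ORM"}),
    ("Crate_04",    1, {"T_Crate_D", "T_Crate_N", "T_Crate_ORM"}),
]

draw_calls = sum(slots for _, slots, _ in scene)
unique_textures = set().union(*(textures for _, _, textures in scene))

print(f"Draw calls: {draw_calls}")                    # 5: one per object per material
print(f"Textures in memory: {len(unique_textures)}")  # 5, not 14: shared maps load once
```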
No. Instructions are the number of operations that have to be done, whether it’s per pixel or per vertex. Per-pixel instructions are usually the bottleneck, since most of the shader work is done per pixel. You should be cautious with those, as every pixel of the object using that material has to run through that number of instructions to be rendered, even more so if you have a lot of sub-pixel geometry. Vertex instructions are typically much lower than pixel instructions, but unlike the per-pixel work, they are computed for the entire mesh regardless of how much of it is on screen.
For clarification though, there really isn’t a “bad” number of instructions. Based on the above, you can figure that a high per-pixel instruction count might not be so bad on something far away (it’d be wasteful, sure), but it all comes down to the type of game you’re making and the platform you’re supporting. Profiling helps a lot.
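As a rough mental model of that trade-off (illustrative numbers only, not a profiler):

```python
# Rough mental model: pixel work scales with screen coverage, vertex work
# scales with the whole mesh once it's drawn, however little is visible.
def shader_cost(covered_pixels, pixel_instructions, vertex_count, vertex_instructions):
    return covered_pixels * pixel_instructions + vertex_count * vertex_instructions

# The same 20k-vertex prop with a 150-instruction pixel shader,
# filling a quarter of a 1080p frame vs. covering ~2,000 pixels far away.
near = shader_cost(1920 * 1080 // 4, 150, 20_000, 40)
far = shader_cost(2_000, 150, 20_000, 40)
print(f"near: {near:,}   far: {far:,}")   # per-pixel work dominates up close
```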