UV Unwrapping and Baking into UE4 Steps?

Hello UE4 clansmen!

I was a little confused about the steps for getting a sculpted model from ZBrush into UE4…

Isn't the purpose of the UV unwrap step to generate the normal maps?
When I look at normal maps, they're basically the same shapes as the UV unwrap,
so I thought the UV unwrap served as the "base shapes" for the normal maps.

So why does this tutorial say to get the normal map one has to…

1)“Have a High Poly original and a Low Poly copy. Now you must UV Unwrap the Low Poly Model, before you can bake to it. Once the Low Poly model is UV Unwrapped…”
2)“You have to uv unwrap then bake high poly model to low poly model to generate your normals and your AO maps etc.”

Then what was the purpose of uv unwrapping the lower poly model?

My understanding so far of the steps for modeling a character and bringing it into UE4 has been broken…

Usually in ZBrush you bake your normal maps to the lowest geometry level; I don't think you have the option to bake to a mesh that isn't your lowest geometry level. If you want to use a different program to generate your normal maps, you would export your high poly mesh and your low poly mesh to that program. The low poly mesh must be UV mapped since that is the mesh that will have the texture. The high poly mesh is just there to provide the modeled detail; it doesn't need to be UV mapped or textured.

So the reason why the tutorial said to make a low poly model is because modeling programs like Zbrush only bake to the lowest geo level?


I'm still not quite sure why the high poly model is still relevant? I thought the purpose of baking was so that the renderer doesn't have to recompute all the detail in a high poly model with many layers over it?

So in essence what it does is "merge" layers (in Photoshop terms) so that we have a smaller model with the "faked" look of a higher poly model, right?

If this is the case, why would we bring the high poly model into the fray? Wouldn't that defeat the purpose, since we'd need to render the larger model all over again–exactly what we were trying to solve with baking?

Still don't get why both high and low poly are needed together…


2) And by…

UV mapped meaning it has to go through the process of UV unwrapping?

Texture meaning the realistic skin pores, blood veins, etc.?

And modeled detail meaning the forms of the nose and lips, etc., but without texture such as skin pores, blood veins, etc.?

So–here’s what this is all for, to give a more general explanation

First thing to learn, what is a Normal Map?
Normals are the direction a surface is facing in a 3D program; a Normal Map is an image that uses colors to define the direction a surface is facing. It literally corresponds to the XYZ axes–for example, Red=X-axis, Green=Y-axis, Blue=Z-axis.
Some programs interpret them differently so you might have to invert the Green channel if it looks like it might be facing the wrong direction.
Normal Maps are used as an alternative to bump maps. Bump maps are pretty easy to understand–a black and white image that makes a surface look like it has depth. Normal maps don't require a depth calculation, which means they are faster to render, which is why games use them.
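To make the color-to-direction idea concrete, here's a tiny Python sketch. The remap from a unit vector in [-1, 1] to an 8-bit color in [0, 255] is the standard convention, but the function names here are made up for illustration:

```python
def normal_to_rgb(n):
    """Encode a unit normal vector (x, y, z) as an 8-bit RGB color."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in n)

def rgb_to_normal(rgb):
    """Decode an 8-bit RGB color back into a normal vector."""
    return tuple(c / 255 * 2.0 - 1.0 for c in rgb)

def flip_green(rgb):
    """Invert the green channel, for programs that read Y the other way."""
    r, g, b = rgb
    return (r, 255 - g, b)

# A surface facing straight "out" (0, 0, 1) maps to the familiar
# light blue/purple color that dominates most normal maps:
print(normal_to_rgb((0.0, 0.0, 1.0)))  # (128, 128, 255)
```

This also shows why inverting the green channel fixes maps that look "inside out": it just mirrors the Y direction.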

So here’s the issue—your high poly sculpted model has too many polygons for a game to use. So you would need to have low-poly model is what you would use in your game, maybe it’s 50,000 polygons. However, the issue is you still want to have the details like the wrinkles/skin pores so that’s what Normal Maps are for.
The low-poly mesh will display the Normal Map, which is why it needs to be UV unwrapped before you bake the Normal Map. Once you have your low-poly mesh set up, the software will put the low-poly mesh and the high-poly mesh in the same location, calculate the angle differences between the surfaces of the two meshes, and render the result to a normal map texture using the UVs of the low-poly mesh. Once it's done, you'll get a normal map that you put in the material you apply to the low-poly mesh.
This means that all the details that you have in the high poly mesh will be in the normal map and in the game the low-poly mesh will look very similar to the high-poly mesh, but since it has a much lower polygon count it makes the game run faster.
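The baking idea above can be sketched in miniature. This toy Python example bakes a 1D height profile (the "high poly" detail) against a flat low-poly surface–every name and number is invented for illustration, but the compare-surface-directions-then-encode-as-color step is the same idea a real baker uses:

```python
import math

def bake_1d(high_heights):
    """For each 'texel', compute the high-poly slope relative to the flat
    low-poly surface and encode the resulting normal as a color pair."""
    texels = []
    for i in range(len(high_heights) - 1):
        slope = high_heights[i + 1] - high_heights[i]  # detail's dz/dx
        # Normal of a surface with that slope (perpendicular, normalized)
        length = math.hypot(1.0, slope)
        nx, nz = -slope / length, 1.0 / length
        # Remap [-1, 1] -> [0, 255], just like a real normal map
        texels.append((round((nx * 0.5 + 0.5) * 255),
                       round((nz * 0.5 + 0.5) * 255)))
    return texels

# A perfectly flat "high poly" bakes to the neutral color everywhere:
print(bake_1d([0.0, 0.0, 0.0]))  # [(128, 255), (128, 255)]
```

Wherever the high poly matches the low poly, the baked map is neutral; only the differences between the two meshes end up in the texture.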

What I was talking about with ZBrush–if you want to bake a normal map in ZBrush, your low-poly mesh has to be a subdivision level lower than your high poly mesh; it can't be a separate mesh that you import. To give an example–in 3ds Max I can select separate meshes to use, while in ZBrush the low poly mesh has to be a lower subdivision level of the high-poly mesh.

I see,

So my concept of baking wasn't entirely incorrect, it was just that the angle I was approaching the subject from was off?

So the ideology would be…

When we UV Unwrap the lower poly mesh we are getting the “shapes” for the Normal maps.

Then we use the high poly mesh, which has all the sculpted details on it, and (here's where I was off before)
bake its detail not onto the low poly mesh itself but onto the UV-unwrapped "shapes" made from the low poly mesh, to create the normal maps.
(After this point the high poly model is no longer necessary)

Finally, after you've finished working all the texturing into the normal maps, you can bake/merge
all the normal maps onto the low poly model so that the renderer can just render one thing without all the unnecessary calculations.

Am I getting it…now?

Also, in high-budget productions such as The Hobbit with WETA, do they need to use normal maps?
Since they have so much high-quality technology, would they just use the high poly models straight out of ZBrush, with all the detailed pores on the model itself
and even the skin painted on the model itself, because there isn't a need for maps or to be conservative with polygon count?

Since ZBrush seems a bit complicated and
since I am going to be working on texturing in MARI and Substance

Is it easier to UV unwrap and bake normals in MARI and Substance vs. ZBrush and Autodesk?


Lastly, I’m still not quite sure I understand this…

Do you mean that you can’t import another mesh because ZBrush only works with subtools?
And if you try to open another mesh it will replace the current one that you are working with?

In selecting separate meshes…ZBrush can do that?
I usually just select/click on the other subtool?

That's pretty much correct–UV mapping is a second set of coordinates for a 3D model; it defines how a 2D texture image is mapped to a 3D object. UVW is equivalent to XYZ–they call it that just to differentiate it from the XYZ coordinates. So the UVs are just the flattened-out coordinates of a 3D model so that you can put a flat image onto it. It has to be done so that the program can create the normal map. If you search "normal map" on Google Images it'll come up with lots of colorful blue-looking images–that's the normal map. The angles that change the direction of the surface are represented by color values.
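As a rough illustration of what a UV lookup does, here's a minimal Python sketch of nearest-neighbour texture sampling. It assumes (0,0) is the top-left of the texture–real engines and file formats differ on this convention:

```python
def sample_texture(texture, u, v):
    """Map UV coordinates in [0, 1] to a texel in a 2D list (row-major,
    (0,0) assumed top-left). Nearest-neighbour, no filtering."""
    h = len(texture)
    w = len(texture[0])
    x = min(int(u * w), w - 1)  # clamp so u=1.0 stays in range
    y = min(int(v * h), h - 1)
    return texture[y][x]

tex = [["A", "B"],
       ["C", "D"]]
print(sample_texture(tex, 0.1, 0.9))  # "C" (left column, bottom row)
```

The point is simply that each vertex's (u, v) pair picks a spot on the flat image–that's all the "second set of coordinates" does.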

For visual effects in movies they sometimes use normal maps, but mostly they use Displacement maps–black and white images which control depth, kind of like bump maps, except that displacement maps actually push the vertices, so the effect is not an illusion. When they render, the software will actually subdivide the mesh like it would in ZBrush and then use the displacement map to push the vertices so that it has all of the high-quality detail. They still want to use low-poly models while they're animating since it will be faster, and there are situations where you'd want low-poly meshes to speed things up, like cloth simulation.
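The "push the vertices" step of displacement can be sketched in a few lines of Python–a toy version for illustration, not any renderer's actual implementation:

```python
def displace(vertices, normals, disp_values, scale=1.0):
    """Push each vertex along its normal by its sampled displacement value.
    Unlike a bump/normal map, this changes the actual geometry."""
    out = []
    for (vx, vy, vz), (nx, ny, nz), d in zip(vertices, normals, disp_values):
        out.append((vx + nx * d * scale,
                    vy + ny * d * scale,
                    vz + nz * d * scale))
    return out

# One vertex at the origin, normal pointing up +Z, displacement 0.5:
print(displace([(0, 0, 0)], [(0, 0, 1)], [0.5]))  # [(0.0, 0.0, 0.5)]
```

This is why displacement needs the mesh subdivided first: you can only push vertices that exist, so more subdivision means finer recoverable detail.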

ZBrush is the program to learn if you want to become a character artist. It's not easy to learn because its UI and workflow are very different from other 3D programs, but it has the best tools once you learn it. Mudbox is a much, much easier program to use if you just want a simple sculpting program, and it also has better texture painting tools than ZBrush.
Mari is the standard these days for painting textures, but it’s also got a learning curve as well. I would focus on learning Substance/Substance Painter. Especially if you want to get into the video game industry it will be of great benefit to know Substance.

As far as creating your UVs, the tools in Maya/3ds Max/Blender will be a bit better for doing that. Substance is only for making textures/materials, so you can't unwrap your UVs there. ZBrush has some tools, but they're not as good as what you get in a 3D program like Maya/3ds Max/Blender.

Anyway, the workflow generally goes like this:

Create your low-poly model in your 3D program (3ds Max/Maya/Blender)
Unwrap the UV’s/create your UV’s in the same program (3ds Max/Maya/Blender)
Export to Zbrush for sculpting
In Zbrush, subdivide the mesh and sculpt the fine details
In Zbrush, bake the normal map, or:
After sculpting, export the high poly mesh and low poly mesh back to your 3D program (3ds Max/Maya/Blender)
and bake the normal map using whatever baking tool your 3D program has

Export the low poly mesh to Substance Painter and paint your textures (you already have your normal map at this point)

1) As of now, I’ve been mostly sculpting purely in ZBrush and Autodesk’s UI is actually more foreign to me.

I usually sculpt the high poly model first with the help of Dynamesh, so if that's the case would a workflow be…

-Save high poly project in ZBrush
-Retopologize the high poly in ZBrush to a low poly

-import low poly into Autodesk to UV unwrap (should be easiest in Autodesk?)
-import high poly into Autodesk as well to bake the normal map onto the UV unwrap of the low poly model (3ds or Maya, I have both)

-then go to MARI and PS for skin texturing on normal and displacement maps (and Substance + PS for everything else besides skin)
-then when done, finally merge/combine all the maps onto the low poly model by baking in Autodesk

-Then move onto rigging
-Finally import to UE4
-Shoot the cinematic

Is this an efficient workflow?

BTW) Where does shading come into this?

2) If WETA mostly uses displacement maps then how do they get their color and texture other than bumps and whatnot from displacement maps?

3) Are displacement maps also useful in next gen game engines such as FF 15 and U4?

4A) Whats the difference between normal maps and displacement maps?

Because in “Painting a Realistic Skin Texture Using Mari By Henrique Campanha”
he utilizes displacement maps.(Lesson 13-14)

4B) Is what he outlines in his workflow also good for cinematics in UE4? Like it wouldn't run slow or anything?

4C) Besides the displacement map in L.13-14,
are the rest of his maps (Deep SSS, Glossy, Specular, Diffuse, Bump, etc.)
all under the category of "normal maps"?

Lesson 02: Texture Cleaning and Preparation – How to prepare textures to be used inside Mari. Cleaning highlights and producing an albedo-like texture.

Lesson 03: Bit Depths – Technical background behind the concept of bit depths and their importance.

Lesson 04: Diffuse Map – How to create a diffuse color channel from a set of photo references.

Lesson 05: Shallow SSS Map – How to create one of the essential SSS components.

Lesson 06: Mid SSS Map – The second element needed to build a good SSS skin shader.

Lesson 07: Deep SSS Map – The third SSS component and its colors.

Lesson 08: Deep Mask – Controlling deep map influence.

Lesson 09: Primary Specular Map – The main specular map, regions and intensities.

Lesson 10: Secondary Specular Map – Enhancing facial features and reflections.

Lesson 11: Glossy Map – Tight vs. broad reflections.

Lesson 12: Bump Map – Augmenting high-frequency details.

Lesson 13: Displacement Textures – Preparing Surface Mimic's textures to be used inside Mari.

Lesson 14: Displacement Map – Creating a full displacement map based on textures from the previous lesson.

Lesson 15: Eyes Diffuse Map – Iris and sclera color map.

Lesson 16: Eyes Specular Map – Giving life to the eyes.

Lesson 17: Eyes Bump Map – Adding volume to veins and iris.

I think the main thing that's confusing you is that you're getting two different workflows mixed together: realtime (UE4) and VFX/CGI (films). You appear to have experience in the latter–you wouldn't be using ZBrush and asking these questions otherwise. The real hurdle you face is that quite a few features won't work from, say, Mari–the tutorial you are using isn't aimed at game engines, which have some fancy tricks to make stuff work in realtime and have limitations, unlike people with render farms.

This could help you or ruin your day. Creating Human Skin | Unreal Engine Documentation

Yes, if you wanted to you could sculpt from scratch starting in ZBrush. You could even do your UVs there too, though like I said the tools in a 3D program like 3ds Max/Maya/Blender are a bit better–I wouldn't get one of those programs just for doing UVs, but you really should know one of those types of programs besides ZBrush. The best use of ZBrush is sculpting–but if you really want to, it can do UVs and even texture painting, though like I said it's not that great in those areas.
For rigging/animation you would absolutely have to export to a program like 3ds Max/Maya/Blender.
Displacement maps can be baked in the same workflow that normal maps are baked in; it will be one of the options when you do your baking.

Just a note–displacement maps/normal maps are not part of texturing, texturing would be painting things like color/metallic/roughness maps. In Substance Painter, it has some features that can help with texturing by looking at the normal map, like if you want worn edges on something that’s metal it can find the edges in the normal map and use that to figure out where it needs to apply the weathering.

When you’re done texturing, you’ll have several image maps that you would use for UE4:
-Diffuse–this is just the color without shading/highlights/etc.
-Roughness–this controls how sharp reflections are
-Metallic–this defines which parts are metal
-Specular–most of the time you don’t need this and you would just leave it at the default value
-Emissive–controls what parts need to look like they are “glowing” like lights on armor, they don’t actually emit light, they just glow
-Normal Map–changes the direction that surfaces face so that it looks like you have small details like wrinkles and scratches without having to model them in and have a high polygon count

If you want to use a displacement map in a game, you can do that: 1.12 - Tessellation Multiplier | Unreal Engine Documentation
It's a feature of DirectX 11 where it can subdivide a mesh so that you can use a displacement map; it does this dynamically based on how close you are to the object.
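That distance-based behavior can be sketched like this in Python–the parameter names and falloff curve here are invented for illustration, not UE4's actual Tessellation Multiplier logic:

```python
def tessellation_multiplier(distance, near=100.0, far=2000.0, max_mult=8.0):
    """Toy distance-based tessellation factor: full subdivision up close,
    fading linearly to none far away. All parameter values are made up."""
    if distance <= near:
        return max_mult
    if distance >= far:
        return 1.0
    t = (distance - near) / (far - near)  # 0 at near .. 1 at far
    return max_mult + (1.0 - max_mult) * t

print(tessellation_multiplier(100.0))   # 8.0 (close: heavy subdivision)
print(tessellation_multiplier(2000.0))  # 1.0 (far: no extra subdivision)
```

The idea is that distant objects cover few pixels, so spending triangles on them is wasted; subdivision budget goes where the camera can see the displacement detail.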

Shading is controlled by the code in the engine that creates the material. In UE4 it is coded with a Physically Based Material: Physically Based Materials | Unreal Engine Documentation

I highly recommend as a beginner that you don’t try to learn Mari yet, like I said it’s not that easy to learn, you would be better off learning Substance or using Photoshop. The downside with Photoshop for texture painting is that you can’t paint directly to the mesh, you can only paint to the texture so it can be difficult to know where you’re painting. The workflow for Photoshop is that you would render a template of your UV’s in your 3D program and then open that image in Photoshop and paint on a layer on top of that.

Hey thanks for the input,

Yeah I can see what you mean.
Do you think you could point out specifically which parts (2-17, listed above)
of the "Painting a Realistic Skin Texture Using Mari" tutorial by Henrique Campanha
wouldn't work for creating a cinematic in UE4?

From there I'll look for ways to replace what can't be used.

The only thing I was worried about in his tutorial was the displacement map but since darthviper said that displacement maps can be used…

Then I'm not quite sure where else the problems are in using his workflow and its results in UE4.


Hey darth thanks for the consideration,

My Intro to Sculpting in ZBrush teacher told me on the first day of class that we would spend 2 weeks learning Mudbox so that we could ease our way into ZBrush, as it is too complex.
The only reason I joined his class and was going to pay thousands was for ZBrush.
The next day I left his class and started learning ZBrush on my own.
By the 2nd week I was proficient in Zbrush.

While learning ZBrush, I am also learning music programs such as Cubase and Kontakt, along with purchasing various VSTs such as Spitfire's Hans Zimmer Percussion (the one used in Pirates of the Caribbean), and studying music theory through practice on guitar and piano so that I can produce the music for the cinematic as well.

Following this I am also working on the concept art and story planning.
Storyboards will come after.

The only reason I can do all of this though, is because of all of you in these forums.
I'm not quite sure if you guys understand, but the main reason I can learn at the speed I do, without schools and teachers, is because of you guys.
You are my school and you are my teachers even if I don’t have the funds to continuously purchase tutorials.

I want to say that it’s going to be a tough journey, but I’ve already steeled my resolve and have a way to back myself financially, so I’m gonna get through to making this cinematic and fulfill my dreams.

I will learn both Mari and Substance as I am already proficient in PS.
I am also learning ZBrush, 3DS, Motionbuilder and Maya later on
along with Marmoset and UE4 afterwards.

As I move through the steps I will learn what I need.
I work well when I learn as I work on the project.
But I won't sacrifice quality, so if I am not getting the look I want I'll continue to look for it and learn as needed.

With that said, the current problem is most likely getting the look that FF 15 and Uncharted 4 have on their characters.

As @Themanwithideas said above,

the workflow in the MARI tutorial won’t work out in UE4,
so to solve this problem one step at a time,

What exactly won’t work and what other method can I try to make it work?
Any tutorials, resources, or programs needed to learn?

Or any new programs or workflow that you would recommend that can achieve the Uncharted 4 and FF 15 look?

Also just asking for your opinions on the matter…
Do you guys think displacement maps are used
in the next-gen game engine cinematics, like Uncharted 4 and FF 15?

If so, is it a good idea to use displacement maps in creating cinematics in UE4?
(What would be the pros and cons in a UE4 workflow?)

A tip on all this stuff–keep things simple. For me, early on I got distracted by all the different programs and plugins and stuff. I cut things out and these days I mostly use 3ds Max and Photoshop. I highly recommend keeping the number of tools you use just to the essentials. For example, if you learn 3ds Max then you don’t need to learn Maya, you don’t even really need Motionbuilder either. For games, it will be much more useful to learn Substance since it’s specifically designed for that. I would keep your tools down to something like 3ds Max, Zbrush, Substance, Photoshop and you could do as much as you need.

As far as other stuff goes–if you want to do these things well then you need to focus on one thing; you're not going to make high-quality music if you're also trying to do high-quality 3D art. For example, the people that made the characters for Uncharted are very skilled, and it took more than one person to make a character–they had the ZBrush artist but also a shader programmer and separate riggers and animators, plus programmers to add special features like complex physics. You might be able to do something of that quality someday, but not if you're trying to do absolutely everything.

Alright, I'll keep it simple with 3ds, ZBrush, Substance, PS.
What about MARI? Was there a reason you didn't include it?

Concerning the philosophy of doing things well by focusing…
That would stem from wanting to do "quality" work in a shorter span of time, correct?
(Since we are talking about high-quality music and models, and having many people to decrease workload and time.)

For me, this is a dream and is long term.

I enjoy the process not just the result of the music, story, and design.
If I ever came to a result on something like a test, then I would never be doing this.

The fact that I can always explore, always create something different and new, something enjoyable–that is what I am after.

What is considered high quality, if one is doing it for one's own enjoyment, will always be whatever one is satisfied with.

Someone can define WETA level film work as “high quality”
and for another they could say RWBY is “high quality” because it tells a beautiful ongoing story or they simply prefer the style more.

If I wanted efficiency then sure a pipeline where everyone is highly-specialized would be great!

I remember when I went to GDC and asked many professional artists what is it that they wanted to do, what their dreams were.

Many of them couldn’t really answer, they had forgotten along the way.

As they work on big titles and make amazing work, art becomes something more like an office job,
a specialized pipeline like a cog in a machine.
One of the professionals even told me it's a stable paycheck.

So if that is the case where is the enjoyment, the self creativity?

Many people outside of the art world see artists as very daring members of society.
We are already the black sheep that have strayed from the typical path of becoming a doctor, lawyer or businessman.

Yet here we are with this conundrum that the truth is we are still these businessmen and office workers, just in a different skin.

A question to one stuck in such a conundrum…

What happens if a team or individual has to rush due to deadlines and finish just to achieve the result?
Well, then we move on to the next one.

What happens if one is pushed by deadlines for monetary reasons?
You are forced to overwork, not enjoy the process, and ultimately produce lower-quality work,
doing harm to your body trying to do high-quality work, and in the end…
then what? We move on to the next project.

And if, in addition to that, our projects are great visual works but lack depth, then what is the point of working on them?
At some point it becomes more for monetary gain than for the art itself.

Iain McCaig referenced this conundrum with the example of
the majority of contemporary blockbusters lacking story.

The producers of the films themselves don't have any originality, and they demand that the writers work within a frame bound by what appeals to the audience commercially.

Everything looks as it should: a dragon looks like a dragon, and the dragon has amazing high-quality textures, models, SFX, etc.
However, there is no experimentation, nothing going out of the norm, no creativity, no story, and no continuation to anything.

A movie is now a rollercoaster of "feelings"–
you "feel" scared when that ghost jumps out at you at 2:02:00.
But that's all it is; you don't rewatch, because there is no story, just a mass of feelings that you already know are coming.

So yes, this philosophical train of thought has plagued me for the past couple of years.
Everything I am doing is my answer to it. Hopefully those reading can understand where I am coming from.

And yes, in the future when I release my work, if it makes a name for itself, then sure, if I can get others to help work on it I will do that.
After all, there is a quote that "Those who do not go after their dreams will simply be a part of those who are going after their dreams."

But I will never push for getting a result over enjoying the process of creation due to monetary boundaries.

Listen, the best thing you can do is forget I said anything, because it just confused you even more. Don't worry about it affecting you, because you probably won't have a problem. Reread all of darthviper's posts until you understand or until you have more questions. Anyway, sorry for being a pain. Also, use the programs you want, because in the end it's you who benefits.

Thanks for the input,

I don't think it's a worry about anything being a pain.
Being confused means I lack knowledge, and all that means is that I will bump into it again later on, so I might as well solve it now, while I know what the potential problem could be.

Also, which programs I use will just depend on what is needed; if I need to learn one I'll just learn it along the way, and if eventually I find I can do everything in 3 programs then I'll just do that.

So if it's not a pain for you, I would greatly appreciate it if you could decipher what you meant before, like what the problem is specifically that couldn't be brought into UE4.

In regard to darthviper's posts, I understand what he talks about.
He helped to explain it so concisely that I couldn’t not understand it.

In the end, we were just offering each other our outlooks on our philosophies in life.

Mari is more popular for visual effects. Substance is designed a lot to help with making game assets–that's one of the reasons it can use normal maps to help with stuff like weathering effects, since most game assets use normal maps and you don't really see that in visual effects. And like I was saying, Substance is getting used more in the industry, so having that skill will help you a lot more than Mari if you want to work in game development. Take a look at the Substance update videos–it has really cool stuff like particle painting. The advantage with Mari is that it can handle large assets and large textures better, but it doesn't have a lot of the cool tools that Substance has, plus it's much more expensive than Substance.

The reason I say to focus on one thing rather than many: I'm guessing you have an idea of what you want to create, and it's probably really cool and would be awesome if you can achieve it. But–just to give an example, let's say you only want a single character of the same quality as an Uncharted 4 character–I could see that taking 2 years to do if all you did is focus on that one thing, since you're starting from scratch, and that's if you work very hard in all of your free time. That's the issue: even if you have the drive to do stuff, it just takes a lot of time to develop.
I think the ideal with stuff like this is to be able to do the things you want to do, if you’re making your own stuff then you’re going to be happy doing it. I got into 3D because it’s the closest thing you can get to actually doing the things you imagine–I mean, I have stuff like spaceships and I can make them look real–and with the developments of technology, I can fly it in a video game or I can even print it to a physical object in a 3D printer.
So you should do what you want to be happy, but you also have to figure out what it takes to do what you want–if the thing you want to do is so complicated to do by yourself that you would be working on it for the rest of your life, you have to consider that. I think the best thing is to work on a project that you can reasonably finish in a couple of years. There are plenty of things that I can do within a reasonable amount of time that will still be a great achievement.