Character Development - Best Practices - Zbrush, 3DS Max, Substance, Unreal Engine

So this is a very preference-oriented question. I’d like to see how artists and programmers feel about the best process for bringing in a character.

A quick background - We have two artists fully trained in Zbrush, 3DS Max, and Unreal, but we have not done much combining all three programs together. We’ve done things here and there and feel like we have a grasp, but we just want some confirmation of our thinking. Typically we have used Unreal for arch viz / product visualization, or we just do still renders of characters/creatures. We know the animation tools in Max very well, so we are sticking with it for our process. The only programs we are new to are the Allegorithmic Substance tools.

So I have a couple of questions about the best workflow order that people have found. We have our first character from our Zbrush artist. She has made a highly detailed version of our character, and it is a very synthetic/robotic-looking character. We want to use Substance to really highlight that manufactured look. So the next steps are to take the high poly and low poly and texture, make maps, rig/skin, animate, and bring it into Unreal.

My first question is: what is the best order from having your artists model to getting into Unreal? Should we unwrap/texture and make maps before rigging, or should we rig first?

Second question: do people prefer texturing in Zbrush, or do you prefer texturing in Substance? Since we want a more mechanical/synthetic look, I think we would be better off using Substance.

Third question: do people prefer making their morph targets for facial animation in Zbrush or 3DS Max? (The face is very organic in movement.)

Final question. Is there any major thing we might be missing that you could point us to?

Thanks for all the help!

You need to make your low-poly model of the character and then unwrap the UVs; then you would bake your normal maps in Zbrush. You would then load your low-poly mesh and the normal maps into Substance Painter, where you would texture it. You can texture or rig the model in either order, as long as the UVs are unwrapped on the mesh before you rig it.
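
If you ever want to script the final import step, here is a minimal sketch using Unreal’s editor Python plugin (available in 4.19+); the file path, destination folder, and function name are placeholder assumptions, not anything from this thread:

```python
# Minimal sketch: automated skeletal mesh import into Unreal via the
# Python Editor Script Plugin. All paths/names below are placeholders.
import unreal

def import_character_fbx(fbx_path, destination="/Game/Characters/Robot"):
    options = unreal.FbxImportUI()
    options.import_mesh = True
    options.import_as_skeletal = True   # rigged/skinned character
    options.import_materials = False    # materials built from Substance maps
    options.import_textures = False
    options.import_animations = True
    options.mesh_type_to_import = unreal.FBXImportType.FBXIT_SKELETAL_MESH

    task = unreal.AssetImportTask()
    task.filename = fbx_path
    task.destination_path = destination
    task.automated = True               # suppress the import dialog
    task.save = True
    task.options = options

    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])

import_character_fbx("C:/exports/robot_character.fbx")
```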

  1. Our approach is to iterate: get whatever is in design into the engine as soon as possible, so the asset can progress toward fit and finish in parallel rather than as a sequence of procedures. There are a few reasons why this works well for us: 1) iteration creates the necessary pipeline with little thought needed to plan out the small details of a unique requirement, and 2) seeing the asset come alive in real time pumps up the crew (we are Internet-based), which lets others on the team with the relevant skills get involved.

I could go on, but the first rule is always to get stuff into the editing environment so you can create the necessary connections first, then worry about what has to be done second, as a team with working assets in hand.

  2. Well, the ideal of texturing in Unreal 4 is shifting toward the ideal of surfacing and shaders, with feature sets designed to make the decision process easier. Zbrush is very good at harvesting the required detail maps, like normal or even height, while I assume Substance Painter is better at adding procedural detailing that would otherwise be difficult to do by hand. Overall, nothing a tool has to offer should be ignored if its inclusion solves a single problem, without having to buy into or learn everything a given branded application can do.
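
To make the surfacing side concrete, here is a rough sketch of wiring Substance Painter’s baked maps into a fresh Unreal material with the editor’s Python API; the material name and texture asset paths are assumptions for illustration:

```python
# Hedged sketch: build a material and plug in Substance-baked textures.
# Asset names/paths are illustrative assumptions.
import unreal

lib = unreal.MaterialEditingLibrary
tools = unreal.AssetToolsHelpers.get_asset_tools()

material = tools.create_asset("M_Robot", "/Game/Characters/Robot",
                              unreal.Material, unreal.MaterialFactoryNew())

def add_texture_node(texture_path, pos_y):
    # Drop a TextureSample node into the material graph
    node = lib.create_material_expression(
        material, unreal.MaterialExpressionTextureSample, -384, pos_y)
    node.texture = unreal.EditorAssetLibrary.load_asset(texture_path)
    return node

base = add_texture_node("/Game/Characters/Robot/T_Robot_BaseColor", 0)
lib.connect_material_property(base, "RGB", unreal.MaterialProperty.MP_BASE_COLOR)

norm = add_texture_node("/Game/Characters/Robot/T_Robot_Normal", 300)
norm.sampler_type = unreal.MaterialSamplerType.SAMPLERTYPE_NORMAL
lib.connect_material_property(norm, "RGB", unreal.MaterialProperty.MP_NORMAL)

lib.recompile_material(material)
```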

  3. We looked at morphs, and at present they have the same problems as older engine designs: a morph set is no more than a fancy point-cache data set, and we wouldn’t know the extent of the performance loss until we had a fair number of different sets, at which point a simple idea usually collapses on itself just from the sheer loading (i.e., the memory footprint). Usable or not, raw morphing data has historically been extremely difficult to maintain when real-time behavior is required.

Through testing we have found that marker-based bone clusters offer a much smaller footprint while producing the same results: dialogue can be created using the same workflows used for animation in general, and it can be retargeted to any character that uses the same parent/child hierarchy.
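
For a feel of the footprint argument, here is a back-of-envelope sketch; every number in it is an assumed value for illustration, not a measurement from our tests:

```python
# Rough, assumed numbers comparing raw morph-target deltas to a small
# set of animated facial bones/clusters. Real engines compress morphs
# and store deltas only for affected vertices, so treat this as an
# order-of-magnitude illustration, not truth.
VERTS = 15_000          # face-region vertex count (assumed)
BYTES_PER_DELTA = 12    # xyz float32 offset per vertex
MORPH_TARGETS = 50      # visemes + expressions (assumed)

morph_bytes = VERTS * BYTES_PER_DELTA * MORPH_TARGETS
print(f"morph targets: ~{morph_bytes / 1e6:.1f} MB of per-vertex deltas")

FACIAL_BONES = 60       # marker-style clusters (assumed)
FLOATS_PER_BONE = 10    # position + rotation + scale channels per key
KEYS = 30 * 10          # 10 seconds of dialogue at 30 fps

bone_bytes = FACIAL_BONES * FLOATS_PER_BONE * 4 * KEYS  # 4 bytes/float
print(f"bone clusters: ~{bone_bytes / 1e6:.2f} MB for the same clip")
```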

And finally.

Well, not so much missing as overlooked is the human tendency to go with what you know rather than look at how yet another 3D application can be used for what would be considered “next-gen” technology. The real obstacle is that Unreal 4 does offer a lot of what I would call old-school features, but at the same time Epic continues to add features, often on request; each is another nut and bolt that, once added to the machine, offers possibilities that were once only available to AAA studios.

Just some thoughts.

Thanks for the info; that really helps us see what our next step is. So unwrapping will be our next big step.

Thinking about doing this in iterations could be good for us too. Also, thanks for the insight on morphs; we haven’t done anything with them yet, so it’s good to get a read on that.

Thanks!