Mapping Method Advice Needed

Hello, I’ve decided to recreate a map from an old game: the Facility from GoldenEye.
However, since the game is pretty old, I decided to give it a fresher look, a little sci-fi touch.
I then made some static meshes in Blender, such as wall sections. I made sections of 1200x300 Unreal Units, plus 800, 400, 200, 100, 50 and 25, to be able to adapt the wall size to the original map… but after a few hours of placing these walls here and there, and running into issues like being 10 units or less short, I’m finally wondering whether I shouldn’t build the map sections directly in Blender, export them to UE, and then place more decorative SMs here and there.

I’m an old mapper; I worked a lot with Hammer (Worldcraft) for HL1, creating maps for Day of Defeat, but that was mainly BSP mapping, and times have changed! So I have to learn all over again how to map efficiently.
So, should I build the “foundations” in Blender?

Thanks

The most common technique for architectural design is “edit in place,” where things are built as they are needed, and since the building tools in UE4 are rather limited, building everything in Blender is a very good option. FBX support could be better (it is being worked on), but anything with FBX support can be used and channeled into UE4.

Well, there is something worrying me. If I model the whole “architecture” in Blender, I will have issues with UV maps: I won’t be able to make one UV layout that covers everything. Say I want to apply a material with a pattern like tiles or paneling; it would be applied all over the map without any consistency or order, if you see what I mean. If you have any links on how AAA game devs make their maps…

Well, you would not make it all in Blender and export it as one big FBX file; rather, you would use Blender to harvest working assets, exporting by selection to create a source chain between whatever host application you use and UE4.

If anything, the problems you think you might have are solved by pulling one tree out of the forest rather than thinking you have to deal with the entire forest in one big chunk.

Example.
https://youtube.com/watch?v=V1SsETZrSLA
Still a work in progress, but 100% staged in 3ds Max using edit in place, with each building exported by selection, using the object’s name for identification, to create a source chain between 3ds Max and UE4.
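Since the original poster is working in Blender rather than 3ds Max, here is a minimal sketch of what “export by selection” could look like as a Blender script. It assumes the Blender 2.8+ Python API and a hypothetical output folder; the FBX import settings on the UE4 side are left at their defaults.

```python
# Minimal "export by selection" sketch for Blender (2.8+ Python API assumed):
# writes each selected object to its own FBX named after the object,
# so every piece can be reimported into UE4 independently.
import os
import bpy

EXPORT_DIR = r"C:/exports/facility"  # assumed output folder, adjust as needed

os.makedirs(EXPORT_DIR, exist_ok=True)

for obj in list(bpy.context.selected_objects):
    # isolate the current object so only it ends up in the FBX
    bpy.ops.object.select_all(action='DESELECT')
    obj.select_set(True)
    bpy.context.view_layer.objects.active = obj

    path = os.path.join(EXPORT_DIR, obj.name + ".fbx")
    bpy.ops.export_scene.fbx(filepath=path, use_selection=True)
```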

There is nothing new about this kind of workflow; it has been around for years. The difference now is that the Unreal Engine is a lot more compatible with already-available pipeline features, which makes it easier for a lot more people to work on a single project without everyone tripping over each other.

For example.

http://3dmax-tutorials.com/XRef_Scenes.html

If you understand the logic, it’s the same thing if you substitute UE4 for 3ds Max as the target, using 3ds Max as the host.

As to how AAA developers make their environments, good luck with that, NDAs and all that stuff. But when it comes to making environments in UE4, individuals with more general 3D skills have the advantage, as the trick here is not really doing anything new but rather conforming to the logic of cloud-based computing.

If anything, UE4 requires a rethink of how things “could” be done, as in most cases you can defer what you think is going to be a problem until it actually becomes a problem.

To get seamless textures, use world-coordinate mapping, but this only works well for ceilings and floors.
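For clarity, “world-coordinate mapping” here means deriving UVs from world position rather than from the mesh’s own UVs (UE4’s WorldAlignedTexture material function is built on this idea). The sketch below shows only the underlying math, with an assumed tile size of 200 UU; it also illustrates why the simple version suits floors and ceilings, since vertical walls need a different pair of axes.

```python
# Sketch of world-aligned (planar) UVs: project the world position onto a plane
# and divide by the tile size so the texture repeats every `tile_size` units.
# The 200 UU tile size is an assumption for the example.

def world_aligned_uv(world_pos, tile_size=200.0, axis='Z'):
    x, y, z = world_pos
    if axis == 'Z':          # floors/ceilings: project along Z, use XY
        u, v = x, y
    elif axis == 'X':        # walls facing +/-X: use YZ
        u, v = y, z
    else:                    # walls facing +/-Y: use XZ
        u, v = x, z
    return (u / tile_size, v / tile_size)

# Two floor points 200 UU apart land exactly one UV repeat apart,
# so the pattern lines up across separate meshes:
print(world_aligned_uv((400.0, 0.0, 0.0)))   # (2.0, 0.0)
print(world_aligned_uv((600.0, 0.0, 0.0)))   # (3.0, 0.0)
```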

Some pointers on common troubles with custom static meshes:

Making a mesh set with seamless textures has one major problem: if you mirror a mesh (give it -1 on some scale axis), its UVs will be mirrored, resulting in a mirrored texture. That also means the normal map will be mirrored, giving a nasty seam. So either make textures that have some kind of tile or seam line along the edges, or avoid rotating and mirroring your meshes.

If you leave too thin a gap between UV islands in the lightmap UVs, you will get the infamous light bleeding on edges. So if you plan to use 32-pixel lightmaps, leave at least 1/32 of the UV space between islands.
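To put numbers on that rule of thumb: the minimum gap is roughly one lightmap texel expressed in UV space, so it scales with the lightmap resolution. The helper below is just that arithmetic; the two-texel line is a safer margin added for illustration, not something stated in the post above.

```python
# Quick padding check for lightmap UVs: light bleeds when islands sit closer
# than about one lightmap texel, so the minimum UV gap scales with resolution.

def min_island_gap(lightmap_resolution, padding_texels=1):
    # gap expressed in UV space (0..1)
    return padding_texels / lightmap_resolution

print(min_island_gap(32))      # 0.03125  -> the "at least 1/32" rule above
print(min_island_gap(64))      # 0.015625 -> higher resolutions tolerate tighter packing
print(min_island_gap(32, 2))   # 0.0625   -> a safer two-texel margin
```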

A mesh without collision cannot be used with physics; it simply does not work.

You can make a nice set of meshes for walls, trims and floors; it is quite easy. Just keep them on the grid in Blender and watch their pivot placement. I’m not sure how it behaves in Blender, but 3ds Max loves to slightly shift the pivot, and that results in your mesh no longer staying on the grid.
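If pivot or placement drift does show up in Blender, a small script can snap selected objects back onto the grid before export. This is only a sketch, assuming the Blender 2.8+ API; the 25-unit grid step matches the smallest wall section mentioned earlier and is otherwise an arbitrary choice.

```python
# Snap the locations of all selected objects to a chosen grid step so
# modular pieces keep lining up once they are placed in UE4.
import bpy

GRID = 25.0  # grid step in scene units (assumed to match the smallest wall piece)

def snap(value, step=GRID):
    return round(value / step) * step

for obj in bpy.context.selected_objects:
    obj.location.x = snap(obj.location.x)
    obj.location.y = snap(obj.location.y)
    obj.location.z = snap(obj.location.z)
```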

Thanks for the answers! I was also wondering about static mesh usage and how it impacts performance. Is it only related to polycount?
I mean, say I build a corridor two different ways: in the first, the corridor has the left and right walls in the same SM; in the second, the SM is only one side, which is then duplicated within the engine to make the other side. Same question about length: is duplicating the corridor along its length different from having only one long mesh?

Yes, these are different. You now have two instances of this static mesh in your scene, and each can have its own properties set for LODs, culling distance, materials, etc. If you have one single mesh, you lose the ability to treat the other end in any particular way. You also now have a larger mesh that cannot have parts of it culled based on distance.

For the most flexibility, the duplicate method could work better; you will also get better use of your texture and lightmap UV space for resolution, since only part of the geometry is taking up that space. These are the essentials behind modular design (a quick calculation below puts rough numbers on this). :slight_smile:

Tim
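As a back-of-the-envelope illustration of the UV-space point above, assuming made-up dimensions: a 40 m corridor built as one mesh versus a 10 m module duplicated four times, both given a 64 px lightmap.

```python
# The same lightmap resolution spread over a smaller modular piece gives a
# much higher texel density than one big mesh. Numbers are illustrative only.

def texels_per_meter(lightmap_res, covered_length_m):
    return lightmap_res / covered_length_m

print(texels_per_meter(64, 40))  # 1.6 texels/m for the single 40 m corridor mesh
print(texels_per_meter(64, 10))  # 6.4 texels/m for the duplicated 10 m module
```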

That’s exactly what I wanted to hear :smiley: Thanks Tim! :slight_smile: By the way, is there any plan to integrate an Array Modifier directly into the engine? I saw the ones from the Content Examples, but I think it is something that should be part of the editor. The Blender array modifier is extremely powerful, so I’m trying to do something similar with Blueprints, but I’m not a coder by nature :stuck_out_tongue:
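For what it’s worth, the core of a Blueprint “array modifier” is just a loop in the Construction Script that places each copy at origin + index × offset, typically by calling Add Instance on an Instanced Static Mesh Component. The sketch below only shows that offset math in Python; the function name and the numbers are illustrative, not an existing API.

```python
# Hedged sketch of the math behind an array modifier: N copies spaced by a
# fixed offset. In a Blueprint, each returned offset would feed an
# Add Instance node on an Instanced Static Mesh Component.

def array_offsets(count, offset):
    """Return local offsets for `count` copies spaced by `offset` (X, Y, Z)."""
    ox, oy, oz = offset
    return [(i * ox, i * oy, i * oz) for i in range(count)]

# Ten corridor sections, each 400 UU long, laid out along X:
for pos in array_offsets(10, (400.0, 0.0, 0.0)):
    print(pos)  # where each instance would be placed relative to the actor
```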

Well, as far as performance goes, the polycount issue has largely been solved by the hardware rendering improvements available on the GPU.

Proof of concept.

This is an answer to the question of poly performance: when we began rebuilding our player models we had concerns, but it’s clear to us, based on our game design, that the performance requirements have shifted to draw calls and fill rates, and even more so if you are aiming for low-spec hardware.

The frame of mind is that polygons are like yogurt, low in fat; it’s what you add on top, like physics, where the real performance requirements and decisions need to be made.

Using the Mykonos WIP as an example, the total polycount for the facades is 47k, so even if I had done the layout as a single mesh, UE4 would not even burp; the only reason I broke things down is that it made scene management much easier.

As an opinion on what you are suggesting: conforming to best practices during the layout phase is not itself best practice, as it distracts from the need to create connectivity between the host application and UE4 that answers these questions through iteration.

Using edit in place, though, I would be inclined to make the entire environment unique, fit to finish, using editing techniques and scene management tools better suited to architectural design, and worry about instancing meshes as the need arises; what you describe seems more fitting of the already-established style of environment building that conforms to the use of in-app brushes.