BSP Brush vs Static Mesh

I just got started using UE4 and something I'm asking myself over and over again is when to use Static Meshes instead of BSP Brushes. What exactly is the difference?
I understand that I can use Static Meshes for more detailed stuff created in external programs like Maya, and that's fine, but when it comes to simpler stuff, is there any advantage to using Static Meshes? The thing is that I really like BSP Brushes for their ability to select and manipulate every single vertex (is something like that even possible in UE4 with a Static Mesh?), but so far I couldn't manage to create an instance of one in a Class Blueprint, for example (in the Level Blueprint there is no such problem).

Not trying to push you away, but there are quite a lot of discussions on this on other forums, since the technology has been around for quite some time. Do some digging on Google; there seem to be a lot of good discussions, even one on the Epic Forums.

I'm still pretty new too, so I don't want to give you any advice I can't back up, but I have a feeling that using a lot of BSP brushes in a big map would be bad for performance, given the extra computation they require. That's just an educated guess, though; you'll be better off researching this further.

Hey derMischa,

An entire level usually starts as BSP blocks; that's called "blocking out" the level. BSP helps you give shape to your future level, and after that 60-70 percent of the BSP brushes should be replaced with static meshes. BSP is hungry for computing power and hard to optimize, but static meshes are not; Unreal Engine works perfectly with static meshes and shows better performance with them. So use BSP for minor pieces of the level and don't use it everywhere in the release version: just make the shape, then cover it with static assets.

I guess my problem is that I really missed out on this development. I worked quite a lot in level design back in the days of Quake 3 and Doom 3, and that's why I'm just not used to it yet. I searched the forum for it and, surprisingly (for me), it seemed that nobody shared my issue :wink: It seems the "change" already happened some generations ago.

So do I understand it right that I should prebuild almost everything in, for example, Blender (every wall, door, etc.)? Do I also have to save a new version of the Static Mesh for each material I want to use on it? Is there a way to adjust the material in UE4, or do I have to do that beforehand as well?

BSP is quite slow and especially very limited in terms of control (you can't control the way it is rendered, what type of collision it has, how it handles lighting, and so on). It also offers no smoothing (patches, or whatever they were called in Quake) and only supports hard edges.

In general BSP is fine for minor stuff here and there or for prototyping floor plans, but you'd be much better off using mostly models otherwise.

This change indeed happened back in 2002 already. Gears of War and UT3 are examples of a hybrid approach: part BSP, but mostly meshes. Nowadays I work 100% in meshes, though. It will take you some time to get used to, but it goes much faster once you do.

I have a series of videos coming out soon on my site about building the buildings in Solus with just mesh. You should get a pretty good idea from that how typical Quake’ish style buildings are approached nowadays with modular meshes only.

Hey guys, I have one beginner question.

I've made a simple building with the geometry tools, with a roof etc.
I see that I can texture the geometry, but it is pretty hard to apply a material to every geometry brush of my building.

So is it better to convert every BSP brush to a static mesh and then apply materials? Better for optimization and workflow?

A lot of people nowadays undervalue BSP / CSG level construction, forgetting that static meshes have to be modeled and exported first, while BSP brushes are always there and can be combined into a wide variety of shapes. No need to wait for an environment artist (or have one on the team) for level prototyping or gameplay-mechanics tweaking until it's all done.

I am hoping Epic will bring BSP / CSG tools to at least Radiant’s level.

You can (and should) always block out the level using BSP and then convert it to Static Meshes. It's faster than the import/export workflow: when you're satisfied with the general shape of the BSP model, you can convert it to a static mesh and send it to an external program if you need more detail on the model.

Just don’t leave BSP models in the final level, always convert them to Static Meshes.


Thanks guys. Yesterday I was reading some tutorials, and it seems that static meshes are better than BSP (shadows, texturing, etc.).


Here is an example where a BSP is converted to a static mesh.

But why can't I apply a material to that mesh?
Do I have to do something to a mesh like that first before applying a material?

For some reason the process isn't completely automatic. After you convert, open up the mesh itself.
This is my personal workflow:

  1. open the mesh
  2. check Use Full Precision UVs (I've been surprised how much of a difference this makes)
  3. click apply (this will fix the indicator that the lightmap is invalid, but it doesn’t actually fix it)
  4. Scroll down to Static Mesh Settings and set the light map resolution to 64 or higher (the default is 4 and that is way too low)
  5. Set the light map coordinate index to 1. (this really fixes the lightmap issues)

Unfortunately I have to do this for every single mesh I create via BSP. I don’t know why we can’t do all of this automatically.
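For what it's worth, steps 4 and 5 can in principle be scripted with the editor's Python plugin, so you don't have to click through every converted mesh by hand. This is only a sketch: it runs inside the UE editor (not standalone), the asset path is a placeholder, and the snake_case property names should be verified against your engine version's Python API before relying on them.

```python
# Editor-only sketch: requires the UE editor's Python scripting plugin.
# "/Game/Meshes/SM_FromBSP" is a placeholder path, not a real asset.
import unreal

mesh = unreal.EditorAssetLibrary.load_asset("/Game/Meshes/SM_FromBSP")

# Steps 4 and 5 from the list above: raise the lightmap resolution from
# the tiny default, and point the lightmap at UV channel 1 instead of 0.
mesh.set_editor_property("light_map_resolution", 64)
mesh.set_editor_property("light_map_coordinate_index", 1)

unreal.EditorAssetLibrary.save_loaded_asset(mesh)
```

Looping it over every asset returned by `unreal.EditorAssetLibrary.list_assets` would cover a whole folder of converted brushes at once.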

Yes, I figured out this yesterday :slight_smile:

Btw, I had some bug with the light map resolution; now it works pretty well. Now I'm learning how to use multiple textures on meshes created inside UE4, etc.

And now I see that without learning modeling, UE4 is useless :smiley:

So I created a simple house without a roof in UE4 (with holes for windows etc.), and now I will learn 3ds Max to finish that house.

For now, I find 3ds Max easier than Blender. Maybe because I'm working in AutoCAD at work :smiley:

If your lightmap is buggy, bump up the resolution. 64 might be OK for smaller meshes (or ones that will only be lit dynamically), but I put 512 on one of mine yesterday. Also make sure you set the lightmap channel to 1; if you leave it at 0 it will be buggy.


Oh yes, just one question: do you put your textures on models outside the engine and then import the finished model, or do you just create the mesh, set up the material in the engine, and then put it on the mesh?

I’m working on some small game for android mobile.

Almost 3 years later and no one could take 2 seconds to answer. The Forums for Unity are vastly superior even if the program isn’t.

What are you talking about? It was answered in multiple ways… For performance/flexibility reasons: don’t use BSPs any more, use static meshes.


I think it varies depending on many obvious factors. You think this is bad? LY/CE are pretty abysmal, YMMV.
If the forum isn't answering, a good bet is answers.unrealengine, which may be faster; again, YMMV.

Good info here, yes, but I think nextworld may have been referring to tomislav's question, which by now I imagine he found the answer to elsewhere ;0-0

When I google BSP, I get a lot of information on the way id-tech-style engines work (Quake, Half-Life 2, etc.) using Binary Space Partitioning. Based on what I'm reading, it makes no sense that BSP should be slower than static meshes. In fact, if you only have static meshes, how does the engine calculate visibility? The whole point of BSP brushes (I thought) is that the engine can use them as solid room walls to determine exactly which part of the level is visible from any other part, and optimise accordingly. Converting everything to static meshes would throw all that out, no? There would be no space partitioning to figure out what to cull!

I mean people saying static mesh is faster than BSP must be right, but it doesn’t make any intuitive sense to me, unless unreal for some reason doesn’t bake them out at all.

I know this is like 4 years old now, but it still comes up when searching BSP stuff, and I hate that it's not answered.

BSP divides the world into partitions based on wall angles and such, but the problem is that it's cumulative. So it's great for fairly basic, sparse stuff, but if you have a lot of brushes, you have a lot of partitions, and some of them get really small.

Imagine, if you will, slicing a level along a wall. Then another wall intersects that one, so you slice along it too, then another, then another; but all their other walls also intersect each other, and pretty soon you have a LOT of spaces. It's almost exponential.

For very complicated levels with good detail, not just grey-boxed basics, that gets basically impossible pretty quickly, even as fast as searching the tree can be.
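To make that blow-up concrete, here is a toy 2D BSP builder (a Python sketch with made-up names, not engine code). Each wall segment becomes a splitting line, and any segment crossing it gets cut in two, so a small grid of crossing walls already produces far more leaf regions than walls:

```python
# Toy 2D BSP: each segment in turn is a splitter; segments that span the
# splitter's infinite line are cut, and both halves recurse. Real engines
# choose splitters heuristically precisely to limit this growth.
EPS = 1e-9

def side(p, a, b):
    """+1 / -1 / 0: which side of the infinite line through a->b is p on?"""
    cross = (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])
    return 1 if cross > EPS else (-1 if cross < -EPS else 0)

def cut(seg, a, b):
    """Split seg at its intersection with the line through a->b."""
    p, q = seg
    d = (q[0]-p[0], q[1]-p[1])
    n = (a[1]-b[1], b[0]-a[0])                      # line normal
    t = (n[0]*(a[0]-p[0]) + n[1]*(a[1]-p[1])) / (n[0]*d[0] + n[1]*d[1])
    m = (p[0] + t*d[0], p[1] + t*d[1])
    return (p, m), (m, q)

def mid(seg):
    return ((seg[0][0]+seg[1][0])/2, (seg[0][1]+seg[1][1])/2)

def leaf_count(segments):
    """Build a BSP from the segments and count the convex leaf regions."""
    if not segments:
        return 1
    (a, b), rest = segments[0], segments[1:]
    front, back = [], []
    for seg in rest:
        s0, s1 = side(seg[0], a, b), side(seg[1], a, b)
        if s0 >= 0 and s1 >= 0:
            front.append(seg)
        elif s0 <= 0 and s1 <= 0:
            back.append(seg)
        else:                                       # spans the splitter: cut it
            for half in cut(seg, a, b):
                (front if side(mid(half), a, b) > 0 else back).append(half)
    return leaf_count(front) + leaf_count(back)

# 3 horizontal + 3 vertical walls that all cross: 16 regions from 6 walls.
grid = ([((0, y), (10, y)) for y in (0, 1, 2)] +
        [((x, -1), (x, 3)) for x in (1, 2, 3)])
# 6 parallel walls that never cross: only 7 regions.
parallel = [((0, y), (10, y)) for y in range(6)]
print(leaf_count(grid), leaf_count(parallel))       # 16 7
```

Same number of walls, very different tree sizes: crossing geometry multiplies regions, which is exactly why dense, detailed brushwork scales so badly.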

The alternative is static meshes. The reason meshes can be faster is that they aren't drawn in full detail when doing checks; they have what's called a bounding box. This is just a box stretched to enclose the mesh as tightly as possible, to, well, determine its bounds. Meshes are also enclosed in a bounding sphere, for extremely fast distance checks.

Firstly, this lets you do view frustum culling, which means you only need to render what's actually going to be on the screen, and you can forget about testing anything that's behind you; you can even do that as a binary search.

You can then do a massively parallel, absurdly fast distance check on all the remaining visible objects. This is what GPUs are for, huge parallel operations like this, an innovation not available at the time of id's BSP graphics revolution.
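Those bounds tests can be sketched in plain Python (invented names, not the UE API): build an axis-aligned bounding box and bounding sphere from a mesh's vertices, then reject anything whose sphere falls entirely outside a frustum plane.

```python
import math

def bounds(verts):
    """AABB plus a bounding sphere for a vertex cloud."""
    mins = tuple(min(v[i] for v in verts) for i in range(3))
    maxs = tuple(max(v[i] for v in verts) for i in range(3))
    center = tuple((mins[i] + maxs[i]) / 2 for i in range(3))
    radius = max(math.dist(center, v) for v in verts)
    return mins, maxs, center, radius

def outside(center, radius, plane):
    """plane = (nx, ny, nz, d); points with n.p + d >= 0 are inside.
    The sphere is fully outside if its center is more than r behind."""
    n, d = plane[:3], plane[3]
    return sum(n[i] * center[i] for i in range(3)) + d < -radius

def frustum_cull(meshes, planes):
    """Keep only meshes whose bounding sphere touches the frustum."""
    visible = []
    for name, verts in meshes:
        _, _, c, r = bounds(verts)
        if not any(outside(c, r, p) for p in planes):
            visible.append(name)
    return visible

def cube(cx, cy, cz, half=1.0):
    """Eight corner vertices of an axis-aligned cube."""
    return [(cx + sx*half, cy + sy*half, cz + sz*half)
            for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]

# Camera at the origin looking down +X: a near plane at x=1 and a far
# plane at x=100 (the four side planes are omitted to keep this short).
planes = [(1, 0, 0, -1), (-1, 0, 0, 100)]
meshes = [("ahead", cube(10, 0, 0)), ("behind", cube(-10, 0, 0))]
print(frustum_cull(meshes, planes))                 # ['ahead']
```

Note the full vertex list is only touched once to build the bounds; after that, every cull decision is a handful of multiply-adds per plane, which is the shape of work GPUs (and SIMD CPUs) chew through in bulk.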

Unreal basically checks against a depth buffer for this purpose, to allow for gaps in walls for instance.

By having the hardware test against these very simple bounds (basically just a few vectors), it can instantly determine what is and is not occluded, then test slightly more rigorously against the box bounds, and then finally draw the appropriate static mesh that it already has in GPU memory, acting basically as a template for each mesh.

BSP cannot do this, each and every frame must be sent to be drawn as the relevant walls are not known ahead of the binary search. You are basically generating a mesh on the fly, every time.

Static meshes basically keep all the work inside the GPU, where memory is extremely fast, and avoid the CPU-GPU bus as much as possible, sending only very simple numbers where changes occur (such as a mesh's location or rotation changing, which are just a few vectors). That lets the device that can draw billions of triangles simply do a depth check of a few thousand, or maybe a hundred thousand, numbers against a depth buffer, internally: the same kind of operation it would do for a brightness comparison of a texture or a roughness-mask lerp.

Now, I'm not 100% sure about UE5's Nanite, but I think it takes this to the next level by also dividing meshes into binary trees of groups of polys; when it comes time to draw them, it searches the visible groups down to pixel-sized polys, so it doesn't even have to draw the whole mesh, which is very clever indeed. It does something like that, anyway.

But yeah, hopefully that answers the question both for yourself and for any future explorers who come across this post.