A million of anything: not polys but static meshes, points or billboards

Hello all,

This is perhaps my first post. I have searched the forum and picked up some insights into the issue I’m having, but I find it specific enough to ask directly. I’m doing fractals, the Mandelbrot set in particular, and I have used a static mesh for each point sampled in the complex plane. I have also used billboards. The algorithm works: a 256 by 256 sample grid is fine, and I get a 3D representation of the Mandelbrot set.

But when I get to the (for me) fun part regarding level of detail, a 1024 by 1024 sample count with a static mesh per sample point, I run out of space on my graphics card. I’m assuming it’s because of the way materials are handled or the way static meshes are set up to be rendered (even though, for now, I’m just using the default material). When I first really cranked up the sample count the editor froze; since I did this in the ‘begin play’ function call, I assumed it was taking too long, so I spawned only a few thousand at a time instead of the complete set, and then I hit the memory issue above.

Is there a way around this?

A million years’ good luck to someone, anyone, who could point me to a link or give information on how to get my favorite, one million or so static meshes representing the Mandelbrot set, into a map. So far I have tested with a cube exported as an FBX from ZBrush, with just the default material, and with billboards, but I have found the million sample count problematic with both.

Also, I would of course like to give some indication of escape value via color/material. Should I use 256 different materials, or would material instances do the trick?

Thanx.

That’s going to depend on two things: polygon count and draw calls. You can manage draw calls using Blueprints, so that you can create a large number of low-poly objects and still get good performance. But if you still end up with a high poly count, then that won’t help.

Hmmm, I’m using ‘spawn actor’, which to my understanding is somewhat different from draw calls per se? So … when doing 256 by 256 samples I end up with 65,536 actors at some point; seems I should find a way to aggregate the samples some other way, for starters?

Thanks Darth. And perhaps I’m asking either the wrong question or asking the question the wrong way. Simpler might be: what’s the best way to get a million cubes into UE4?

You’ll need to create instanced static meshes instead of regular Static Mesh actors; you should be able to spawn a few million that way if it’s just a super simple mesh, since the whole batch is drawn in a handful of draw calls.
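For what it’s worth, here’s a minimal C++ sketch of that approach (untested; AMandelbrotGrid, CubeMesh, and MandelbrotEscape are my own placeholder names, not engine API): one UInstancedStaticMeshComponent holding the cube, one instance added per sample.

```cpp
// Minimal sketch (my names, not engine API): AMandelbrotGrid is a plain AActor
// subclass with a UPROPERTY UStaticMesh* CubeMesh assigned in the editor, and
// MandelbrotEscape() is your own iteration code returning a value in [0, 1].
#include "Components/InstancedStaticMeshComponent.h"
#include "GameFramework/Actor.h"

void AMandelbrotGrid::BeginPlay()
{
    Super::BeginPlay();

    UInstancedStaticMeshComponent* ISM =
        NewObject<UInstancedStaticMeshComponent>(this, TEXT("Samples"));
    ISM->SetStaticMesh(CubeMesh);
    ISM->SetupAttachment(RootComponent);
    ISM->RegisterComponent();

    const int32 N = 1024;        // samples per axis
    const float Spacing = 100.f; // world units between cube centers

    for (int32 y = 0; y < N; ++y)
    {
        for (int32 x = 0; x < N; ++x)
        {
            const float Escape = MandelbrotEscape(x, y, N); // your fractal code

            FTransform T;
            T.SetLocation(FVector(x * Spacing, y * Spacing, 0.f));
            // Stretch the cube vertically to show the escape value.
            T.SetScale3D(FVector(1.f, 1.f, FMath::Max(Escape, 0.01f)));
            ISM->AddInstance(T);
        }
    }
}
```

On the color question from earlier: the whole component shares one material, so 256 separate materials would defeat the instancing. In newer UE4 versions (4.25+) you can instead set NumCustomDataFloats, pass the escape value per instance with ISM->SetCustomDataValue(...), and read it in the material via the PerInstanceCustomData node.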

Another option is to create, say, 1024 procedural mesh actors, arranged in a 32x32 grid.
Then, you can generate a 32x32 block custom mesh for each of those actors; i.e. you’d put 24 vertices and 12 triangles into the procedural mesh for each “pixel/block” based on the values.
The reason you need 24 verts per pixel is that each block is 6 faces, each face has 4 vertices, and there is no sharing if you have flat normals or unique UVs or such (which you’ll definitely want!)
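To make that concrete, here’s a rough sketch of building one such block for UProceduralMeshComponent (my own helper, not engine code; the winding order is a guess, so flip it if faces render inside-out):

```cpp
// Appends one flat-shaded block (24 verts, 12 tris) to the arrays that will
// later be fed to UProceduralMeshComponent::CreateMeshSection.
#include "ProceduralMeshComponent.h"

static void AppendBlock(const FVector& Min, const FVector& Max,
                        TArray<FVector>& Verts, TArray<int32>& Tris,
                        TArray<FVector>& Normals, TArray<FVector2D>& UVs)
{
    // The block's eight corners.
    const FVector C[8] = {
        FVector(Min.X, Min.Y, Min.Z), FVector(Max.X, Min.Y, Min.Z),
        FVector(Max.X, Max.Y, Min.Z), FVector(Min.X, Max.Y, Min.Z),
        FVector(Min.X, Min.Y, Max.Z), FVector(Max.X, Min.Y, Max.Z),
        FVector(Max.X, Max.Y, Max.Z), FVector(Min.X, Max.Y, Max.Z) };
    // Four corner indices per face, plus that face's flat normal.
    const int32 F[6][4] = { {0,1,5,4}, {1,2,6,5}, {2,3,7,6},
                            {3,0,4,7}, {4,5,6,7}, {3,2,1,0} };
    const FVector N[6] = { FVector(0,-1,0), FVector(1,0,0), FVector(0,1,0),
                           FVector(-1,0,0), FVector(0,0,1), FVector(0,0,-1) };

    for (int32 f = 0; f < 6; ++f)
    {
        const int32 Base = Verts.Num();
        for (int32 v = 0; v < 4; ++v)
        {
            Verts.Add(C[F[f][v]]);
            Normals.Add(N[f]); // no sharing: each face gets its own 4 verts
            UVs.Add(FVector2D(v == 1 || v == 2 ? 1.f : 0.f,
                              v >= 2 ? 1.f : 0.f));
        }
        // Two triangles per face; flip the order if faces come out inside-out.
        Tris.Add(Base); Tris.Add(Base + 1); Tris.Add(Base + 2);
        Tris.Add(Base); Tris.Add(Base + 2); Tris.Add(Base + 3);
    }
}
```

Loop AppendBlock over a 32x32 patch of samples (block height = escape value), then hand the arrays to the component with Mesh->CreateMeshSection(0, Verts, Tris, Normals, UVs, TArray&lt;FColor&gt;(), TArray&lt;FProcMeshTangent&gt;(), false).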

So, 1024 custom meshes, each with 24 * 32 * 32 = 24,576 vertices and 12 * 32 * 32 = 12,288 triangles. You can actually share the triangle list between all the instances if you write this in raw GL or Direct3D; I don’t know if that works in Unreal Engine.
The amount of memory is 2 * 3 * 12 * 32 * 32 = 73,728 bytes, i.e. about 72 kB per index buffer with 16-bit indices (so, roughly 72 MB for index buffers) and 32 * 24 * 32 * 32 = 786,432 bytes = 768 kB per vertex buffer at 32 bytes per vertex (so, 768 MB for vertex buffers.)
You ought to be able to fit this on modern graphics cards with 2 GB of RAM or more. For various reasons, you’re unlikely to fit all this, plus frame buffers, plus perhaps textures, plus overhead, in a 1 GB graphics card.
This makes a lot of assumptions about vertex components and such; it may be that Unreal really, really wants more components for tangent bases and the like, and thus the size would grow accordingly. Also, if you can share vertex buffers, using the same vertex buffer for UVs and normals and tangents across all the blocks is totally possible, which saves a lot of memory.
Separately, with enough smarts in a vertex shader, you can instead use a single static mesh that is the data for each pixel, and extend the top side of each block (representing a pixel/sample in the fractal) up by the value of that sample in the shader. This will save a lot of memory, but will require one texture read per vertex in the vertex shader (which isn’t really a problem these days.)
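Since raw Direct3D came up: here’s a sketch of that last idea in plain HLSL (all names are illustrative; in Unreal you’d approximate the same thing with World Position Offset in a material rather than a hand-written shader):

```hlsl
// Illustrative vertex shader: one instanced unit block per fractal sample.
// The top of each block (z == 1) is pushed up by the escape value read from
// a texture -- one texture read per vertex, as noted above.
Texture2D<float> EscapeTex : register(t0); // normalized escape value per sample
SamplerState     PointSamp : register(s0);

cbuffer Frame : register(b0)
{
    float4x4 ViewProj;
    float2   GridSize;    // e.g. (1024, 1024)
    float    HeightScale; // world-space height of escape value 1.0
};

float4 main(float3 pos : POSITION, uint inst : SV_InstanceID) : SV_Position
{
    // Which grid cell this instance represents.
    float2 cell = float2(inst % (uint)GridSize.x, inst / (uint)GridSize.x);

    // One fetch per vertex; SampleLevel because vertex shaders have no gradients.
    float escape = EscapeTex.SampleLevel(PointSamp, (cell + 0.5) / GridSize, 0);

    // Unit block: xy and z all in [0, 1]. Only the top face's vertices move up.
    float3 world = float3(cell + pos.xy, pos.z * escape * HeightScale);
    return mul(float4(world, 1.0), ViewProj);
}
```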

So, many ways to skin that cat :)

Mk, think a proper plan of attack would be to start with Ricewagon’s instanced static meshes and then evolve the code, working my way up to jwatte’s advice on shaders.

Thanx Ricewagon and jwatte for the thoughtful responses -> back to coding. Will post some pics of the results.