Voxel rendering and computing on GPU

Hi,

I’m really excited by the possibility of computing voxel meshes directly on the GPU with OpenCL (by this I mean running a marching cubes algorithm over a voxel grid).

Does anybody know whether Unreal will offer a way to use the graphics card directly, or whether we will always have to use a plugin? Or maybe it will be possible to use a procedural mesh that is automatically managed by the GPU even though we write C++ code for the CPU?

Currently, voxel work is difficult in Unreal because we can only use the CPU to do the hard work. Of course an OpenCL plugin exists, but… it’s not out of the box. Will it be in the near future? If not, why not?
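
For context, the kind of per-cell work I’d like to push to the GPU looks roughly like this (a minimal, purely illustrative sketch of the marching cubes classification pass in plain C++; the density function is a placeholder, and in OpenCL each cell would simply become one work-item):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Placeholder density field: positive inside the surface, negative outside.
// A real voxel terrain would sample the voxel grid here instead.
static float Density(float x, float y, float z)
{
    x -= 8.0f; y -= 8.0f; z -= 8.0f;
    return 8.0f - std::sqrt(x * x + y * y + z * z); // a sphere of radius 8
}

// First pass of marching cubes: compute the 8-bit case index of every cell.
// Each cell is independent, which is why this maps so naturally onto one
// GPU work-item (or compute shader thread) per cell.
static std::vector<uint8_t> ClassifyCells(int GridSize)
{
    std::vector<uint8_t> Cases((size_t)GridSize * GridSize * GridSize);
    for (int z = 0; z < GridSize; ++z)
        for (int y = 0; y < GridSize; ++y)
            for (int x = 0; x < GridSize; ++x)
            {
                uint8_t CaseIndex = 0;
                for (int Corner = 0; Corner < 8; ++Corner)
                {
                    const int cx = x + (Corner & 1);
                    const int cy = y + ((Corner >> 1) & 1);
                    const int cz = z + ((Corner >> 2) & 1);
                    if (Density((float)cx, (float)cy, (float)cz) > 0.0f)
                        CaseIndex |= (uint8_t)(1 << Corner);
                }
                Cases[((size_t)z * GridSize + y) * GridSize + x] = CaseIndex;
            }
    // A second pass would use the standard marching cubes edge/triangle
    // tables to emit vertices for every non-trivial case (0 and 255 skip).
    return Cases;
}

int main()
{
    const std::vector<uint8_t> Cases = ClassifyCells(16);
    return Cases.empty() ? 1 : 0;
}
```

The second pass that emits triangles from the standard tables is just as parallel, so keeping all of it on the CPU feels like wasted hardware.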

Any thoughts on this subject will be appreciated!

Thank you

Hey,

I too get very excited by the possibility of using the GPU to compute voxels, but as far as I can tell there is no implementation as such that uses OpenCL to do the heavy lifting. The obvious benefit, other than the CPU cycles saved, is that OpenCL is cross-vendor, meaning both Nvidia and AMD cards that support the library can use it.

I can’t speak for Epic, but as far as I can tell OpenCL is certainly not on the immediate roadmap as an important feature to add, as much as I wish it was.

I personally was hoping to use OpenCL to create a dynamic destructible terrain/environment upon which a user can dig or excavate soil in a semi-realistic manner. Another use for OpenCL that has crossed my mind is managing a dynamic cuttable grass system.

I hope it is properly adopted rather than a CUDA-based approach, simply to keep it fair and open for everyone who wants to avail of the power this library provides.

You have source code access. If a feature is missing, you can add it yourself.

For the love of programming, please avoid the use of polygons and use raycasting instead. The number of people who insist on turning voxels into polygons is depressing. That approach kills the whole point of voxels. Also, what Minecraft does is not really marching cubes.

http://voxels.blogspot.com/
https://youtube.com/watch?v=ij0vw8yTCsY
https://www.youtube.com/watch?v=U5yE-eaUjyk

Ray casting?

What do you mean by ray casting?

The reason for using polygons is simply that the graphics card is made for them.
Using voxels simply means we can dynamically generate meshes.
If you want to use true voxel rendering, you’ll hit a memory limitation soon, as Atomontage demonstrates.
We can’t have as much detail with voxel rendering techniques as we currently have with triangles due to memory limitations (and probably processing power, but memory is the big deal here if we want to zoom in further and further and keep high detail).

I think he means something along the lines of this: rather than creating polygons and triangles from voxel-generated data, you use the actual voxel data and render that. (Please correct me if I am wrong, NegInfinity, I’m pretty interested in this whole thing :slight_smile: )

Your concerns over memory limitations are legitimate, but with some clever use of occlusion culling and the like you could probably manage to keep it all within limits and maintain a playable FPS. It probably depends highly on what you are trying to achieve, as both approaches have their uses.

The memory issue is not necessarily about rendering (RAM), but about storing (on the hard drive) the voxels modified by the user (not just the emptied ones, because letting the user reconstruct the terrain is interesting too). If the user can modify a voxel for every cubic centimeter, then saving a cubic kilometer of modified voxels is just too big even with an extreme compression algorithm: at 1 bit per voxel, that is 10^15 voxels, roughly 125 terabytes, and in a destructible world we want the player to be able to affect more than that. The classic voxel size is 1 meter, so at that resolution a cubic kilometer is about 119 MiB if each voxel is 1 bit. And I’m not sure it’s even possible to save a voxel as a single bit…
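
A back-of-the-envelope check of those figures (a quick sketch, assuming exactly 1 bit per voxel and nothing else stored):

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    // 1 km^3 at 1 cm voxels: 100,000 voxels per side.
    const uint64_t FineSide   = 100000ULL;
    const uint64_t FineVoxels = FineSide * FineSide * FineSide;         // 10^15 voxels
    const double   FineBytes  = (double)FineVoxels / 8.0;               // 1 bit per voxel

    // 1 km^3 at the "classic" 1 m voxels: 1,000 voxels per side.
    const uint64_t CoarseSide   = 1000ULL;
    const uint64_t CoarseVoxels = CoarseSide * CoarseSide * CoarseSide; // 10^9 voxels
    const double   CoarseBytes  = (double)CoarseVoxels / 8.0;

    std::printf("1 cm voxels: ~%.0f TB\n", FineBytes / 1e12);                 // ~125 TB
    std::printf("1 m  voxels: ~%.0f MiB\n", CoarseBytes / (1024.0 * 1024.0)); // ~119 MiB
    return 0;
}
```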

But if we want to render a non-destructible terrain, I think it could be really cool to render voxels as voxels and not as polygons.

Yes, exactly that. You don’t render polygons (or you render two “portal” polygons and look at the voxel world through them), and you calculate scene depth/lighting using the voxel data.

The videos I linked in the previous post draw voxels without using polygons.
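
For anyone wondering what that looks like in code, here is a very naive sketch of the idea (plain C++, fixed-step raymarching against a made-up IsSolid() lookup; the videos above use far more sophisticated traversal and level of detail):

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Hypothetical voxel lookup: solid below a rolling "terrain" height.
// A real renderer would read an actual voxel grid or octree here.
static bool IsSolid(int x, int y, int z)
{
    if (x < 0 || y < 0 || z < 0 || x >= 64 || y >= 64 || z >= 64)
        return false;
    const float Height = 20.0f + 6.0f * std::sin(x * 0.2f) * std::cos(z * 0.2f);
    return (float)y < Height;
}

// March a ray in small fixed steps and return the distance to the first
// solid voxel, or -1 if nothing was hit. A real implementation runs this
// per pixel in a shader and uses a DDA/octree traversal instead of fixed
// steps, but the idea is the same: depth (and from it, lighting) comes
// straight from the voxel data, with no triangles involved.
static float MarchRay(Vec3 Origin, Vec3 Dir, float MaxDist)
{
    const float Step = 0.25f;
    for (float t = 0.0f; t < MaxDist; t += Step)
    {
        const int x = (int)std::floor(Origin.x + Dir.x * t);
        const int y = (int)std::floor(Origin.y + Dir.y * t);
        const int z = (int)std::floor(Origin.z + Dir.z * t);
        if (IsSolid(x, y, z))
            return t;
    }
    return -1.0f;
}

int main()
{
    const float Hit = MarchRay({32.0f, 50.0f, 2.0f}, {0.0f, -0.6f, 0.8f}, 200.0f);
    std::printf("first hit at t = %.2f\n", Hit);
    return 0;
}
```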

Nope. That is no longer true. You have a general-purpose computing API for the platform, and you can render pretty much anything you want using shaders.

No, that is an inefficient and narrow-minded use of voxel data.

Voxels, when utilized properly, allow significantly higher complexity than triangles, because voxels can be tiled in 3-dimensional space.

Those videos demonstrate direct voxel rendering, without use of triangles:
https://youtube.com/watch?v=ij0vw8yTCsY
https://www.youtube.com/watch?v=U5yE-eaUjyk

If you attempt to do that with triangles, you’ll quickly hit very high poly counts.

In the same fashion, a Minecraft world constructed in a specific way can produce a very high number of triangles.

The point of voxel data is that, knowing the data density, you know the precise memory requirements for the scene. No matter what you do with the scene, the memory requirement will not get bigger. Voxel data allows you to easily deal with destruction and boolean operations, which are non-trivial problems for triangle meshes.

In contrast with that, a destructible scene based on triangles has a potentially infinite memory requirement.

So, when you polygonize voxel data, you throw away the most useful property of the system.
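
To make the destruction/boolean point concrete, here is a tiny illustrative sketch (a made-up dense 1-bit grid in plain C++, not anyone’s actual engine code): carving a sphere out of the world is just clearing bits, and the memory footprint never changes.

```cpp
#include <cstdint>
#include <vector>

// A dense 1-bit-per-voxel grid. Memory use is fixed at roughly Size^3 / 8
// bytes, no matter how heavily the world gets edited.
struct VoxelGrid
{
    int Size;
    std::vector<uint8_t> Bits;

    explicit VoxelGrid(int InSize)
        : Size(InSize), Bits(((size_t)InSize * InSize * InSize + 7) / 8, 0xFF) {} // start fully solid

    size_t Index(int x, int y, int z) const { return ((size_t)z * Size + y) * Size + x; }
    bool Get(int x, int y, int z) const { const size_t i = Index(x, y, z); return (Bits[i / 8] >> (i % 8)) & 1; }
    void Clear(int x, int y, int z)     { const size_t i = Index(x, y, z); Bits[i / 8] &= (uint8_t)~(1 << (i % 8)); }
};

// Boolean subtraction: remove a sphere of voxels (say, an explosion crater).
// On triangle meshes this is a genuinely hard CSG problem; here it is a loop.
static void CarveSphere(VoxelGrid& Grid, int cx, int cy, int cz, int Radius)
{
    for (int z = cz - Radius; z <= cz + Radius; ++z)
        for (int y = cy - Radius; y <= cy + Radius; ++y)
            for (int x = cx - Radius; x <= cx + Radius; ++x)
            {
                if (x < 0 || y < 0 || z < 0 || x >= Grid.Size || y >= Grid.Size || z >= Grid.Size)
                    continue;
                const int dx = x - cx, dy = y - cy, dz = z - cz;
                if (dx * dx + dy * dy + dz * dz <= Radius * Radius)
                    Grid.Clear(x, y, z);
            }
}

int main()
{
    VoxelGrid Grid(64);
    CarveSphere(Grid, 32, 32, 32, 10);   // dig a hole; memory footprint is unchanged
    return Grid.Get(32, 32, 32) ? 1 : 0; // centre should now be empty
}
```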

And that’s why you compress the data and avoid storing every single voxel unless it is absolutely necessary. If a cubic kilometer is filled with air, then you only need a few bytes to represent it. Use octrees for compression and voxel rendering.
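
A minimal sketch of that octree idea (purely illustrative, not any particular implementation): uniform regions stay as a single leaf, so an empty cubic kilometer costs one node instead of 10^15 individual voxels.

```cpp
#include <array>
#include <cstdint>
#include <memory>

// Sparse voxel octree node: uniform regions (all air or all one material)
// stay as a single leaf instead of being subdivided down to voxel level.
struct OctreeNode
{
    bool bIsLeaf = true;
    uint8_t Material = 0;                                // 0 = air; meaningful when bIsLeaf
    std::array<std::unique_ptr<OctreeNode>, 8> Children; // used when !bIsLeaf

    // Split a leaf into 8 children that inherit its material (done lazily,
    // only when an edit actually needs finer resolution).
    void Subdivide()
    {
        if (!bIsLeaf) return;
        bIsLeaf = false;
        for (auto& Child : Children)
        {
            Child = std::make_unique<OctreeNode>();
            Child->Material = Material;
        }
    }

    // If all children ended up as identical leaves, merge them back.
    void TryCollapse()
    {
        if (bIsLeaf) return;
        for (const auto& Child : Children)
            if (!Child || !Child->bIsLeaf || Child->Material != Children[0]->Material)
                return;
        Material = Children[0]->Material;
        for (auto& Child : Children) Child.reset();
        bIsLeaf = true;
    }
};

int main()
{
    OctreeNode Root;    // a single leaf = an entirely empty region
    Root.Subdivide();   // an edit forces subdivision where needed...
    Root.TryCollapse(); // ...and identical children collapse again
    return Root.bIsLeaf ? 0 : 1;
}
```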


My point is - AT LEAST experiment with alternative approaches.

Higher complexity means higher memory consumption.

Do you know why that is possible? Because rendering the same voxel group (here, the same character) a million times doesn’t consume more memory.
But in practice there are no such duplications, so rendering the first video in real time is just impossible if you use different objects: even if you use an octree for LOD, you still have to store that data somewhere, because it isn’t procedural.

Knowing the data density means knowing the limit of the quality you can get, and to get better quality than textures, you need a lot (I mean A LOT) of voxels.

Imagine if the cubic kilometer is filled with anything by the player, and by anything I mean literally anything: nothing regular enough that you could find an algorithm with a high compression ratio.

Yeah, trying to replace polygons with voxels is something really interesting, probably something I would do my master’s thesis on.
But I deeply think voxels are not game-ready and won’t be until people have (hundreds of) terabytes of SSD.

Finally, if in a game we can accept not being able to reconstruct terrain, only being able to destroy it, there is a possibility for voxel rendering. If we only have to save the destroyed (empty) voxels, then we can efficiently use an octree and bit flags. For example, a level-1 chunk of the octree gets 1 bit saying it has been modified, then a byte flags which of its level-2 sub-chunks are modified, and so on. And logically, it’s impossible to have scattered destroyed voxels, because we can’t dig out a filled voxel underneath another filled one.
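
A rough sketch of that flagging scheme (purely illustrative; the depth and the 4×4×4 leaf block size are arbitrary choices of mine): only branches that actually contain destroyed voxels get a flag byte, everything else stays implicit.

```cpp
#include <cstdint>
#include <map>
#include <memory>

// Each chunk stores one byte saying which of its 8 sub-chunks contain at
// least one destroyed voxel, and only those sub-chunks are allocated. At
// the bottom, a 4x4x4 block of voxels fits in a single 64-bit mask.
struct DestroyedChunk
{
    uint8_t ChildMask = 0;                                   // bit i set => sub-chunk i has destruction
    std::map<int, std::unique_ptr<DestroyedChunk>> Children;
    uint64_t VoxelMask = 0;                                  // used only at the deepest level
};

// Mark one voxel as destroyed, creating flag bytes only along its path.
// A chunk at Level covers 4 * 2^Level voxels per side.
static void MarkDestroyed(DestroyedChunk& Node, int x, int y, int z, int Level)
{
    if (Level == 0)
    {
        Node.VoxelMask |= 1ULL << ((z * 4 + y) * 4 + x); // local 4x4x4 coordinates
        return;
    }
    const int Half  = 1 << (Level + 1); // half the chunk size in voxels
    const int Child = (x >= Half ? 1 : 0) | (y >= Half ? 2 : 0) | (z >= Half ? 4 : 0);
    Node.ChildMask |= (uint8_t)(1 << Child);
    auto& ChildNode = Node.Children[Child];
    if (!ChildNode)
        ChildNode = std::make_unique<DestroyedChunk>();
    MarkDestroyed(*ChildNode, x % Half, y % Half, z % Half, Level - 1);
}

int main()
{
    DestroyedChunk Root;
    MarkDestroyed(Root, 37, 5, 12, 4); // one voxel in a 64^3 region (Level 4 => 4 * 2^4 = 64)
    return Root.ChildMask != 0 ? 0 : 1;
}
```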