I think a more performant method of volumetric rendering wouldn’t be raymarching, which requires many samples per pixel across slices through the model, but one that uses the distance between the front and back faces of a two-sided volumetric object to derive a thickness and drive different factors in the shader from those two depth values. I’m not sure how to handle a method like this technically in UE4’s renderer, since you’d need the front and back face of every object rendered behind a pixel, but there should be a way to measure the depth of the object between the front and back polygons to gauge a volume. That way, instead of using very complex 3D textures that few people know how to author, anyone could just model clouds in ZBrush, throw them in the engine, put a material on them, and get a volumetric result right away.
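Just to sketch the math I mean (this is not UE4 material code, only a rough illustration): if you have the depths of the front and back faces behind a pixel, their difference is the thickness, and something like a Beer-Lambert falloff turns that thickness into opacity. The `density` parameter here is made up for the example; in a real material the front depth would come from the pixel being shaded and the back-face depth from some second pass or trick I don't know how to do yet.

```python
import math

def volume_opacity(front_depth, back_depth, density):
    """Approximate opacity from the 'thickness' of a volume behind a pixel.

    front_depth / back_depth: depths (world units) of the front and back
    faces of the volumetric mesh along the view ray for this pixel.
    density: hypothetical extinction coefficient per world unit (assumption
    for this sketch, not a UE4 parameter).
    """
    thickness = max(back_depth - front_depth, 0.0)
    # Beer-Lambert: transmittance falls off exponentially with thickness,
    # so thicker parts of the cloud mesh read as more opaque.
    transmittance = math.exp(-density * thickness)
    return 1.0 - transmittance

# A thin edge of the cloud stays wispy, the thick core goes opaque:
thin = volume_opacity(10.0, 10.5, 1.0)
thick = volume_opacity(10.0, 14.0, 1.0)
```

The same thickness value could drive other shader factors too (color tint, scattering amount), which is the appeal: one cheap depth difference instead of dozens of raymarch steps.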
While raymarching might be great for super-advanced scientific visualizations, a polygonal volume rendering technique would be much easier to work with, and more performant, for game development, if such a thing could be done.
If not, I would definitely like to get something like NVIDIA’s Flex materials to work inside UE4’s material editor, whether through GPU particles or anything else. It looks stunning!