Your thoughts on and comments about Volume Rendering in Unreal Engine 4.

Hello,

I’m trying to reproduce this but so far I’m not getting something that looks right. Here’s what I’m seeing:

volume.PNG

I double-checked all the nodes and code and I can’t identify what the problem is. I’m using the T_Volume_Wisp_01.tga texture provided in the article. I’d appreciate any pointers.

Thanks!

Check that you have the correct size values for the custom nodes.

It seems you have the wrong number of slices set.

Ah! That was it! I had the wrong value for XYFrames. Thank you! It should be 12 and I had it set to 16 (as in the nodes screenshot, which used a different texture).

Hey guys,
Planning on releasing a content plugin very soon that has all this stuff hooked up. In the meantime, here are some sizes and dimensions to consider for pseudo volumes. Note that currently the “create static texture” feature only works with power-of-2 sizes.

16 ^ 3:
16 x 1 frames of 16x16 in a 256 x 16 texture
8 x 2 frames of 16x16 in a 128 x 32 texture
4 x 4 frames of 16x16 in a 64 x 64 texture

32 ^ 3:
32 x 1 frames of 32x32 in a 1024 x 32 texture
16 x 2 frames of 32x32 in a 512 x 64 texture
8 x 4 frames of 32x32 in a 256 x 128 texture

64 ^ 3:
64x1 frames of 64x64 in a 4096 x 64 texture
32x2 frames of 64x64 in a 2048 x 128 texture
16x4 frames of 64x64 in a 1024 x 256 texture
8x8 frames of 64x64 in a 512 x 512 texture

100 ^ 3:
20x5 frames of 100x100 in a 2000 x 500 texture
10x10 frames of 100x100 in a 1000 x 1000 texture

128 ^ 3:
64 x 2 frames of 128x128 in an 8192 x 256 texture
32 x 4 frames of 128x128 in a 4096 x 512 texture
16 x 8 frames of 128x128 in a 2048 x 1024 texture

196 ^ 3:
14x14 frames of 196x196 in a 2744x2744 texture

200 ^ 3:
50x4 frames of 200x200 in a 10000 x 800 texture
25x8 frames of 200x200 in a 5000 x 1600 texture

256 ^ 3:
32x8 frames of 256x256 in an 8192 x 2048 texture
16x16 frames of 256x256 in a 4096 x 4096 texture

324 ^ 3:
18x18 frames of 324x324 in a 5832 x 5832 texture

400 ^ 3:
20x20 frames of 400x400 in an 8000 x 8000 texture

You can also use some ‘fudged’ sizes like:

12x12 frames of 170.666x170.666 in a 2048 x 2048 texture.
This gives an effective volume texture of size 170x170x144. I used this in some early example images even though it’s not a truly valid cubic dimension.
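For reference, here is a rough sketch of how those XY frame counts get used when sampling, so it’s clear where the layouts above come from. This is just the idea, not the exact Common.usf implementation:

float4 PseudoVolumeSample_Sketch(Texture2D Tex, SamplerState TexSampler, float3 inPos, float2 xysize, float numframes)
{
    // which slice we are in along Z, and how far we are toward the next one
    float zframe = floor(inPos.z * numframes);
    float zphase = frac(inPos.z * numframes);

    // position inside a single tile of the flipbook
    float2 uv = frac(inPos.xy) / xysize;

    // tile offsets for the current slice and the next one
    // (a real implementation should clamp zframe + 1 so the last slice does not read past the layout)
    float2 curframe = float2(fmod(zframe, xysize.x), floor(zframe / xysize.x)) / xysize;
    float2 nextframe = float2(fmod(zframe + 1, xysize.x), floor((zframe + 1) / xysize.x)) / xysize;

    float4 sampleA = Tex.SampleLevel(TexSampler, uv + curframe, 0);
    float4 sampleB = Tex.SampleLevel(TexSampler, uv + nextframe, 0);
    return lerp(sampleA, sampleB, zphase);
}

So for the fudged 170.666 example above, xysize would be (12, 12) and numframes 144, which is where the 170x170x144 effective size comes from.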

Also, I updated the PseudoVolume function to support different types of mipmapping and non-square layouts. This is a new download of Common.usf for UE 4.14.1:

https://www.dropbox.com/s/f7qm8uzx1kfy7un/Common.usf?dl=0

Here is the full function call:

float4 PseudoVolumeTexture(Texture2D Tex, SamplerState TexSampler, float3 inPos, float2 xysize, float numframes, uint mipmode = 0, float miplevel = 0, float2 InDDX = 0, float2 InDDY = 0)

As before you can still call it as just:

float4 PseudoVolumeTexture(Texture2D Tex, SamplerState TexSampler, float3 inPos, float2 xysize, float numframes)

By default this will look up mip0. If you want additional control, use these arguments (mip switch gets compiled out if you specify it using a constant):

// @param Tex = Input Texture Object storing Volume Data
// @param inPos = Input float3 for Position, 0-1
// @param xysize = Input float2 for num frames in x,y directions
// @param numframes = Input float for num total frames
// @param mipmode = Sampling mode: 0 = use miplevel, 1 = use UV computed gradients, 2 = use the supplied InDDX/InDDY gradients (default = 0)
// @param miplevel = MIP level to use when mipmode = 0 (default 0)
// @param InDDX, InDDY = Texture gradients used when mipmode = 2
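For example, a Custom node call that uses UV-computed gradients for the mip selection could look roughly like this (the input pin names Pos, XYFrames and NumFrames are just placeholders for whatever you wire in, with Tex being a Texture Object input):

// mipmode 1 = use UV computed gradients, see the parameter list above
return PseudoVolumeTexture(Tex, TexSampler, Pos, float2(XYFrames, XYFrames), NumFrames, 1);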

Phenomenal work guys! Really enjoyed this read. Even if a lot of these things are beyond me atm.

Just recently started delving more and more into the TA side of things. This is untested, but if we are dealing with greyscale images, isn’t it possible to make a flipbook texture with RGB channels and save a texture in each? That way you could have higher resolution and, I guess, fewer texture lookups. I guess you could just look at the texture at position X for RGB, display it, move to the next position, look at the texture at position Y for RGB, display it, etc.

Yes you can channel pack, but if you want to use it to get more resolution, that would involve taking additional reads of that texture. Or you can pack other things like vector fields into the other channels. One approach is to pack a different kind of noise into each channel so you can blend between them using yet another mask to provide variation.
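As a rough illustration of that last idea (the names Tex, BlendMask, etc. are made up for the example; the other parameters follow the function above), one read gives you three packed noises that you can blend with a separate mask:

// Sketch only: assumes three different noise volumes were baked into R, G and B of the same flipbook
float3 noises = PseudoVolumeTexture(Tex, TexSampler, inPos, xysize, numframes).rgb;
// BlendMask comes from another texture or parameter and picks/blends between the packed noises
float density = lerp(lerp(noises.r, noises.g, BlendMask.x), noises.b, BlendMask.y);
return density;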

Hey Ryan, is the reason the “create static texture” only works for powers of 2 because you are enabling mipmaps? I’ve been doing things with the intent that the data coming in will be dynamic in nature, so I’ve been setting the MipGenSettings to TextureMipGenSettings::TMGS_NoMipmaps as a precaution (I don’t know anything about mipmaps really, so I’m only assuming this is a best practice here). Am I safe in assuming it’s theoretically possible to vary the resolution with NoMipmaps set, and what exactly are the advantages to having them on in this use case (speed?)?

To my knowledge, no mip features are working with render targets right now. There are settings called “auto generate mips” that don’t seem to work. I haven’t been able to get the “generate static texture” to work with non power of 2 textures, even with disabling all mip settings in the RT settings. I haven’t looked at the code to see why, but it’s probably just an old assumption that mips are needed or something like that.

You may want mips for speed; for example, the more you tile a texture, the more it can suffer from poor memory coherence without mips. But you can usually size your render target appropriately to avoid that.

I’ve spent the last week or so trying to generate what your blog posts lay out, and I have it working with variable-sized textures using the UpdateTextureRegions() function from the Dynamic Textures wiki page with UTexture2D::CreateTransient(totalWidth, totalHeight, PF_R8G8B8A8);

It is possible with any package pretty much. You can also generate this kind of texture in UE4.

Hello! I am trying to do volume textures. I will point out that I am really bad with materials, shaders and whatnot. Most things here go far beyond my understanding.

I saw Dokipen’s tutorial (Unreal Engine Volumetric 3D Cloud System with Dynamic Lighting Overview - YouTube) and thought it felt feasible and have followed it, though I use the Starter Content default T_Fire_SubUV, which is a 6x6 SubUV layout.
So I have changed some of the node values, but I too get these lines when looking from the side.
Any help is appreciated.

voltexparticle.png

Hello

I need your help to create my own volume texture. Can I use Houdini / 3ds Max to create my texture?

The only way I know how to do that would be to create a mesh with the subUV layout and use render to texture.

So if you want a 256^3 volume, you need 256 slices stacked one atop another with the right spacing, and they need to be laid out in UV space in a 16x16 fashion. Then you can do render to texture and capture your material that way.

Or you could render out the individual frame slices and use an external program to stitch them into a flipbook.

I haven’t actually done it but it should work. I am trying to do all my volume stuff in UE4.

This type of stuff should get easier once true 3d texture support is in the engine. But I am not sure what that pipeline will look like either.

I have posted all the details already. See these two articles:

http://shaderbits.com/blog/creating-volumetric-ray-marcher

http://shaderbits.com/blog/authoring-pseudo-volume-textures

I am going to release these as a content plugin very soon. Been working on it at random times, but Horizon Zero Dawn has been taking up my free time lately.

I don’t really do video tutorials other than livestreams. I tend to prefer written tutorials over videos for more advanced stuff.

But I will ping the livestream guys to see if they have any time coming up.

I just got done making that effect in realtime actually, but it is really slow. In a month or two I will share details about it, but I need to hold off for now, so it won’t be part of the initial content plugin release.

Hello,

Thanks for putting together the tutorials & information on all this. I’ve followed all the steps described in your blog here: Post Page

Generally my results match what you have shown except I see a strange line artifact when aligned horizontally with the slices:
LinesBug.jpg

I’m guessing I have something set up incorrectly in the Ray March Cube Setup function based on your tutorial information, but I can’t seem to figure this out. Any tips on where I could look? Thanks, and any help is much appreciated!

I had a similar problem for a bit. I think it might come from a small mistake in how you calculate the row offset. I can’t remember exactly how I solved it, however. You can see that every n slices it samples from the slice ONE ROW below it (or above it).
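If it helps, the row/column offset is the usual suspect; a correct version looks something like this (a sketch of the idea, not the exact shader from the blog):

// zframe is the slice index, xysize.x is the number of frames per row
float zframe = floor(inPos.z * numframes);
// an off-by-one or a missing floor()/fmod() here is the classic cause of the
// "wrong row every Nth slice" artifact described above
float2 frameOffset = float2(fmod(zframe, xysize.x), floor(zframe / xysize.x)) / xysize;
float2 uv = frac(inPos.xy) / xysize + frameOffset;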

I’ve been banging my head on this the last few nights; could you do me a huge favor and dig into your copy of this shader to see if you can find the issue? I’m seeing where it does the extra final step in the Density RayMarch function, but generally I have a hard time following this code. :frowning:

My project is open source: GitHub - NoobsDeSroobs/VRVizualizer: A virtual reality visualization system for scientific data.