I’m not sure whether this is directly related to this thread, but has anyone visualized each individual CT slice in addition to the volume rendering? I’m trying to do it, but something is wrong.
Here is my question post.
Thanks.
In theory it should work fine; there is nothing unusual that would prevent it from showing up on PS4, unless the system considers the frame cost too high and turns the material invisible. Did you try with a small number of raymarch steps, say 32 and up to 64? Quality drops, but at least we can figure out whether that’s the case. I would also ask @ whether it needed any adjustment for Fortnite when the release was made for it.
@Kakushi_52 We did ^^ https://forums.unrealengine.com/unreal-engine/marketplace/95980-plugin-volumetric-space-cloud-and-nebula-starfield-v2
We visualized a CT and an MRT scan, if this is of any help. (We used XnView, for instance, to build a mosaic sequence from the DICOM images, but I don’t know how exact your visualization needs to be.)
I just took a quick look at your C++ code. Shouldn’t the line “dicom_texture->SRGB = false;” be set to “true”?
The material seems OK. Let me know if the change above wasn’t the issue.
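For reference, this is roughly the context I am picturing (a guess only, since the actual code is in the linked post; the surrounding names are illustrative):

// Hypothetical surrounding setup; only the SRGB line is quoted from the original code.
UTexture2D* dicom_texture = UTexture2D::CreateTransient(Width, Height, PF_R16_SINT);
dicom_texture->SRGB = false; // <- the flag I am asking about
dicom_texture->Filter = TF_Nearest;
dicom_texture->UpdateResource();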
Thank you for your comment. Unfortunately changing the SRGB flag did not solve the problem.
I am not sure whether the 16-bit values are actually preserved when the texture is exposed to the material Blueprint, because changing the color mapping method results in the same very weird image, like the following…
Both this
https://user-images.githubusercontent.com/8625552/63068235-5884dc80-bf4d-11e9-9765-fbda8e11c78d.png
and this
https://user-images.githubusercontent.com/8625552/63068266-77836e80-bf4d-11e9-97d5-6e3a15ad8536.png
result in the same weird image.
https://user-images.githubusercontent.com/8625552/63068343-c7623580-bf4d-11e9-94c9-5e35c7b8673d.png
And in this image it looks as if values > 0 turn into black and values < 0 turn into white…
The following image is a binarization of the image in ImageJ with a threshold of 0.
https://user-images.githubusercontent.com/8625552/63068478-32ac0780-bf4e-11e9-8676-b60f0016bb58.png
Thanks.
Thank you for your comment. It seems that your plugin is for volume rendering rather than for generating the procedural texture. I have already managed to read the DICOM voxel values, but I cannot visualize them as a texture…
Anyway, thank you very much.
PF_R16_SINT does not seem to be supported once a UTexture2D using that format is exposed to Material Blueprints, so I tried PF_G16 instead and it seems to work well.
I wrote up the details on the AnswerHub.
Here is the material I modified,
and the visualized CT image on Unreal Engine 4.
The image is exactly the same as the image visualized by ImageJ!
I don’t think my current solution is the best one, because I have to convert between unsigned short and short, and I have to convert the automatically calculated sRGB final color back to linear RGB.
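In case it helps anyone else, here is a rough sketch of the conversion I am describing (the function name and the fixed +32768 offset are just for illustration, not copied from my actual project):

#include "Engine/Texture2D.h"

// Illustrative only: shift signed 16-bit CT values into the unsigned range so they
// fit into a PF_G16 (unsigned, normalized 16-bit) texture.
UTexture2D* CreateG16TextureFromSignedCT(const int16* VoxelData, int32 Width, int32 Height)
{
    UTexture2D* Texture = UTexture2D::CreateTransient(Width, Height, PF_G16);
    Texture->SRGB = false;
    Texture->Filter = TF_Nearest;

    FTexture2DMipMap& Mip = Texture->PlatformData->Mips[0];
    uint16* Dest = static_cast<uint16*>(Mip.BulkData.Lock(LOCK_READ_WRITE));
    for (int32 i = 0; i < Width * Height; i++)
    {
        // Shift [-32768, 32767] into [0, 65535]; the material has to undo this offset
        // (roughly 0.5 in the normalized 0-1 range) before mapping the values.
        Dest[i] = static_cast<uint16>(static_cast<int32>(VoxelData[i]) + 32768);
    }
    Mip.BulkData.Unlock();
    Texture->UpdateResource();
    return Texture;
}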
Hey! I ran into this when trying integer textures for DICOM rendering. The trick is to use asint() and asuint() in the materials.
Spent 2 bloody days banging my head against the wall, but then I tried this and it worked (it also worked for R32_(S)INT and R8_UINT, as far as I remember).
But in the end I didn’t end up using integer textures, as stuff just looks too blocky without interpolation.
But yeah, if you create a texture as an INT texture somewhere in code, then you can read from it in materials like this:
int LabelValue = asint(VolumeTexture.Load(int4(CurPos, 0)));
same with asuint();
Try putting that in a custom node and returning it; that should convert the int to a float and make it usable further down the road.
Note: int4(pos, 0) is for volume textures; for 2D textures it will be int3(pos, 0).
Hope this helps somebody, because I wasted way too much time on this.
Also, for people interested - I made a plugin that works with a custom build of 4.22 for volume raymarching using actual VolumeTextures,
Check it out here:
https://forums.unrealengine.com/comm…n-and-labeling
Currently figuring out how to redo it in a way that does not need a custom engine build, seems like it’s possible (and will be a lot more elegant).
Thank you so much for your kind reply! I did not know that signed or unsigned int textures are not interpolated…
However, most CT or MRI images are composed of signed integer scalar values, so would requesting that Unreal Engine support DXGI_FORMAT_R16_SNORM in addition to DXGI_FORMAT_R16_UNORM (PF_G16) be the best way?
Otherwise, is shifting the signed integer values to unsigned integer values in code, like I did, the only solution?
What I ended up doing is converting all the data to float before even creating the texture. I use PF_R32_FLOAT to get non-normalized floats, so there is no need to convert them to usable values later in the materials.
Note: this is a templated, parallelized version, so it runs as fast as possible and is usable for any integer type.
Data is the original integer array, NewData is the resulting float array. You can see that NewData is allocated inside the function, so it’s your responsibility to delete it later.
template <class T>
void ConvertDataToFloatTemplated(uint8* Data, int VoxelCount, float*& NewData) {
    T* TypedData = reinterpret_cast<T*>(Data);
    NewData = new float[VoxelCount];

    const int32 NumWorkerThreads = FTaskGraphInterface::Get().GetNumWorkerThreads();
    const int NumVoxelsPerThread = VoxelCount / NumWorkerThreads;
    const int NumVoxelsLeftOver = VoxelCount % NumWorkerThreads; // handled on the main thread below

    // Each worker thread converts its own contiguous chunk of voxels.
    ParallelFor(NumWorkerThreads, [&](int32 ThreadId)
    {
        for (int i = 0; i < NumVoxelsPerThread; i++) {
            const int index = (NumVoxelsPerThread * ThreadId) + i;
            NewData[index] = static_cast<float>(TypedData[index]);
        }
    });

    // Finish the leftovers on the main thread
    for (int index = NumWorkerThreads * NumVoxelsPerThread; index < VoxelCount; index++) {
        NewData[index] = static_cast<float>(TypedData[index]);
    }
}
And the final usage would be (let’s say your data is int32):
// Load your int32 data into ImageDataArray and set VoxelCount to the actual voxel count :)
float* NewData = nullptr;
ConvertDataToFloatTemplated<int32>(ImageDataArray, VoxelCount, NewData);
CreateVolumeTextureTransient(LoadedTexture, PF_R32_FLOAT, ImageInfo.VoxelDimensions, (uint8*)NewData);
// Or create a 2D texture with Create2DTextureTransient, depending on what data you loaded, or make the assets
// non-transient with Create2DTextureAsset
Now you have a texture of non-normalized floats with the same values as you had in your integer DICOM.
Note that this is potentially imprecise if your values are outside the range of about ±2^24, as 32-bit floats are only guaranteed to represent integers exactly within that range.
Shouldn’t be a problem with CT scans though, as a range of ±10k is usually plenty there.
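If you want to convince yourself of that limit, a trivial check (just an illustration, not part of the conversion code above):

// Above 2^24 a 32-bit float can no longer represent every integer exactly,
// so adding 1 simply gets rounded away.
float A = 16777216.0f; // 2^24
float B = A + 1.0f;    // rounds back to 16777216.0f
ensure(A == B);        // passes: the increment is lost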
Cheers, Tommy.
Glad I found this thread, as I have also recently been working on visualizing MRI data. So far I have followed the tips from this thread and read Ryan’s blogs about volume textures and ray marching. I also successfully used his custom shader to display the MRI data correctly in the Editor. The problem is that now I need to make it work on an Oculus Go (and maybe an Oculus Quest in the future).
I just created a small basic scene with a single inverted cube that has the raymarching shader, to test. It works when launching inside the Editor; I can see the volume and interact with it just fine. But the cube becomes untextured when launching on the Oculus Go. Unfortunately, I am more of an artist than a developer, so my coding knowledge is pretty limited and I mostly work in Blueprint, so I can’t figure out why it becomes untextured on the Oculus Go. There is nothing fancy in the level yet; it is pretty basic for now.
Unfortunately, I don’t have any screenshot, as the compiler never spits out any error that I am aware of.
So my question is:
Has anyone here successfully created this type of visualization for the Oculus Go, or has it mostly only been done for the HTC Vive and other high-end headsets? Is it because the hardware doesn’t support raymarching/volume textures? Or is something wrong with the Android packaging?
Any tip here is super helpful to me, and is very appreciated. Thank you.
Hi Harry, this is because Ryan’s raymarcher (and most other people’s, too) most likely depends on deferred rendering (as in, there are two or more passes when rendering the image).
Mobile devices use the ES 3.1 forward renderer, so there is only one pass.
There is a way to preview SM ES 3.1 on Windows; follow this tutorial (I had to do the two steps the other way around: first set up the project to use ES 3.1, then enable the preview shaders):
https://docs.unrealengine.com/en-US/…rer/index.html
(More recent version here https://docs.unrealengine.com/en-US/…wer/index.html)
Actually, I just noticed this page; that is probably the way to go to get full info on the shaders in ES 3.1:
https://docs.unrealengine.com/en-US/…als/index.html
After that, the shader compiler should tell you the problem causing the shader to not compile.
As a first step, I’d try removing the LocalSceneDepth calculations, setting the material to Opaque instead of Translucent, and enabling the normal depth test.
Also, if you are using the distance field shadows mentioned in Ryan’s article, ditch those too.
I’ll actually also be porting some raymarching to the Quest in the coming months, so please do share your results
Good luck.
Hi Tommy,
That makes a lot of sense, I will troubleshoot and hopefully can fix the issue. I will update my results soon. If you have any other tip on this matter, please share it too!
Cheers.
@tommybazar It seems the volume raymarch material I am using does not compile for some reason. It is the M_VolumeRayMarch_Lit_LinearSteps_IBL_ModTexture from Ryan’s plugin; the only thing I added is the slice cut feature. I tested the *M_VolumeRayMarch_Lit_LinearSteps_FN* material and it compiled fine, but it is super heavy on the Oculus Go.
I tried your tip, but to be honest, I am clueless about how to edit this code properly. Here is the current code, by the way:
float numFrames = XYFrames.x * XYFrames.y;
float accumdist = 0;
float curdensity = 0;
float transmittance = 1;
float3 localcamvec = normalize( mul(Parameters.CameraVector, GetPrimitiveData(Parameters.PrimitiveId).WorldToLocal) ) * StepSize;
float3 invlightdir = 1 / LightVector;
float shadowstepsize = 1 / ShadowSteps;
LightVector *= shadowstepsize * 0.5;
ShadowDensity *= shadowstepsize;
PreviewAmount *= StepSize;
Density *= StepSize;
float4 lightenergy = 0;

int3 randpos = int3(Parameters.SvPosition.xy, View.StateFrameIndexMod8);
float rand = float(Rand3DPCG16(randpos).x) / 0xffff;
CurPos += localcamvec * rand.x * Jitter;

for (int i = 0; i < MaxSteps; i++)
{
    if (CurPos.z >= SliceNr && CurPos.z <= SliceNr + SliceRange)
    {
        float3 volsample = PseudoVolumeTexture(Tex, TexSampler, saturate(CurPos), XYFrames, numFrames);
        float3 modsample = PseudoVolumeTexture(Tex3, TexSampler, ((CurPos + ModPanner) * ModulationTiling), 16, 256).rgb;
        float cursample = saturate(volsample.b + modsample.r * Modulation);

        // Sample Light Absorption and Scattering
        if (volsample.g + cursample > 0.001)
        {
            curdensity = 1 - exp(-cursample * Density);
            float4 lightingtemp = PseudoVolumeTexture(Tex2, TexSampler, saturate(CurPos), XYFrames, numFrames) * float4(SkyColor, 1) * transmittance * curdensity;
            lightingtemp.xyz *= (modsample.g * ModGradient) + ModGradientOffset;
            lightingtemp.w *= (modsample.b * ModGradient) + ModGradientOffset;
            lightenergy += lightingtemp;
            //lightenergy += modsample.g * ModGradient * transmittance * curdensity;
            lightenergy.xyz += volsample.g * transmittance * float3(0, 1, 0) * PreviewAmount;
            transmittance *= 1 - curdensity;
        }
    }
    CurPos -= localcamvec;
}

lightenergy.xyz += lightenergy.a * LightColor;
return float4(lightenergy.xyz, transmittance);
To be honest, I wouldn’t expect anything else. Raymarching is **hardcore** for the GPU. It’s an uphill battle to get it running decently on an RTX 2070 in VR on a PC.
Just try multiplying these three things together to get a rough number of texture samples per frame:
Number of pixels covered by the volume (worst case on the Go: 2560 x 1440 = ~4M)
Number of raymarching steps (let’s say 100 to be conservative)
Number of texture samples per step: I see you’re using 2 textures, with the possibility of a third if the sample is dense enough, so let’s say 2.5.
That’s roughly 4,000,000 x 100 x 2.5 = 1,000,000,000, so you’re looking at something close to literally a billion texture samples every frame.
A good and easy first step is reducing the number of pixels. Since the volume texture you’ll be using is usually much coarser than the screen (something along the lines of 256^3), it doesn’t really need to be rendered at full resolution. Luckily for you, Unreal already has a pretty neat feature called “Separate Translucency” that lets you render materials in a separate buffer that doesn’t have to be the same resolution as the regular viewport. So, try this for starters:
In the material details, tick “Render after DOF” and “Mobile Separate Translucency” (you might need to put the material back to Translucent, not sure).
In your project’s DefaultEngine.ini, set r.SeparateTranslucencyScreenPercentage to something low, such as 30 (or set it at runtime, see the sketch below).
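If you’d rather set it from code, the same console variable can be changed at runtime (a sketch only, not from my plugin; you could also just type the command into the in-game console):

#include "HAL/IConsoleManager.h"

// Illustration only: scale the separate translucency buffer down to 30% of the viewport.
IConsoleVariable* CVar =
    IConsoleManager::Get().FindConsoleVariable(TEXT("r.SeparateTranslucencyScreenPercentage"));
if (CVar)
{
    CVar->Set(30.0f);
}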
With 0.3 * 0.3 = 0.09 instead of 1*1 = 1, you’re rendering ~11 times fewer pixels worth of raymarched material.
11 times fewer raymarched pixels = 11 times fewer texture samples, while other stuff (such as text and lines) stays clear.
And on the raymarched volume, the quality drop isn’t too noticeable.
Another way to get more performance out of this is using Volume Textures (instead of the pseudo-volume textures), as Ryan’s raymarcher code is *old* and UE now supports them natively.
If you look a couple of posts up, I linked to my own plugin that does exactly that. Except right now it’s hacky, ugly, and overcomplicated; if you’re not a coder, you won’t have a nice time working with it, honestly (and even if you are, it’s not a pretty sight). It also requires a custom engine build.
I’ll try making a clean, simple, minimal working example plugin for this later this week, so you non-coder guys can play with it too (ping me if it’s not here by Sunday, as I’m a lazy slacker and break promises on a regular basis).
Thanks for the quick reply,
The code I mentioned above is actually from *M_VolumeRayMarch_Lit_LinearSteps_IBL_ModTexture*, so I guess Ryan’s code uses 2 additional textures to get the image-based lighting and modulation effect (?) (which, to be honest, I don’t really need). But this IBL raymarch material doesn’t compile, and I wonder why. If you don’t mind, could you try the code and see if it compiles for Android on your end? No worries if you don’t have time for it; I know we are all busy.
For now, I will keep playing with it in the direction you suggested and see how things go. And I look forward to your plugin too.
Thank you.
I removed that version of the material from the latest plugin. I don’t have a copy here to look at, but if there is something broken with it, it’s probably the same old Primitive ID stuff that needs replacing.
i.e., there is a line similar to this near the top; you could try using this version of it (all primitive properties must be accessed this way now):
float3 localcamvec = normalize( mul(Parameters.CameraVector, GetPrimitiveData(Parameters.PrimitiveId).WorldToLocal) ) * StepSize;
A similar line needs changing in the Box Intersection function feeding the main raymarch.
That IBL material just used a cubemap for lighting. The 2nd texture was a tiling noise with directional AO and shadowing prebaked in.
I ended up not continuing that particular prototype because the new methods with SkyAtmosphere basically make it obsolete. See the volumetric cloud prototype under the Volumetrics plugin (available on github only so far) for an idea.
@harry1511 Pretty much what Ryan B mentioned above. There is another custom node in the material called “Ray March Cube Setup”, which is the Box Intersection function he is mentioning. There are 3 lines with Primitive.xxxx which need to be fixed for the material to work.
@ @NilsonLima Thanks for the reply, guys. However, I did already change Primitive ID to GetPrimitiveData; the material compiles and works inside the editor. The problem is that it fails to compile only for Android…
Oh, this won’t really work on Android, because you cannot read scene depth. You’d probably have to remove all the scene depth lookups from the box custom node, and you would lose any sorting with opaque geometry. It could work if you don’t care about that, maybe, but it is still likely to melt a phone pretty quickly.