So Blurred glass material is impossible in Unreal Engine 4?

    #76
Ok, added a regular glass shader to my window. But the Post Process Volume still replaces my entire view with the SceneCapture2D camera's results. Guess I'll have to dig deeper. Thanks for the feedback!
    Eldridge Felder

    Animation/Visualization Manager
    WorthGroup Architects



      #77
      Did you activate depth of field in your screencapture postprocess settings?
      GLASSES & WATER SHADERS & CUSTOM PLANAR REFLECTIONS on my Youtube Channel: https://www.youtube.com/channel/UCP2...26Y2bEV4kfl9kA



        #78
        Yep. Here is a screenshot. Also the before and after effect of walking through the Volume:

[Attachment: 2016-02-01_FGlass03.jpg]

        Before walking into the Post Processing Volume:

[Attachment: 2016-02-01_FGlass04.jpg]

        After walking into the Post Processing Volume:

[Attachment: 2016-02-01_FGlass05.jpg]
        Eldridge Felder

        Animation/Visualization Manager
        WorthGroup Architects



          #79
You have to check: focal distance, focal region, near transition, far transition (all set to 0), near blur, far blur (set to anything > 0).
Select Gaussian DOF to get a nice, even blur.
Check Unbound too, so you don't have to walk into the volume to see the effect.
Set your postprocess material to "After Tonemapping".
Beyond that, I don't know; it should work.
Something to dig into, for sure.
          Last edited by EdWasHere; 02-01-2016, 04:34 PM.



            #80
There is a pretty simple solution.
Do you know the quick (separable) blur? We blur first vertically, then horizontally, or vice versa.
This can be done easily in the visual material editor.
This is how it looks (horizontal blur):
[Attachment: BlurShader.png]
Same thing for the vertical pass, except we modify the Y coordinate instead of X.
This would be better done in HLSL with a for loop, but I'm too lazy for that.
This is what I get with 4 iterations (the falloff looks strange because I just used some decreasing values instead of proper weights):
[Attachment: BlurShaderPresent.png]
The material is VERY cheap: it takes less than 0.1 ms on a GTX 750 Ti at this screenshot's resolution, and costs only about 1 FPS (54 FPS baseline) at Full HD.
It has 43 + 33 instructions and one texture sampler.
I should also say it's possible to use depth to make the blur distance-dependent.
But the real question is: how do you apply this blur behind HUD/UMG elements?
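A quick way to convince yourself the two-pass trick is exact: a minimal pure-Python sketch (CPU-side illustration only, not UE4 material code; the image, kernel, and clamp-to-edge handling are all made up for the demo) blurring rows and then columns with the same 1-D kernel:

```python
# Minimal CPU sketch of a separable blur. A 2-D Gaussian kernel factors
# into two 1-D passes, so blurring rows and then columns matches one
# full 2-D convolution at a fraction of the samples.

def blur_1d(row, kernel):
    """Convolve one row with a 1-D kernel, clamping at the edges."""
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - r, 0), len(row) - 1)  # clamp-to-edge
            acc += row[j] * w
        out.append(acc)
    return out

def transpose(img):
    return [list(col) for col in zip(*img)]

def separable_blur(img, kernel):
    img = [blur_1d(row, kernel) for row in img]                        # horizontal pass
    img = transpose([blur_1d(col, kernel) for col in transpose(img)])  # vertical pass
    return img

kernel = [0.25, 0.5, 0.25]            # simple normalized 1-D kernel
image = [[0.0] * 5 for _ in range(5)]
image[2][2] = 1.0                     # single bright pixel
blurred = separable_blur(image, kernel)
```

An impulse at the center spreads into the outer product of the kernel with itself, exactly what a single 2-D convolution would produce, and the total brightness is preserved.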



              #81
It's a bit hard to tell what you're doing as the screenshot has been scaled down, but I assume you're using a material or a render target. In that case, what's even more effective than sampling with an offset like that is to use the lower-res mipmap if it's available. Try that and you should incur no speed penalty: if the mips exist, they're being created regardless.

For a LONG time I've wanted to be able to access whatever mips exist for the postprocess inputs, by the way. I wish Epic would open them up for us.
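Antidamage's point rests on the fact that each mip level is (roughly) a 2x2 box-filtered downscale of the previous one, so reading a lower mip is a pre-paid blur. A rough CPU sketch, assuming a square power-of-two image and a plain box filter (real GPU mip generation may differ):

```python
# Rough illustration of a mip chain: each level averages 2x2 blocks of
# the previous level. Sampling a lower mip is effectively a free
# pre-blur, since the chain is generated once regardless.

def next_mip(img):
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def mip_chain(img):
    chain = [img]
    while len(chain[-1]) > 1:
        chain.append(next_mip(chain[-1]))
    return chain

image = [[float(x) for x in range(4)] for _ in range(4)]  # 4x4 horizontal gradient
chain = mip_chain(image)                                  # levels: 4x4, 2x2, 1x1
```

The 1x1 level ends up holding the average of the whole image, which is why sampling far down the chain looks like a very wide blur.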



                #82
@HungryDoodles: very cool result! And thanks for posting this.
Do you have a higher-resolution screenshot? I think I could guess and reconstruct it, but it would be easier to study it directly.
How would you make this blur dependent on depth?

@Antidamage: does the engine generate mips for the postprocess inputs? I've asked this question before and never got a clear answer.



                  #83
You know what? I think we could fake it with a render target in the same location as the camera doing a low-res snap of the scene. With some settings switched off, it'll be much faster than doing a 10x10 blur iteration on the full image.
                  Last edited by Antidamage; 02-17-2016, 04:48 PM.



                    #84
I've written the material in HLSL; it wasn't difficult at all, I should mention.
As before, the image needs two passes, vertical and horizontal, done separately. (Though it makes a cool effect if both passes are applied simultaneously.)
For the vertical pass (horizontal commented out with /**/):
Code:
// Inputs from the Custom node: cycles (loop count), amount (Gaussian width),
// rx/ry (one texel in UV space), UV (screen coordinates).
int TexIndex = 14; // PostProcessInput0; 13 may also work
bool bFiltered = true; // Can be false
float3 blur = 0; // Must be initialized before accumulating

// Vertical pass (offsets along Y)
for (int i = 0; i < cycles; i++)
{
  float c = 1.0f / (3.14159f * amount);
  float e = -(i * i) / (amount);
  float falloff = (c * exp(e));
  blur += SceneTextureLookup(UV + float2(0, i * ry), TexIndex, bFiltered) * falloff;
  blur += SceneTextureLookup(UV - float2(0, i * ry), TexIndex, bFiltered) * falloff;
}

// Horizontal pass (offsets along X)
/*for (int j = 0; j < cycles; j++)
{
  float c = 1.0f / (3.14159f * amount);
  float e = -(j * j) / (amount);
  float falloff = (c * exp(e));
  blur += SceneTextureLookup(UV + float2(j * rx, 0), TexIndex, bFiltered) * falloff;
  blur += SceneTextureLookup(UV - float2(j * rx, 0), TexIndex, bFiltered) * falloff;
}*/

//blur /= 2 * cycles + 1;
return blur;
                    And material nodes:
[Attachment: BlurShaderAdvanced.png]
I also used a Gaussian falloff (normal distribution), defined by the following function:
                    Code:
float gauss(float x, float amount) {
  double c = 1.0 / (2.0 * 3.14159265359 * amount);
  double e = -(x * x) / (2.0 * amount);
  return (float)(c * exp(e));
}
Here x is the loop iteration number and amount controls the exponential width (the smaller this number, the less blurriness).
Wikipedia: Normal Distribution
As for a physical interpretation, this is roughly how a blurry material distributes light, so it can look very believable.
There is a little hack with image brightness, because the integral of the normal distribution is... well...
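The brightness hack exists because the weights sampled at integer taps don't sum to exactly 1. A small Python sketch (illustration only; it uses the textbook 1-D normalization with amount standing in for sigma squared, which differs slightly from the gauss() constant above) shows the usual fix: divide by the actual sum of the sampled weights:

```python
import math

# The Gaussian curve sampled at integer taps does not sum to exactly 1,
# which darkens or brightens the blurred image. Renormalizing by the
# actual sum of the sampled weights preserves brightness exactly.

def gauss(x, amount):
    # Textbook 1-D normal pdf, with "amount" standing in for sigma squared
    c = 1.0 / math.sqrt(2.0 * math.pi * amount)
    return c * math.exp(-(x * x) / (2.0 * amount))

def normalized_weights(cycles, amount):
    # Sample the curve at the integer taps the shader loop actually uses...
    raw = [gauss(i, amount) for i in range(-cycles, cycles + 1)]
    s = sum(raw)                  # ...close to, but not exactly, 1.0
    return [w / s for w in raw]   # renormalize so the weights sum to 1

weights = normalized_weights(cycles=4, amount=2.0)
```

The weight table is symmetric and peaks at the center tap; dividing by the sum is the principled version of the "little hack".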
                    Result:
[Attachment: BlurShaderPresent.png]
NOTE: If you want to use this with a translucent material, then you need to use Texture2DSample(Tex, TexSampler, UV) instead of SceneTextureLookup... or something else that contains the image behind the object and works with one of these samplers.
Needs optimization! The input resolution should be downscaled 2x or 4x, which would greatly improve performance, because 50 iterations now costs 5-6 ms at 1080p, and that's bad.
I wish there were CUDA here, because then I could precalculate the normal distribution once and use it as a const array, which would cut 90% of the calculation time (pow is a very expensive operation). Are there any buffers or something like that?

Actually THE BEST optimization would be a Fast Fourier Transform, but I don't see any way to do that here.

We can easily make the blur depend on depth by driving the cycles variable with depth. The other question is how exactly to do that.

@Antidamage You know, we could even get blur under UMG using a render target! That's a cool idea! I'm going to try it, but a little later; university takes a lot of time.

The material is free to use and edit.
                    Last edited by HungryDoodles; 02-18-2016, 03:22 PM. Reason: Corrected some LULZ



                      #85
                      http://rastergrid.com/blog/2010/09/e...near-sampling/
You can cut the number of samples in half with this bilinear sampling trick.


Another smaller optimization:
http://www.humus.name/Articles/Perss...elThinking.pdf page 27.
Code:
exp(x) is implemented as exp2(x * 1.442695)
So you can save one multiply per iteration by folding that constant into a new constant calculated outside the loop (or at compile time).
Code:
float inverseAmountWithExp2Trick = 1.442695 / amount;
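The identity is easy to verify numerically; a Python stand-in for the shader loop (variable names are made up for the demo):

```python
import math

# exp(x) == 2^(x * log2(e)), with log2(e) ~ 1.442695.
# Folding the constant into 1/amount once, before the loop, saves one
# multiply per iteration in the shader.

LOG2_E = 1.442695  # the constant from the slide
amount = 3.0
inverse_amount_with_exp2_trick = LOG2_E / amount  # hoisted outside the loop

for i in range(5):
    e = -(i * i)
    naive = math.exp(e / amount)                       # exp with a divide
    tricky = 2.0 ** (e * inverse_amount_with_exp2_trick)  # exp2 with a multiply
    assert abs(naive - tricky) < 1e-5
```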



                        #86
Untested code for taking advantage of bilinear sampling and halving the sample count.
Code:
// Uses UV, TexIndex, bFiltered, cycles, amount and rx from the earlier snippet.
float3 blur = 0.0; // Always remember to initialize all variables.
// Horizontal pass (use float2(0, (i + offset) * ry) for the vertical pass)
for (int i = 0; i < cycles; i += 2)
{
  float c = 1.0f / (3.14159f * amount);
  float e = -(i * i) / (amount);
  float falloff = (c * exp(e));
  float e2 = -((i + 1) * (i + 1)) / (amount);
  float falloff2 = (c * exp(e2));

  // Two neighboring taps collapse into one bilinear fetch
  float combinedFalloff = falloff + falloff2;
  float offset = falloff2 / combinedFalloff;
  blur += SceneTextureLookup(UV + float2((i + offset) * rx, 0), TexIndex, bFiltered) * combinedFalloff;
  blur += SceneTextureLookup(UV - float2((i + offset) * rx, 0), TexIndex, bFiltered) * combinedFalloff;
}
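The weight/offset arithmetic above is the standard linear-sampling trick: one bilinear fetch at a fractional position between two texels reproduces two weighted point fetches, scaled by the combined weight. A CPU sketch of just the math (hypothetical 1-D texture, no edge handling):

```python
# Two neighboring taps with weights w1, w2 equal a single linearly
# interpolated fetch at fractional position i + w2/(w1+w2), scaled by
# (w1 + w2). The GPU's bilinear filter does the interpolation for free.

def lerp_fetch(tex, pos):
    """Bilinear fetch on a 1-D texture: linear interpolation between texels."""
    i = int(pos)
    frac = pos - i
    return tex[i] * (1.0 - frac) + tex[i + 1] * frac

tex = [0.1, 0.7, 0.3, 0.9]   # made-up 1-D texture
w1, w2 = 0.6, 0.4            # weights for taps at texels 1 and 2

two_taps = tex[1] * w1 + tex[2] * w2
one_tap = lerp_fetch(tex, 1 + w2 / (w1 + w2)) * (w1 + w2)
```

Expanding the interpolation shows the two expressions are algebraically identical, so halving the sample count this way changes nothing visually.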



                          #87
Tested it; there is absolutely no visual difference (the end user will not suspect a thing):
[Attachment: BlurShaderPresent.png]
And I get 45 FPS instead of 35 (without blur: 54). That's a very cool optimization!

But we still need to scale the resolution down 2x or 4x, because that would significantly improve performance.
And I'm wondering which is better:
Using another post process to fake a resolution downscale, which adds 4 texture samples per pixel of overhead but lets the blur run at half or quarter resolution?
Or using a render target to render at actual half resolution, which increases draw calls and makes the cost more dependent on triangle count?
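The expected savings can be estimated with simple pixel counting (a back-of-the-envelope sketch only; real costs also depend on bandwidth, cache behavior, and the overhead of the extra pass):

```python
# Back-of-the-envelope cost of blurring at reduced resolution.
# Halving each dimension quarters the pixel count, so the same
# per-pixel blur touches 1/4 (half res) or 1/16 (quarter res)
# as many texture samples in total.

def blur_samples(width, height, taps_per_pixel, downscale=1):
    w, h = width // downscale, height // downscale
    return w * h * taps_per_pixel

full = blur_samples(1920, 1080, taps_per_pixel=50)
half = blur_samples(1920, 1080, taps_per_pixel=50, downscale=2)
quarter = blur_samples(1920, 1080, taps_per_pixel=50, downscale=4)
```

This is why downscaling is so attractive here: the raw sample count drops quadratically with the downscale factor, while the upsample back to full resolution costs only one filtered fetch per pixel.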



                            #88
Can you share the updated code and material?
So the cost of the post process blur is now: 1000ms/45fps - 1000ms/54fps = 3.7 ms. That is too cheap to justify the overhead of an extra pass, so don't try to use a render target with an extra render pass. You could try to lower the sample count even further by skipping samples using spatial and temporal dithering.
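The figure comes from converting FPS to frame time before subtracting, which is the right way to compare pass costs (a trivial sketch):

```python
# FPS numbers are misleading for profiling; convert to milliseconds per
# frame first. Cost of a pass = frame time with it minus frame time
# without it.

def ms_per_frame(fps):
    return 1000.0 / fps

blur_cost = ms_per_frame(45) - ms_per_frame(54)  # optimized blur: ~3.70 ms
old_cost = ms_per_frame(35) - ms_per_frame(54)   # original blur: ~10.05 ms
```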

One small optimization is to pull constant math out of the Custom node to the material level, so the material shader compiler can precompute those values on the CPU. Example: if you calculate the inverse of Amount in the material and feed that into the Custom node, you save a division (which is more expensive than a multiplication). The material compiler can't do any constant folding inside Custom nodes, and because Amount is a parameter it isn't a constant for the shader compiler.

Edit: Notice that c is constant, so you can move it outside the loop and just multiply the blur value once at the end.
                            Last edited by Kalle-H; 02-19-2016, 12:35 PM.



                              #89
Why not use a Gaussian two-pass filter with pre-calculated weights stored in a vector? Then there's no need to do any math except calculating the UVs and blending.
                              Youtube Channel



                                #90
New material for the vertical pass:

Updates:
Loop iterations halved.
c is now outside the loop (-0.5 ms of calculation time). But I assume it should be moved up to the node level...
Notes:
Temporal AA could cut the calculation time in half, but the material looks very strange with e.g. FXAA. There is some variable to handle both Temporal AA and FXAA, right? The most I've done with a GPU is progressively drawing the Mandelbrot set with CUDA; I barely know the shader pipeline...
It's not only constants that could be precalculated on the CPU; as I wrote before:
I wish there were CUDA here, because then I could precalculate the normal distribution once and use it as a const array, which would cut 90% of the calculation time (a lie: the GPU uses only one instruction per pow(), so pow is not that expensive).  Are there any buffers or something like that?
Not 90%, maybe only 20%, but it would still be cool if it could somehow be done.
I'm creating a HUD for my game, and I mostly keep thinking about how to blur behind HUD elements. But I have no ideas...
                                Attached Files

