
Global Illumination alternatives


  • started a topic Global Illumination alternatives

    Hi guys,
I just started getting interested in this fantastic realtime world, coming from a VFX background (and from waiting hours per frame for renders).

I browsed the forum a bit, and aside from Lionhead's effort to include LPV in this release I didn't see much work from others, just a lot of complaints about the lack of SVOGI (which was looking very good, btw).

So, I just wanted to put out a list of alternative algorithms that I found online, hoping that someone with programming experience, access to the source, and a lot of spare time can implement something the way Lionhead did.
    So far the most interesting stuff I found is this:

    Delta Radiance Transfer
    Modular Radiance Transfer
    Deferred irradiance volumes
    This guy who has something nice going but I don't really understand what's going on

Those are just a couple of papers I stumbled upon on the internet. Usually I find offline rendering stuff (like path tracing, etc.) to read up on, but all this realtime stuff has gotten my ears whizzing!
Of course anyone is welcome to add their findings.

One thing I wanted to know: is there any research on faking a path tracing algorithm using GPU particles? Unreal Engine can manage a lot of those easily, and they can bounce; given that the rendering engine uses a deferred path, would it be possible to let them illuminate the scene at each bounce while losing energy? Also, why not store the light afterwards in something like a voxel grid for the indirect bounces, and let the particles illuminate every cell they traverse? That way you could emulate participating media and volumetrics.
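Rough sketch of what I mean, in case it helps: particles carry energy, lose some at each bounce, and deposit light into a coarse voxel grid that could later be sampled for indirect lighting or participating media. The grid size, albedo, and bounce count here are made-up illustration values, and a random walk stands in for a real particle/collision system:

```python
import numpy as np

GRID = 16          # 16^3 voxel grid covering the scene bounds
ALBEDO = 0.7       # fraction of energy retained per bounce
BOUNCES = 3

rng = np.random.default_rng(0)
grid = np.zeros((GRID, GRID, GRID))

# Spawn "photon" particles near a light source, each with unit energy.
positions = rng.uniform(0.4, 0.6, size=(1000, 3))
energies = np.ones(1000)

for bounce in range(BOUNCES):
    # Random-walk step standing in for real particle collisions/bounces.
    positions = np.clip(positions + rng.normal(0, 0.15, positions.shape), 0, 0.999)
    # Deposit each particle's current energy into the voxel it traverses.
    cells = (positions * GRID).astype(int)
    np.add.at(grid, (cells[:, 0], cells[:, 1], cells[:, 2]), energies)
    energies *= ALBEDO  # lose energy at each bounce

print(f"total deposited energy: {grid.sum():.1f}")
```

The grid could then be sampled during shading for the indirect term; the hard part a real implementation would face is visibility between bounces, which this toy ignores completely.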

Just thoughts, though. I love the looks you can get now, especially the reflection system. I was astounded when I saw Remember Me use it on UE3, and I'm glad you picked up from there and refined it.

    Cheers!
    Last edited by max.pareschi; 04-08-2014, 09:51 AM. Reason: other links found

  • replied
It looks like the initial raytracing code got merged two days ago:
    https://github.com/EpicGames/UnrealE.../dev-rendering

  • replied
    Originally posted by ZacD View Post
    Wait until DXR features start rolling out in early 2019 or version 4.22.
That will be too expensive, for sure.

  • replied
    Originally posted by ZacD View Post
    Wait until DXR features start rolling out in early 2019 or version 4.22.
Hopefully those features just speed up the SDF tracing they were doing anyway, so you get backwards compatibility on PC for next-generation games. That stuff was blazing fast and had good flexibility; I'd much rather see it finally completed and fully usable than go for something shiny that incurs as big a performance hit as reflections do in BFV.
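For anyone wondering why the SDF tracing is so fast: it boils down to sphere tracing, where you step along the ray by exactly the distance the field reports, so empty space is skipped in big jumps. A minimal generic sketch (a toy, not Epic's distance-field implementation), with the scene reduced to a single unit sphere at the origin:

```python
import math

def scene_sdf(p):
    # Signed distance to a unit sphere at the origin.
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0

def sphere_trace(origin, direction, max_steps=64, eps=1e-4):
    t = 0.0
    for _ in range(max_steps):
        p = [origin[i] + t * direction[i] for i in range(3)]
        d = scene_sdf(p)
        if d < eps:
            return t   # hit: distance along the ray
        t += d         # safe to advance by the distance to the nearest surface
    return None        # miss

print(sphere_trace([0.0, 0.0, -3.0], [0.0, 0.0, 1.0]))  # hits the sphere at t = 2.0
```

The same loop drives shadow and AO cones in distance-field lighting; widening the "ray" into a cone is what makes it cheap compared to triangle-accurate ray tracing.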

  • replied
    Wait until DXR features start rolling out in early 2019 or version 4.22.

  • replied
Any news on this topic?

  • replied
    Originally posted by iniside View Post

That's a stretch. You might just as well say that Enlighten does the same thing as VXGI. Yes, they both do real-time GI, but that's where the similarities end.

The presented technique is completely different from voxel-based techniques. It's far more similar to the techniques used in The Division/ACU and other auto-placed probes which hold precomputed light transport.
The novelty here comes from its sparse nature, dramatically lowering the storage/memory footprint.
I looked at VXGI and it looks like it kills people's GPUs and framerates. It's quite nice, but I have only seen videos of it on small maps, so I'm not sure how it would look on larger landscapes, not to mention that it's an NVIDIA product, so by default it's going to run worse on AMD GPUs.

  • replied
    Originally posted by darthviper107 View Post
It's not voxels, but it's doing a lot of the same type of thing; some open-world games use a similar but much simpler technique.
That's a stretch. You might just as well say that Enlighten does the same thing as VXGI. Yes, they both do real-time GI, but that's where the similarities end.

The presented technique is completely different from voxel-based techniques. It's far more similar to the techniques used in The Division/ACU and other auto-placed probes which hold precomputed light transport.
The novelty here comes from its sparse nature, dramatically lowering the storage/memory footprint.
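To make the "sparse" point concrete, here's a minimal sketch of the idea: probes are only baked in cells near geometry, so storage scales with surface area rather than volume. The probe spacing, contents (a single RGB irradiance value standing in for real precomputed light transport), and hashing scheme are illustrative assumptions, not the paper's actual layout:

```python
import numpy as np

CELL = 1.0  # probe spacing in metres (illustrative)

def cell_key(p):
    # Quantize a world position to its probe cell.
    return tuple(np.floor(np.asarray(p) / CELL).astype(int))

probes = {}  # sparse storage: cell -> precomputed irradiance

def bake_probe(p, irradiance):
    probes[cell_key(p)] = np.asarray(irradiance, dtype=float)

def sample_irradiance(p):
    # Nearest-probe lookup; a real system would blend several probes.
    return probes.get(cell_key(p), np.zeros(3))

# Bake probes only along a "wall" at x ~ 0 instead of filling the volume.
for y in range(10):
    for z in range(10):
        bake_probe((0.5, y + 0.5, z + 0.5), (0.2, 0.2, 0.3))

print(len(probes), "probes instead of", 10 * 10 * 10, "for a dense grid")
```

With real light transport per probe (visibility, directionality) each entry is bigger, but the win is the same: empty air stores nothing.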

  • replied
It's not voxels, but it's doing a lot of the same type of thing; some open-world games use a similar but much simpler technique.

  • replied
    Originally posted by darthviper107 View Post
That's not all that different than the voxel solution; it just uses fewer points.
The technique posted above has nothing to do with voxels. It's much higher quality, works mainly on static objects and requires a hefty amount of precomputation, but it does support fully dynamic lighting and dynamic occluders.

Also, it's very fast at runtime and very memory efficient. Looks like a nice compromise for semi-static big open-world games.

  • replied
That's not all that different than the voxel solution; it just uses fewer points.

  • replied
https://www.youtube.com/watch?v=mECv52eSjBo Any use to Epic? Discovered that link on Blenderartists.

  • replied
    http://www.aduprat.com/portfolio/?page=articles/PBGI

Something I stumbled upon today. The technique has very high quality, albeit the presented implementation is very slow (200 ms). It works similarly to Geomerics, but it's fully realtime (it does not need to precompute geometry visibility).
Might be interesting to consider.

  • replied
    Hi, sorry to bring up an old thread.

I see that the screen-space deep G-buffer method seems to work really well and fast; its only problem is being screen space. Would rendering the scene from one or two more views at very low quality really impact performance that much? They wouldn't be full-quality passes: just very basic LOD meshes (only very close ones, or far big ones, being rendered), low solid-color textures, no tessellation/skinning/complex materials, etc., and a really low-res render target. GI is a very low-frequency effect.

NVIDIA's current VXGI works amazingly well, but it's really slow in interior scenes on the current generation of GPUs (half the framerate for 8 cones is really bad). Would the screen-space method really be that much slower than VXGI when rendering the additional views in very crude detail and at low resolution?
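As a back-of-envelope check on the raw pixel counts involved (the resolutions here are illustrative assumptions, not measured numbers):

```python
# How much extra rasterization would one or two crude auxiliary views add?
main_view = 1920 * 1080   # full-quality primary view
aux_view = 320 * 180      # very low-res GI-only view
num_aux = 2

extra = num_aux * aux_view
print(f"extra pixels: {extra} ({100 * extra / main_view:.1f}% of the main view)")
```

Of course draw-call overhead and geometry processing matter at least as much as pixel counts, so this only bounds the fill-rate side of the question; the LOD/material simplifications described above are what would keep the other costs down.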

  • replied
    Originally posted by MrRabbit View Post
I haven't seen anyone mention 'Image-Space Photon Mapping' (ISPM):

    https://www.youtube.com/watch?v=GckOkpeJ3BY
    and more info here: http://graphics.cs.williams.edu/papers/PhotonHPG09/ (with source code)

    As far as I understood:
    -no need for voxelization, uses the actual polygonal mesh
    -not view-dependent
    -gives Global Illumination, Ambient Occlusion and Caustics with one solution

It is from 2010, but I think with today's hardware, DX12 and optimizations (doing the path tracing part on the GPU rather than on the CPU, as the paper proposes), this technique could be even more performant.
    Quite interesting, would love to see this in UE4.
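For anyone curious what the photon-map side of such a technique involves, here's a toy density-estimation step in the spirit of classic photon mapping (not the ISPM paper's image-space scatter pass): radiance at a point is estimated from the k nearest stored photons. The photon count, k, and the 2D setup are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
photon_pos = rng.uniform(0, 1, size=(5000, 2))   # photons landed on a 2D surface
photon_pow = np.full(5000, 1.0 / 5000)           # each carries equal power

def estimate(x, k=50):
    # Classic k-nearest-photon density estimate: sum the power of the
    # k nearest photons and divide by the disc area that encloses them.
    d = np.linalg.norm(photon_pos - x, axis=1)
    idx = np.argpartition(d, k)[:k]
    r = d[idx].max()
    return photon_pow[idx].sum() / (np.pi * r * r)

# Uniform photons over the unit square should give a density near 1.
print(f"estimated irradiance: {estimate(np.array([0.5, 0.5])):.2f}")
```

ISPM's trick is replacing this gather with scattering photon volumes in image space, which maps much better to GPU rasterization; the stored-photon representation is the part the two share.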
