    Emulated Uniform Buffers explanation from the Fortnite on Android Launch Technical Blog

    After reading this very interesting recent blog post from Epic: https://www.epicgames.com/fortnite/e...technical-blog

    I was wondering about the "Emulated Uniform Buffers" code path mentioned there, which gave a big performance saving for Fortnite on Android devices.

    Does anyone know how to use this? If it's not something that can simply be "enabled" with OpenGL, it would be great to see a little example project showing how to implement it.

    The relevant passage from the blog reads: "A number of optimizations in our OpenGL renderer gave small wins, but one of our biggest wins was a surprise and came as part of one of our memory optimizations: emulated uniform buffers. This is a code path UE4 has supported for years and is used for OpenGL ES2 devices that do not have native support for uniform buffers, aka constant buffers. Here’s how it works. At shader compilation time we identify all constants needed for a shader and pack them into an array from which the shader reads. We also store a mapping table that tells the engine where to gather constants from uniform buffers and where to place them in the constant array. At runtime, we keep uniform buffers in CPU accessible memory, use the mapping table to copy them to a temporary buffer, and upload all constants with a single glUniform4fv function call."
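    If I'm reading that right, the mechanism boils down to a compile-time mapping table plus a single glUniform4fv call per shader stage at draw time. Below is a minimal sketch of my understanding; the struct names, the layout details, and the function are my own assumptions for illustration, not UE4's actual types or code.

```cpp
// Sketch of the emulated uniform buffer idea: constants live in CPU-side
// buffers, a precomputed mapping table gathers them into one packed float4
// array, and a single glUniform4fv uploads everything for the shader.
#include <GLES2/gl2.h>
#include <cstring>
#include <vector>

// Built at shader compile time: where to read a run of float4s from, and
// where to write them in the packed constant array the shader reads.
struct FCopyInfo
{
    int SourceBufferIndex;   // which CPU-side uniform buffer
    int SourceOffsetFloat4s; // offset into that buffer, in float4 units
    int DestOffsetFloat4s;   // offset into the packed array, in float4 units
    int NumFloat4s;          // how many float4s to copy
};

struct FEmulatedUBLayout
{
    std::vector<FCopyInfo> CopyTable; // the mapping table
    int PackedSizeFloat4s;            // total size of the packed array
    GLint PackedArrayLocation;        // uniform location of the packed array
};

// CPU-accessible uniform buffer contents, one entry per bound buffer.
using FCpuUniformBuffers = std::vector<std::vector<float>>;

// At draw time: gather constants through the mapping table and upload them
// with one glUniform4fv call instead of binding real uniform buffer objects.
void CommitEmulatedUniformBuffers(const FEmulatedUBLayout& Layout,
                                  const FCpuUniformBuffers& Buffers,
                                  std::vector<float>& Scratch)
{
    Scratch.resize(Layout.PackedSizeFloat4s * 4);

    for (const FCopyInfo& Copy : Layout.CopyTable)
    {
        const float* Src = Buffers[Copy.SourceBufferIndex].data()
                         + Copy.SourceOffsetFloat4s * 4;
        float* Dst = Scratch.data() + Copy.DestOffsetFloat4s * 4;
        std::memcpy(Dst, Src, Copy.NumFloat4s * 4 * sizeof(float));
    }

    // One driver call for all constants used by this shader stage.
    glUniform4fv(Layout.PackedArrayLocation, Layout.PackedSizeFloat4s, Scratch.data());
}
```

    The win, presumably, comes from replacing many small uniform buffer bindings and updates with one contiguous upload of all constants.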
    Headgear - VR/AR solutions

    VR Game Release: We Come In Peace... Oculus GearVR | Google Play Daydream

    #2
    OpenGL ES2 and Vulkan use this path by default. OpenGL ES3.1 uses real uniform buffers by default, and we have added a CVar to switch it to the eUB path, which can be overridden per project. This change will come in the next release, 4.21.
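    The post doesn't name the CVar, so treat the following as an assumption to verify against the 4.21 sources: if the variable is something like OpenGL.UseEmulatedUBs, a per-project override would typically be a line such as OpenGL.UseEmulatedUBs=1 under [SystemSettings] in DefaultEngine.ini, or it could be flipped from engine code through the console manager:

```cpp
#include "HAL/IConsoleManager.h"

// Hypothetical sketch: the CVar name "OpenGL.UseEmulatedUBs" is assumed,
// not confirmed by the post; check your engine version before relying on it.
void ForceEmulatedUniformBuffers()
{
    IConsoleVariable* CVar =
        IConsoleManager::Get().FindConsoleVariable(TEXT("OpenGL.UseEmulatedUBs"));
    if (CVar)
    {
        // Switch the OpenGL ES3.1 path from real uniform buffers to the
        // emulated (packed glUniform4fv) path.
        CVar->Set(1);
    }
}
```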



      #3
      Basically, the one GPU feature that was supposed to make things faster turned out to be much slower, because the mobile GPU vendors emulate it in their drivers instead of implementing it correctly.
