What Are You Working On? Community Screenshots & Videos

Started rewriting my threaded zip reader.

Previously, it was doing (roughly) this process to load a texture (a rough code sketch follows the list):

  • Read (from disk) the zip header structure.

  • If a request to load the compressed data into a buffer hasn’t been made, make it (same with the timer that checks the requests).

  • In the timer, if the compressed load is finished (the load is via FIOSystem::LoadData, btw), prepare and fire off the thread to decompress it and also generate the mip data for the texture.

  • Also in the timer function, it checks the decompress/mip-gen tasks and, once one finishes, creates a texture and puts it into the cache.

  • In blueprint, the cache is checked, and when a texture that was requested shows up, it’s set onto the page mesh as a dynamic material instance.
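
Roughly, the old per-page flow looked like the sketch below. This is a simplified stand-in, not the actual code: std::future/std::async stand in for FIOSystem::LoadData and the engine's task system, and every helper name (ReadZipHeader, LoadCompressedData, DecompressAndGenerateMips) is a hypothetical stub.

```cpp
// Minimal sketch of the old per-page flow, for illustration only.
// std::future/std::async stand in for FIOSystem::LoadData and the engine's
// task system; every helper name here is a hypothetical placeholder.
#include <chrono>
#include <cstdint>
#include <future>
#include <map>
#include <string>
#include <vector>

struct ZipEntry { uint64_t Offset = 0; uint64_t CompressedSize = 0; };
struct Texture  { std::vector<uint8_t> MipData; };

// Stubs standing in for the real disk read / decompress / mip-gen work.
ZipEntry             ReadZipHeader(const std::string&, const std::string&)  { return {}; }
std::vector<uint8_t> LoadCompressedData(const ZipEntry&)                    { return {}; }
Texture              DecompressAndGenerateMips(const std::vector<uint8_t>&) { return {}; }

struct PageRequest
{
    std::future<std::vector<uint8_t>> CompressedLoad; // stand-in for the FIOSystem::LoadData request
    std::future<Texture>              DecompressTask;  // stand-in for the decompress/mip-gen thread
};

std::map<std::string, PageRequest> Pending;
std::map<std::string, Texture>     TextureCache; // blueprint polls this

void RequestPage(const std::string& ZipPath, const std::string& PageName)
{
    if (Pending.count(PageName) || TextureCache.count(PageName))
        return;

    // Old flow: the zip header gets re-read for every single page request.
    ZipEntry Entry = ReadZipHeader(ZipPath, PageName);
    Pending[PageName].CompressedLoad =
        std::async(std::launch::async, LoadCompressedData, Entry);
}

// Called from the timer: advance each request one stage per tick.
void TickRequests()
{
    for (auto& [Name, Req] : Pending)
    {
        if (Req.CompressedLoad.valid() &&
            Req.CompressedLoad.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
        {
            // Compressed load finished: fire off the decompress/mip-gen task.
            std::vector<uint8_t> Compressed = Req.CompressedLoad.get();
            Req.DecompressTask =
                std::async(std::launch::async, DecompressAndGenerateMips, std::move(Compressed));
        }
        else if (Req.DecompressTask.valid() &&
                 Req.DecompressTask.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
        {
            // Task finished: create the texture and drop it into the cache
            // (a real version would also remove the finished entry from Pending).
            TextureCache[Name] = Req.DecompressTask.get();
        }
    }
}
```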


Well, that works great for 1-2 pages, but it quickly bloats memory when loading a whole comic, and it also introduces some hangs from reading the zip header over and over (unnecessarily).

So I’ve restructured it quite a bit, and the result (not finished yet) is going to be a cleaner process (sketched in code after the list):

  • For a given zip file, if it hasn’t had info cached yet, it will submit an FIOSystem::LoadData for the entire zip file.

  • Once that comes back (checked in the timer), a task is created to decompress and generate mip data (the task now just gets a pointer to the compressed zip data - no more copying buffers around, and since the data is only ever read, there’s no worry about touching it from the thread).

  • The task decompresses the data from the compressed zip buffer (an offset corresponding to the zipped file to load is added to the pointer).

  • After it’s decompressed, the data is decoded from jpg into a raw bitmap buffer, and then a mip buffer is generated (its pointer is passed in to the task).

  • When the task is complete, the mip data will be copied into the texture.
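
The part that changes the most is the task itself: it now just gets read-only access to one shared buffer holding the whole zip, plus an offset, and writes the generated mips into a buffer that was passed in. Here's a minimal sketch of that shape - again all helper names are hypothetical, not the real code:

```cpp
// Minimal sketch of the new task shape, for illustration only.
// The whole zip lives in one shared, read-only buffer (filled once by the
// equivalent of FIOSystem::LoadData for the entire file); the task only
// reads from it via pointer + offset and writes into a caller-owned mip buffer.
#include <cstdint>
#include <memory>
#include <vector>

struct ZipEntry  { uint64_t Offset = 0; uint64_t CompressedSize = 0; };
struct MipBuffer { std::vector<uint8_t> Pixels; };

// Hypothetical stubs for the decompress / jpg decode / mip-gen steps.
std::vector<uint8_t> DecompressEntry(const uint8_t*, uint64_t)                 { return {}; }
std::vector<uint8_t> DecodeJpgToBitmap(const std::vector<uint8_t>&)            { return {}; }
void                 GenerateMips(const std::vector<uint8_t>&, MipBuffer* Out) { Out->Pixels.clear(); }

using ZipBuffer = std::shared_ptr<const std::vector<uint8_t>>; // entire zip, loaded once

// Task body: read-only access to the shared zip buffer, no copies of the
// compressed data, output written into the mip buffer that was passed in.
void DecompressAndMipTask(ZipBuffer Zip, ZipEntry Entry, MipBuffer* OutMips)
{
    const uint8_t* Compressed     = Zip->data() + Entry.Offset;             // pointer + offset
    std::vector<uint8_t> JpgBytes = DecompressEntry(Compressed, Entry.CompressedSize);
    std::vector<uint8_t> Bitmap   = DecodeJpgToBitmap(JpgBytes);            // jpg -> raw bitmap
    GenerateMips(Bitmap, OutMips);                                          // fills the passed-in mip buffer
    // Once the task reports complete, the timer copies OutMips into the texture.
}
```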

In addition to that, I’ve set it up to use a smarter cache system, which will load the requested page and 3-4 pages after it (if available). It will request new pages and unload old ones when getting about halfway through the currently loaded pages.
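
The shape I have in mind for that cache is a sliding window, roughly like the sketch below. The window size and the "halfway" trigger are the only real logic here; the names and the set-based bookkeeping are just placeholders.

```cpp
// Rough sketch of the sliding-window page cache idea, for illustration only.
#include <algorithm>
#include <iterator>
#include <set>

struct PageCache
{
    static constexpr int WindowSize = 4; // requested page + 3-4 pages after it
    std::set<int> Loaded;                // page indices currently cached

    void OnPageShown(int Page, int PageCount)
    {
        // Only refill once we're about halfway through the loaded window.
        const int LastLoaded = Loaded.empty() ? Page : *Loaded.rbegin();
        if (LastLoaded - Page > WindowSize / 2)
            return;

        // Request the next pages in the window (if available)...
        for (int p = Page; p <= std::min(Page + WindowSize, PageCount - 1); ++p)
            if (!Loaded.count(p))
                Loaded.insert(p); // stand-in for kicking off an async page load

        // ...and unload the pages we've already read past.
        for (auto it = Loaded.begin(); it != Loaded.end();)
            it = (*it < Page) ? Loaded.erase(it) : std::next(it);
    }
};
```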

Later on, I’d like to also keep a cache of half-sized pages to show if the page requested is still being loaded.
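
For that, the lookup would probably end up looking something like this - purely a hypothetical shape, a half-res fallback while the full page is still in flight:

```cpp
// Hypothetical shape for the half-sized fallback: show a low-res page if
// the full-res one isn't in the cache yet.
#include <map>
#include <optional>

struct Texture {};

std::map<int, Texture> FullPages;
std::map<int, Texture> HalfPages; // pre-generated half-sized versions

std::optional<Texture> GetPageForDisplay(int Page)
{
    if (auto it = FullPages.find(Page); it != FullPages.end())
        return it->second;        // full-res ready
    if (auto it = HalfPages.find(Page); it != HalfPages.end())
        return it->second;        // half-res while the full page loads
    return std::nullopt;          // nothing yet
}
```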