Reading and writing data to an asset at runtime

I need to store a set of numeric data in an asset file. The data is generated at runtime (and re-generated over and over as we iterate) during development, then read back at runtime in the final release. What would be the best solution? (Data assets and data tables seem to be read-only.)

Could you not connect to an external database and save the data there, something like SQLite, or use a PHP API to an SQL/Mongo database? How much data are we talking about?
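
If it comes to that, the plain SQLite C API is pretty approachable. Here's a rough sketch of what writing the vectors out could look like; the table name and schema are made up for illustration, and error handling is trimmed:

```cpp
// Minimal sketch: stash XYZ vectors in SQLite via the C API.
// Table name/schema are placeholders, not from any existing project.
#include <sqlite3.h>

bool SaveVectors(const char* DbPath, const float* Vectors, int NumVectors)
{
    sqlite3* Db = nullptr;
    if (sqlite3_open(DbPath, &Db) != SQLITE_OK)
        return false;

    sqlite3_exec(Db, "CREATE TABLE IF NOT EXISTS vectors "
                     "(id INTEGER PRIMARY KEY, x REAL, y REAL, z REAL);",
                 nullptr, nullptr, nullptr);

    // One transaction around the whole batch is dramatically faster
    // than letting each INSERT commit on its own.
    sqlite3_exec(Db, "BEGIN TRANSACTION;", nullptr, nullptr, nullptr);

    sqlite3_stmt* Stmt = nullptr;
    sqlite3_prepare_v2(Db, "INSERT INTO vectors (x, y, z) VALUES (?, ?, ?);",
                       -1, &Stmt, nullptr);
    for (int i = 0; i < NumVectors; ++i)
    {
        sqlite3_bind_double(Stmt, 1, Vectors[i * 3 + 0]);
        sqlite3_bind_double(Stmt, 2, Vectors[i * 3 + 1]);
        sqlite3_bind_double(Stmt, 3, Vectors[i * 3 + 2]);
        sqlite3_step(Stmt);
        sqlite3_reset(Stmt);
    }
    sqlite3_finalize(Stmt);
    sqlite3_exec(Db, "COMMIT;", nullptr, nullptr, nullptr);
    sqlite3_close(Db);
    return true;
}
```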

I’ll have to talk about SQL with the programmers. The data is not small: around 300K vectors in a single asset, although only roughly 500 of them will be accessed at a time, and a new batch will probably be read every 2-3 seconds.

Yeah, I’m not sure where to begin on that. You probably want to cache it in RAM and minimize random disk writes unless you have extremely fast disk access. You’ll most likely want to use C++ to get the most speed benefit, plus some sort of array system that pages chunks in and out at intervals. You might be able to hook into Sequencer; it’s hard to say without knowing more.
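
To illustrate the chunked-cache idea, a sketch along these lines might work, assuming the vectors live in a flat binary file (that file format is an assumption, and the chunk size of 500 just matches the access pattern you described):

```cpp
// Rough sketch: keep the current batch of vectors in RAM and only touch
// the disk when a new chunk is needed. Assumes a flat binary file of
// tightly packed FVectors; ignores a short final chunk for brevity.
#include "CoreMinimal.h"
#include "HAL/FileManager.h"

class FVectorChunkCache
{
public:
    static const int32 ChunkSize = 500; // matches the ~500-at-a-time access pattern

    bool LoadChunk(const FString& FilePath, int32 ChunkIndex)
    {
        TUniquePtr<FArchive> Reader(IFileManager::Get().CreateFileReader(*FilePath));
        if (!Reader)
        {
            return false;
        }
        // One sequential read per batch instead of many random accesses.
        const int64 Offset = int64(ChunkIndex) * ChunkSize * sizeof(FVector);
        Reader->Seek(Offset);
        Cached.SetNumUninitialized(ChunkSize);
        Reader->Serialize(Cached.GetData(), ChunkSize * sizeof(FVector));
        return true;
    }

    const TArray<FVector>& GetCachedChunk() const { return Cached; }

private:
    TArray<FVector> Cached; // the ~500 vectors currently in use
};
```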

I’d recommend contacting Epic directly; if you can discuss it with them under NDA, they might be able to offer a solution.

I also highly recommend reducing precision, and extrapolating whatever data you can on the fly, to minimize the amount of data you need to shift around.
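
For example, a quantization scheme along these lines halves the per-vector cost by storing each component as an int16 (the ±10000 range is an arbitrary assumption; pick whatever bounds fit your data):

```cpp
// Illustration of the precision trade-off: quantize each component into
// an int16 over a known range, cutting a 12-byte FVector down to 6 bytes.
#include "CoreMinimal.h"

struct FPackedVector
{
    int16 X, Y, Z;
};

static const float QuantRange = 10000.f;              // assumed world-space bounds
static const float QuantScale = 32767.f / QuantRange; // int16 steps per world unit

FORCEINLINE FPackedVector PackVector(const FVector& V)
{
    FPackedVector Out;
    Out.X = (int16)FMath::Clamp<float>(V.X * QuantScale, -32767.f, 32767.f);
    Out.Y = (int16)FMath::Clamp<float>(V.Y * QuantScale, -32767.f, 32767.f);
    Out.Z = (int16)FMath::Clamp<float>(V.Z * QuantScale, -32767.f, 32767.f);
    return Out;
}

FORCEINLINE FVector UnpackVector(const FPackedVector& P)
{
    return FVector(P.X / QuantScale, P.Y / QuantScale, P.Z / QuantScale);
}
```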

Well, it is indeed a tough nut to crack. I just got access to UDN, where I was able to go into more detail. If we come up with a solution, I’ll update this question.

And yes, we’ll definitely reduce the amount of data and derive whatever we can on the fly, even if the result is not 100% precise.

Yup, no worries. Sorry I couldn’t offer something a little more solid in the way of an answer.

For now the system works like this: a recorder actor gathers data from actors doing their thing at runtime in PIE. The data is put into another actor that just contains the appropriate data structure, and the changes made to this actor are saved via “Keep Simulation Changes”. This way the data pack is not an asset but an actor in a map, but we can live with that because the recorded data is level-specific anyway.
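
For anyone trying to reproduce this, the data-holder actor boils down to something like the sketch below (the class and property names are just ours, nothing special about them). Because the samples sit in an editable UPROPERTY, the values written during PIE can be kept with “Keep Simulation Changes”:

```cpp
// Data-holder actor: the recorder fills Samples during PIE, then
// "Keep Simulation Changes" writes the values back into the level actor.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "RecordedDataHolder.generated.h"

UCLASS()
class ARecordedDataHolder : public AActor
{
    GENERATED_BODY()

public:
    // EditAnywhere so the values are serialized with the level and can be
    // picked up by Keep Simulation Changes after a PIE session.
    UPROPERTY(EditAnywhere, Category = "Recording")
    TArray<FVector> Samples;
};
```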

We cut the recorded data down considerably by deriving as many things as possible on the fly, so the amount of data we work with now is much more reasonable: 5,000 samples at ~400 KB.