As some of you may know, I'm interested in generating sound in real time (dynamic sound, or whatever you gamers call it).
After some time spent trying to reverse engineer the mod player from GitHub, I'm more than completely lost.
What I'm looking for is the part where the data is passed to the audio buffer.
Ideally I would replace the ‘mod player’ part with something of my own: a waveform generated through code.
Any ideas, or has anyone even looked at this?
Or is there some other solution?
The end goal is to make something (possibly Blueprint nodes) that feeds directly into the audio buffer.
I have made synths and sound generators in a number of guises before, so generating audio from code is not the problem; it's where and how to feed it into UE4 that has me stumped.
USoundModWave::GeneratePCMData is where the data is generated from the mod file and queued for the audio buffer to consume.
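For reference, the virtual you'd be overriding looks roughly like this (a sketch from memory; double-check the exact signature against the engine version you're on):

```cpp
// Declared on USoundWaveStreaming (approximate):
// fill PCMData with up to SamplesNeeded 16-bit samples and
// return the number of bytes actually written.
virtual int32 GeneratePCMData(uint8* PCMData, const int32 SamplesNeeded);
```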
The basic flow is this:
The Sound Mod gets added to an ActiveSound via the standard play-sound methods (an audio component, playing the mod directly with Play Sound at Location, or via the node in a Sound Cue).
USoundMod::Parse finds the FWaveInstance for the given ActiveSound (or creates it), sets the appropriate properties, and adds it to the list of active wave instances for the frame.
This is where things get kind of weird (sorry): I dynamically create a USoundModWave object that is associated with the FWaveInstance, because we need a USoundWave (or child) for wave instances to implement all the appropriate interfaces.
USoundModWave inherits from the (poorly named) USoundWaveStreaming, which should really be USoundWaveProcedural (oh well, someday). Procedural sound waves need to implement GeneratePCMData to provide the buffer with data each time it requests it.
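So for your use case, you'd skip the mod player entirely and subclass USoundWaveStreaming yourself, generating whatever waveform you like in GeneratePCMData. Here's a minimal, untested sketch of a sine generator; UMySineWave, Frequency, and Phase are placeholder names of mine, and the constructor macro/initializer details may vary by engine version:

```cpp
// MySineWave.h / .cpp (condensed) -- untested sketch, placeholder names.
#include "Sound/SoundWaveStreaming.h"
#include "MySineWave.generated.h"

UCLASS()
class UMySineWave : public USoundWaveStreaming
{
	GENERATED_UCLASS_BODY()

public:
	float Frequency; // oscillator frequency in Hz
	float Phase;     // current oscillator phase in radians

	virtual int32 GeneratePCMData(uint8* PCMData, const int32 SamplesNeeded) override;
};

UMySineWave::UMySineWave(const FObjectInitializer& ObjectInitializer)
	: Super(ObjectInitializer)
	, Frequency(440.f)
	, Phase(0.f)
{
	NumChannels = 1;
	SampleRate = 44100;
	// Never "finishes" on its own -- we can always generate more data.
	Duration = INDEFINITELY_LOOPING_DURATION;
}

int32 UMySineWave::GeneratePCMData(uint8* PCMData, const int32 SamplesNeeded)
{
	int16* OutSamples = reinterpret_cast<int16*>(PCMData);
	const float PhaseStep = 2.f * PI * Frequency / (float)SampleRate;

	for (int32 i = 0; i < SamplesNeeded; ++i)
	{
		// Scale a [-1, 1] sine into signed 16-bit PCM.
		OutSamples[i] = (int16)(32767.f * FMath::Sin(Phase));
		Phase = FMath::Fmod(Phase + PhaseStep, 2.f * PI);
	}

	// Contract is to return bytes written, not samples.
	return SamplesNeeded * sizeof(int16);
}
```

Once you have that, playing it is just the standard path from step one: construct the object (NewObject, or ConstructObject on older versions) and hand it to an audio component or one of the UGameplayStatics play-sound functions, since it's a USoundWave like any other.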