Hello, we are working on a student VR project which is due in 2 months.
The main mechanic of the project is voice manipulation and visualisation.
The new Audio Engine seems great, but with the lack of documentation and our zero experience in sound programming, it's unclear how to achieve what we need,
so if you could point us in the right direction it would be great!
The first thing we would like to do is control real-time parameters of a particle system based on the player's voice coming from the VR mic (its color is based on musical notes and its shape on the amplitudes of different frequency bands).
As far as I can see, Audio Capture provides an interpolated envelope value, which doesn't seem to be enough for what we are trying to achieve.
The question is: how do we get the frequencies from Audio Capture, and how do we calculate which note it was?
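To be clear, the note math itself we think we understand; the part we are missing is getting frequency data out of Audio Capture. Assuming we could obtain a dominant frequency (e.g. from an FFT over the captured buffer), here is a rough sketch of the mapping we have in mind (plain C++, all names are ours, not engine API):

```cpp
#include <cmath>
#include <string>

// Hypothetical helper: maps a dominant frequency (Hz) to the nearest
// MIDI note, assuming we can already get that frequency out of the
// captured audio (e.g. via an FFT). Names are ours, not engine API.
int FrequencyToMidiNote(float FrequencyHz)
{
    // MIDI note 69 is A4 = 440 Hz; 12 semitones per octave.
    return static_cast<int>(std::lround(69.0f + 12.0f * std::log2(FrequencyHz / 440.0f)));
}

// Turns a MIDI note number into a readable name, e.g. 60 -> "C4".
std::string MidiNoteToName(int MidiNote)
{
    static const char* Names[12] = {
        "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"
    };
    const int Octave = MidiNote / 12 - 1; // MIDI octave convention
    return std::string(Names[MidiNote % 12]) + std::to_string(Octave);
}

// Example: FrequencyToMidiNote(261.6f) == 60, MidiNoteToName(60) == "C4".
```

So the real question is how to get at the raw samples or FFT data behind Audio Capture in the first place.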
We want to pre-record the voice, apply sound FX to it, and then visualise it using the logic from step (1).
As far as I can see from this demo: https://www.youtube.com/watch?v=LFSxLaSNttQ&t=437s
the recording and FX part should be possible, but I could not find documentation or a demo project on how to do it.
Could anyone please provide a link or something that shows how to do it, or tell us which component to use to record the sound and how to apply FX to it?
Another question, similar to (1): how can we get the frequencies and calculate which notes were played from the processed sound?
So, to summarise: we are after some kind of GetNote and GetFrequenciesByRange nodes that work regardless of whether the sound comes from a live or a pre-recorded source (a rough sketch of what we have in mind is below).
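To make the GetFrequenciesByRange idea concrete, this is roughly what we imagine such a node doing internally: just plain math over FFT magnitudes (all names and parameters are ours, not existing engine functions):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of GetFrequenciesByRange: sum the magnitudes of
// FFT bins that fall inside [MinHz, MaxHz]. Assumes Magnitudes[i] is
// the magnitude of bin i from an FFT of size FftSize over audio
// sampled at SampleRate (names are ours, not engine API).
float GetFrequenciesByRange(const std::vector<float>& Magnitudes,
                            float SampleRate, std::size_t FftSize,
                            float MinHz, float MaxHz)
{
    const float BinWidthHz = SampleRate / static_cast<float>(FftSize);
    float Sum = 0.0f;
    for (std::size_t Bin = 0; Bin < Magnitudes.size(); ++Bin)
    {
        const float BinCenterHz = Bin * BinWidthHz;
        if (BinCenterHz >= MinHz && BinCenterHz <= MaxHz)
        {
            Sum += Magnitudes[Bin];
        }
    }
    return Sum; // band amplitude, to drive the particle shape
}
```

GetNote would then just take the frequency of the dominant bin and feed it through the FrequencyToMidiNote sketch above; the missing piece for us is where those FFT magnitudes come from in the new Audio Engine.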
And also some documentation on how to record and manipulate sound using the new Audio Engine.
Your help is greatly appreciated!!!