Hey all,
Thank you for the kind words and considerations! I appreciate your feedback.
The music cue for the three-layer piece became a sort of proof of concept. However, its pacing and structure don't reflect the actual gameplay I set up in the level, so I'll probably go back to a blank sheet, rewrite the cue, and try to actually score the action in the level itself; that's the next step.
Nonetheless, working my way through building this in Blueprints has helped me establish some BPs that I'll be reusing and elaborating on as I continue development.
Oddly enough, I find the easiest way to store Blueprints externally is as a screenshot rather than a text/script copy.
Here’s a quick overview of some of the BPs I’ve stored from this experiment:
First up is the execution of a music segment (my term for a vertical slice of music):
I've created a custom function where you enter the BPM and measure information to calculate a delay, just before executing the SoundCue. Play Sound Attached returns a reference to the spawned audio, so I can refer to it in other parts of the Blueprints as "Current Track." Then I execute the delay while the music plays.
With this in mind, my music segments will need to be short if I want to be able to interrupt the music seamlessly. A beat-synchronized exit system would be the next step in developing this music segment system.
Here is my Music Delay Duration custom function:
This is a simple math function: it multiplies the meter (beats per bar) by the number of bars, adds the number of extra beats, and multiplies that total by 60 divided by the tempo (giving us the time per beat in seconds); it returns the total time as a float. Unfortunately, this is all entered manually. Ideally, a programmer could fold this information into, say, an extension of the SoundCue (call it a MusicCue) whose play function also returns a delay value. But that's out of my reach; I'm trying to use what's available to me.
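For anyone who wants the math outside of Blueprints, here's the same calculation sketched in Python (the function and parameter names are mine, not anything from the engine):

```python
def music_delay_duration(bpm: float, beats_per_bar: int, bars: int,
                         extra_beats: int = 0) -> float:
    """Length of a music segment in seconds.

    total beats    = beats_per_bar * bars + extra_beats
    secs per beat  = 60 / bpm
    """
    total_beats = beats_per_bar * bars + extra_beats
    seconds_per_beat = 60.0 / bpm
    return total_beats * seconds_per_beat

# A 4-bar segment in 4/4 at 120 BPM lasts exactly 8 seconds.
print(music_delay_duration(120, 4, 4))  # 8.0
```

That float is what feeds the Delay node before the next segment fires.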
Originally, I made a custom function to manage the volume interpolation on the 2-layer system like this:
That looked like this on the inside:
But I had to scrap that once I wanted to interpolate values on a medium level first and then a high level second, so I rebuilt it like this:
Basically, there are a few things going on here. Because this manages the interpolation and the fades need to happen in real time, it runs off the Event Tick.
First, I set a float called MovementControlValue, which tracks the current state of the interpolation. The interpolation is always moving toward a target value. That target toggles between two values depending on whether the player controls are active (is the player moving?), a state it reads from the Player Controller Blueprint via a Blueprint Interface. An additional comparison selects between two different interp speeds based on whether the value is above or below 1 (the middle value). In effect, the interpolation speed differs depending on whether the player is moving: faster when moving, slower when not. This makes the ramp up faster than the ramp down.
After setting the MovementControlValue float, we set an AudioComponent float parameter. These are parameters linked to a Continuous Modulator inside the SoundCues. There are two of them here: Med Status and Act Status. The "Status" in each case refers to the volume value of the respective Continuous Modulator. Having two lets me set the volume of the Medium music layer independently of the Active music layer.
After the first Set Float Parameter, we do some rough math to "interpret" the MovementControlValue into a usable volume float. I very crudely pick the minimum audible MovementControlValue and then Clamp the result into a usable 0.01 to 1.00 volume value.
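That "interpret" step looks something like this in Python. The MIN_AUDIBLE threshold and the linear rescale are my guesses at the crude math; the 0.01 floor is the important part:

```python
# Threshold below which the layer sits at the volume floor (made-up value).
MIN_AUDIBLE = 0.25

def control_to_volume(movement_control_value: float) -> float:
    # Rescale [MIN_AUDIBLE, 1.0] onto [0.0, 1.0], then clamp into
    # [0.01, 1.0]. The 0.01 floor keeps the layer playing at a
    # near-inaudible level instead of stopping it outright, which is
    # what keeps the crossfade sample-accurate.
    scaled = (movement_control_value - MIN_AUDIBLE) / (1.0 - MIN_AUDIBLE)
    return max(0.01, min(1.0, scaled))

print(control_to_volume(0.1))    # 0.01 -- pinned at the floor
print(control_to_volume(1.0))    # 1.0  -- full volume
```

This is the value that gets fed into the Set Float Parameter for the layer's Continuous Modulator.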
This is a pretty “janky” system, but it works.
One thing I found to be very important is setting the minimum volume to 0.01 rather than 0.00. That way all the layers keep playing; they're just nearly inaudible. This seems to be the only way to keep the crossfade sample-accurate, because the system appears to not play sounds that have a volume of 0.00.
I hope that was a decent elaboration of how I created this so far; let me know if there are other questions or if something needs clarification.
My exploration was waylaid by an attempt to build a version of UE4 with the Wwise integration, which ran into several weird bugs that I think have more to do with my system than anything else. Software development… :rolleyes: