Dan's Interactive Music Exploration

One of the first things that excited me about UE4 was the assertion that the timing problems with Kismet had been sorted out in Blueprints, so I was really eager to try creating some interactive music with Blueprints.

The first thing I did was take an old piece originally designed to loop indefinitely and set it up to loop until a key was pressed:

https://youtube.com/watch?v=F9eIIaM0HTw

So that was interesting, though there was a small timing issue with the Blueprints in the simulation setup.

Then I decided to try to build a little level that would give me some opportunities to create interactive music, so I built this little dungeon-y level:

Then I composed a short 8-bar loop. My premise was that I wanted music that dynamically adjusted based on player character movement; I felt that, under certain orchestrational rules, you could create natural movement between parallel layers of music. Here’s the first 2-tier version:

Realizing that the transition still wasn’t smooth enough, I opted to create a 3-layer arrangement, expanding on my interactive orchestration theories with the intention of creating very smooth and natural interactive movement:

I’d love to hear what you guys think so far.

Cheers,

  • Dan

Pretty cool. :slight_smile: Increasing the medium level with movement is good, but the high levels should depend on mouse input so that it gets more intense during fights, I think. But then, it depends on the game you are making, of course.

Keep us posted!

This is great work, it’s awesome to see somebody experimenting with interactive music. Is there any chance you could explain your Blueprints in more detail, or maybe make a tutorial or two for composers like myself who don’t have much Kismet/Blueprint experience? I think I get the general idea, but then I look at boxes like “Music Case Manager from Blueprint Interface” and don’t have a clue what’s going on. There’s a lack of documentation on audio so far from Epic, and although I’ve been told that FMOD integration is coming, there’s no word on when that will be.

The transitions are incredibly smooth. I like what you have so far.

Very cool! I was planning to do dynamically layered themes of some kind in my own upcoming project, so seeing your progress is quite encouraging and useful.

Nice work Dan! I always enjoy your videos, keep up the good work!

-Brendan

Great work so far Dannthr! Transitions are very smooth. I find that audio can really make or break any game, and your latest video of the Dungeon was very well done. Keep up the great work and be sure to keep us updated as you progress through your dynamic layered theme journey.

It looks great so far! I can already feel the excitement from jumping through the lava-filled temple halls with the music chasing me as I go. Perhaps in the finished version, there could be an urge to get you going more quickly, like the floors collapsing in behind you, so that you almost automatically cue the more exciting music as you reach more dangerous locations? Can’t wait to see another update video on this!

Nice work!

I would also appreciate a more detailed explanation of your blueprints.

Hey all,

Thank you for the kind words and considerations! I appreciate your feedback.

The music cue for the three-layer piece became a sort of proof of concept. However, its pacing and structure don’t reflect the actual gameplay I set up in the level, so I’m probably going to go back to a blank sheet, write the cue over again, and try to actually score the action in the level itself. That’s the next step.

Nonetheless, working my way through building this in Blueprints has helped me establish some BPs that I’ll be reusing and elaborating on as I continue development.

Oddly enough, I find that the easiest way to store Blueprints externally is as a screenshot rather than as a text/script copy.

Here’s a quick overview of some of the BPs I’ve stored from this experiment:

First up is the execution of a music segment (which is my term for a vertical slice of music):

I’ve created a custom function where you enter the BPM and measure information to quickly calculate a delay just before executing the SoundCue. The Play Sound Attached node returns a pointer to the SoundCue so that I can reference it in other parts of the Blueprints as “Current Track.” Then I execute a delay while the music plays.

With this in mind, my music segments will need to be small if I wish to be able to interrupt the music seamlessly. Creating a beat-synchronized exit system would be the next step in developing this music segment system.
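As a rough sketch of the timing math a beat-synchronized exit would need (plain code, purely hypothetical; my Blueprints don’t implement this yet, and the names are just illustrative):

```python
def time_to_next_boundary(elapsed_s, bpm, beats_per_boundary=1):
    """Seconds to wait until the next beat (or bar) boundary.

    elapsed_s: time since the segment started playing.
    beats_per_boundary: 1 to exit on any beat, or the meter (e.g. 4)
    to exit only on a bar line.
    """
    seconds_per_beat = 60.0 / bpm
    boundary = seconds_per_beat * beats_per_boundary
    remainder = elapsed_s % boundary
    return 0.0 if remainder == 0.0 else boundary - remainder
```

For example, at 120 BPM, 1.25 seconds into a segment, the next beat lands 0.25 seconds away; you delay that long before stopping the current track and starting the next one.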

Here is my Music Delay Duration custom function:

This is a simple math function: it multiplies the meter by the number of bars, adds the number of beats, and multiplies that total by 60 divided by the tempo (giving us the time per beat). It returns the total time as a float value. Unfortunately, this is all manually entered. Ideally, a programmer could incorporate this information into, say, an extension of the SoundCue (call it a MusicCue) whose play function also returns a delay value, but that’s moot; I’m trying to use what’s available to me.
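For reference, the same math in plain code (a sketch only; the Blueprint itself is a visual graph, so the function and parameter names here are just illustrative):

```python
def music_delay_duration(bpm, meter, bars, beats=0):
    """Total duration in seconds of `bars` bars plus `beats` extra beats."""
    seconds_per_beat = 60.0 / bpm        # e.g. 120 BPM -> 0.5 s per beat
    total_beats = meter * bars + beats   # e.g. 4/4 meter, 8 bars -> 32 beats
    return total_beats * seconds_per_beat
```

So an 8-bar loop in 4/4 at 120 BPM works out to 32 beats times 0.5 seconds, i.e. a 16-second delay before the segment can be interrupted.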

Originally, I made a custom function to manage the volume interpolation on the 2-layer system like this:


That looked like this on the inside:

But I had to scrap that once I wanted to interpolate values on a medium level first and then a high level second, so I rebuilt it like this:

Basically, there are a few things going on here. Because this manages interpolation functions, and the fades need to be real-time, it’s run off of the Event Tick.

First, I set a float called MovementControlValue, which is basically the current state of the interpolation. The interpolation is always trying to move toward a target value. The target toggles between two values depending on whether player controls are active (is the player moving?), which it reads from the Player Controller Blueprint via a Blueprint Interface. An additional comparison selects between two different interp speeds based on whether the value is greater or less than 1 (the middle value). This effectively makes the interpolation speed depend on whether the player is moving: faster if they are, slower if they are not. The result is a ramp up that is faster than the ramp down.
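In plain code, one tick of that update looks roughly like this (a sketch: `interp_to` is modeled on the engine’s float interpolation, the speeds and the 0-to-2 range are stand-ins for my actual values, and for simplicity the speed is picked from the movement flag rather than from the comparison against the midpoint):

```python
def interp_to(current, target, delta_time, speed):
    """Move `current` a fraction of the way toward `target` each tick."""
    if speed <= 0.0:
        return target
    alpha = min(delta_time * speed, 1.0)
    return current + (target - current) * alpha

def tick_movement_control(value, is_moving, delta_time,
                          ramp_up_speed=4.0, ramp_down_speed=1.0):
    """One Event Tick's worth of the MovementControlValue update."""
    target = 2.0 if is_moving else 0.0   # toggled by the Player Controller
    speed = ramp_up_speed if is_moving else ramp_down_speed
    return interp_to(value, target, delta_time, speed)
```

With a 0.1-second tick, the stationary-to-moving transition jumps from 0.0 to 0.8 on the first tick, while the ramp back down only drops about 0.2 per tick from the top, giving the fast-attack/slow-release feel described above.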

After setting the MovementControlValue float, we set an AudioComponent float parameter. These are parameters linked to a Continuous Modulator inside the SoundCues. There are two values here: Med Status and Act Status. The “Status” in each case refers literally to the volume value of the respective Continuous Modulator. Having two of them lets me set the volume level of the Medium music layer independently of the Active music layer.

After the first Set Float Parameter, we do some fuzzy math to “interpret” the MovementControlValue into a usable volume float. I very crudely decide the minimum audible MovementControlValue and then clamp that value into a usable volume range of 0.01 to 1.00.
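A plain-code sketch of that mapping (the 0.5 cutoff and the 0-to-2 control range are made-up stand-ins for the values I eyeballed):

```python
def control_to_volume(control, min_audible=0.5, max_control=2.0,
                      min_vol=0.01, max_vol=1.0):
    """Map MovementControlValue onto a clamped volume range."""
    # Scale [min_audible, max_control] linearly onto [min_vol, max_vol]...
    t = (control - min_audible) / (max_control - min_audible)
    vol = min_vol + t * (max_vol - min_vol)
    # ...then clamp, so the floor stays at 0.01 rather than 0.00.
    return max(min_vol, min(max_vol, vol))
```

Note the clamp bottoms out at 0.01 instead of 0.00, so every layer keeps playing even when it is effectively inaudible.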

This is a pretty “janky” system, but it works.

One thing that I found to be very important is setting the minimum volume to 0.01. This means all the layers are always playing; they’re just nearly inaudible. This seems to be the only way to ensure the crossfade is sample accurate, because the system appears not to play sounds that have a volume of 0.00.

I hope that was a decent elaboration of how I created this so far; let me know if there are other questions or if something needs clarification.

My exploration was waylaid by an attempt to build a version of UE4 with the Wwise engine integration, which ran into several weird bugs that I think have more to do with my system than anything else. Software development… :rolleyes:

Thanks!

Wrapping my head around it step by step…

Is there a reason for updating “Med Status” and “Act Status” after they have already been used to set the AudioComponent parameters, instead of before? The latter makes more sense to me. Right now there will be a one-tick delay in the fading. Or am I missing something?

Danthr, this is really great stuff, thanks for sharing. What would your approach be to branching instead of layering cues? I’m doing a practice cue where I want to go from one 4-bar phrase to the next and so on. I used your Music Delay Duration to get this right, but for some reason I still get weird clicks when the cue loops. First time implementing in UE, so any advice is much appreciated. This is how it looks at the moment:

This is pretty cool! I’ll have to try this stuff sometime.