[Support] Orquesta Cue

**Hi all,

I'm not the developer of this plugin. (https://www.unrealengine.com/marketp…t/orquesta-cue)

I've created this post to better facilitate discussions beyond the Questions section.**


Hi ArvidHD,

Just following up on your answer to my question.

Thanks for sharing the documentation, I had a good read and checked out the TimeSynthComponent (https://www.youtube.com/watch?v=qaOGv0gNSms). It was a little confusing but that’s on me as I’m a novice in this area - please bear with me! :slight_smile:

From what I understand, you're offering a *super cue* that integrates nicely into gameplay logic at certain events during sound playback, e.g. OnLoop, OnStart and OnEnd. Please correct me if I'm wrong here. It looks very interesting, but I don't think it's going to help me with my current requirements. I'll outline them below in case I'm wrong.

Some quick fire questions:

  • In the Orquesta BP, can you call Subsystems?
  • Can the event in the Orquesta BP tell me which actor called the Orquesta Cue? Maybe I'm misunderstanding the use case, but I feel like this is where I'd put certain gameplay logic, e.g. OnEnd runs the BulletTime function.
  • I don't see why it wouldn't work, but is VR supported?
  • I assume Sound Classes attached to the sound asset are respected?

I’ll outline some use cases I have for music. I’m not entirely sure if this system will make it easier and cleaner to implement and maintain.

(1)
I currently have a music system that works but it’s a bit messy. Would like to clean up the implementation. It doesn’t seem like this is a use case for the plugin.

OnLevelStart: play ambient background music
OnBattleStart: fade out ambient -> play main song intro -> then move into the main song loop
OnDeath: lowpass filter currently playing music
OnRespawn: remove lowpass filter
OnSublevelComplete: lower volume, lowpassfilter? (not implemented yet)
OnNextSubLevelStart: continue back to normal loop
OnLevelEnd: play outro stinger sound, followed by end-level ambient music.

(it’s going to be a Hotline Miami style game)

(2)
I've been playing with level objects that scale up and down. It would be nice to capture the currently playing song and scale to the beat. I didn't see anything about this in the documentation, but it looks like a feature of the TimeSynthComponent (?).

Hi,

From the interface/tooling side, the Sound Cue was definitely the inspiration for how to set it up. It would have been awesome to just add quantization to Sound Cues, and my whole tool would have been obsolete :stuck_out_tongue: but it's not that easy. Because I don't have the resources to replicate all the Sound Cue features and "only" add quantization, I focused on music only.

Quick fire questions:

  • The Blueprint is like any other Blueprint. You can definitely call Subsystems.
  • I wouldn't put gameplay logic into the OrquestaBP; you have to see it more as a Director Blueprint for the OrquestaCue. You create the logic for musical transitions here, then create functions for them, which you call from an actor BP or a global MusicDirectorSubsystem.
  • I only have a Windows PC without VR, so I couldn't test it on other platforms, but it should run.

The OrquestaComponent, which is used to play the assets, is based on the engine's SynthComponent, the base for all the procedural audio generators Epic created, e.g. the GranularSynth or the TimeSynth. Internally it uses an AudioComponent, so you can do everything an AudioComponent can: Sound Classes, Submixes, etc. In more detail, the new AudioMixer allows you to hook into the audio stream and generate your own audio. This way, I can count the samples and control exactly when to start clips.

To summarize, the keyword is quantization. Imagine you have two audio tracks that you want to play perfectly aligned. If you use the default AudioComponent and start them at the exact same time, they will be aligned. But if you want to trigger one track later, depending on what happens in your game, it is not possible to control the alignment perfectly. The reason is that the audio is rendered on a separate thread, not on the game thread like in the old days. So there are small latencies when you trigger an AudioComponent, which you can hear, depending on the performance of your game. If you have a very low frame rate and you trigger at that moment, your music can be completely misaligned.
For audio that doesn't need this, e.g. ambient sounds, one-shot sounds, or music that is not layered and can be transitioned with a simple fade-in and fade-out, my plugin would be absolute overkill :-).

(1)
What you plan to do is totally feasible with AudioComponents and Blueprints if you do not need this quantized alignment. You could use a GameSubsystem as your music director, which knows all the music-playing AudioComponents in your game, and get creative ;-).
(2)
Epic has developed a plugin for this: https://docs.unrealengine.com/en-US/…sia/index.html

Thanks for the detailed responses, much appreciated. I get it now :slight_smile: Not something I need now but a super interesting submission to the marketplace. I’ll keep it in mind for future projects.

wow how did I miss that plugin - Synesthesia - thank you!

You are welcome :slight_smile: