The new audio engine and synth component are here and I thought that making a random music generator would be a great way to test out the new features.
The music generator works by picking a random entry from an array of musical note values. The user can choose a musical scale/mode that the generator picks its notes from. I’ve only implemented four sets of music notes so far: Major, Minor, Mixolydian and Hungarian Gypsy, but more will be added when I find the time…
The musical styles that it plays can be controlled by the end user, but I’ll also add logic to change style based on ‘moods’, which will smoothly mix between styles over whatever duration the user wishes. Every play-through can be unique, or the output can be reproduced from a seed.
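The note-picking and seeding described above can be sketched outside of Blueprints. Here’s a minimal Python version, assuming standard interval definitions for the four scales; the function names and the root/octave parameters are illustrative, not the actual BP setup:

```python
import random

# Scale intervals in semitones from the root (standard definitions).
SCALES = {
    "Major":           [0, 2, 4, 5, 7, 9, 11],
    "Minor":           [0, 2, 3, 5, 7, 8, 10],
    "Mixolydian":      [0, 2, 4, 5, 7, 9, 10],
    "Hungarian Gypsy": [0, 2, 3, 6, 7, 8, 11],
}

def build_note_array(scale, root=60, octaves=2):
    """Expand a scale into an array of MIDI note numbers above the root."""
    return [root + 12 * o + i for o in range(octaves) for i in SCALES[scale]]

def pick_note(notes, rng):
    """Grab a random entry from the note array -- the core of the generator."""
    return rng.choice(notes)

notes = build_note_array("Hungarian Gypsy")
rng = random.Random(42)   # a fixed seed makes the 'composition' reproducible
melody = [pick_note(notes, rng) for _ in range(8)]
```

Re-seeding with the same value replays the exact same sequence, which is all the “set by a seed” behaviour amounts to.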
I’m currently working on making it sound more musical, particularly when it plays chords with certain scales selected. The Hungarian Gypsy scale, for example, sounds pretty awful in chords and really only works when it’s playing a solo.
Playing around with this has reminded me of a VSTi and Windows standalone synth I wrote 7-odd years ago. It was a Yamaha CS-5 clone and I spent a huge amount of time making it sound as close as possible to the real thing. If you’d like to have a go with it you can find it here: http://yamamavcs5.weebly.com/
Found this on the Audio Engine thread… First off, this is lovely. Secondly, thanks for the example – I’ve only gotten as far as making some basic tones in a BP and I’m already giddy!
Looking forward to seeing what you create with these amazing new tools…
I’ve done a bit of tidying up around the trigger logic and added wav file playback. All the triggering occurs randomly based on user-defined weights, to give some sense of control. I’ve made a separate timing class which fires the events, and the synth class decides whether it wants to play anything or not. I also spent some time making a proper chord progression generator as a separate BP; the plan is to have it work alongside this BP to generate some semi-ordered music rather than the chaotic output it currently spits out. I’ll try to upload a video of that too once I’ve figured out a couple of things…
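The “timing class fires, synth decides” split with user-defined weights can be sketched roughly like this in Python (the part names and weight values are made up for illustration):

```python
import random

def should_play(weight, rng=random):
    """Called on each trigger event from the timing class: 'weight' in [0, 1]
    is the user-defined probability that the synth responds to this event."""
    return rng.random() < weight

# Hypothetical per-part weights -- a dense hi-hat part vs a sparse bass line.
weights = {"hats": 0.9, "bass": 0.25}
rng = random.Random(7)
pattern = {part: [should_play(w, rng) for _ in range(16)]
           for part, w in weights.items()}
```

The weight gives the “sense of control” mentioned above: 1.0 plays on every trigger, 0.0 never plays, and anything in between thins the part out probabilistically.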
This video demonstrates the new trigger logic and the wav file playback (I’ll eventually get it playing from the granular synth, but my machine keeps bombing out when I use it).
The video is a bit dry as the effects chain has been completely bypassed and I didn’t really do any mixing, so you’ll probably need headphones or decent speakers (i.e. non-laptop speakers) to hear the bass line.
This is really cool! I’ve been able to make a random note generator but am kinda lost on the rest of it (triggering and enveloping). Any chance we could get a short tutorial on how you set this up? I’m still relatively new to scripting in Unreal.
I’ll try to do a write-up at some point, but with work and stuff I struggle to find the time these days. I’m simply using a timer to trigger the notes at the moment. The ADSR envelope is just a case of feeding attack, decay and release a time value in milliseconds; the sustain is a gain amount in the range 0–1. I’m setting the envelope at runtime on tick because the values get changed dynamically, but you could set it in the construction script or on create if you don’t plan on changing the values at all.
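As a rough illustration of that envelope, here’s a linear ADSR sketch in Python — not the synth component’s actual implementation, just the shape described above, with attack/decay/release in milliseconds and sustain as a 0–1 gain:

```python
def adsr_gain(t_ms, attack_ms, decay_ms, sustain, note_len_ms, release_ms):
    """Linear ADSR gain at time 't_ms' since note-on.
    The release phase starts when the note ends at 'note_len_ms'."""
    if t_ms < attack_ms:                          # attack: ramp 0 -> 1
        return t_ms / attack_ms
    t = t_ms - attack_ms
    if t < decay_ms:                              # decay: ramp 1 -> sustain
        return 1.0 - (1.0 - sustain) * (t / decay_ms)
    if t_ms < note_len_ms:                        # sustain: hold the gain
        return sustain
    t = t_ms - note_len_ms                        # release: ramp sustain -> 0
    return max(0.0, sustain * (1.0 - t / release_ms))
```

Sampling this every tick and multiplying it into the oscillator output gives the envelope; real envelopes often use exponential segments, but the parameter meanings are the same.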
reminds me of something i made with NI Reaktor (released as “RandomDotEns”, Tegleg Records)
although everything was random including all parameters of synths etc
looking forward to having a mess with ue4’s new audio system
Those tunes are awesome, I’m getting Aphex Twin vibes. Do you have any human composed tunes floating around that I could hear?
I haven’t played with Reaktor for a good 10–12 years but did a couple of cool things in it back in the day… I also used to use PD, Max and Synthmaker (Flowstone). Synthmaker was great as it had VST export. I don’t really have any examples of what I made other than the Yamaha CS5 clone I posted: http://yamamavcs5.weebly.com/
I may dig out my old machine and see if there’s any of my projects still kicking around…
Some of the most fun I’ve had with audio tool development was with an open-source Java library called ‘beads’, whose functionality I massively extended. The best thing I made with it was an amen break slicer with a built-in semi-real-time quantizer, which was great for live jungle drums because you basically couldn’t screw up a performance. Simply bash on a key and it plays a slice at the next given interval (it queued them up), time-stretched to fit the slot based on a bunch of queued hits. There’s a massive jungle & breakcore scene round these ways, so it was a great way of doing live jungle stuff without spending yonks warping a break in Ableton or perfecting my timing skills… Haha, I’ll have to try and dig that out too… I should really start using version control…
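The snap-to-next-slot idea behind that quantizer is simple enough to sketch (Python; the function name, millisecond units and tempo are just for illustration):

```python
def quantize_hit(hit_time_ms, interval_ms):
    """Snap a (possibly sloppy) key press to the NEXT grid slot -- the hit is
    queued rather than played immediately, so it can't land off-beat."""
    slots_elapsed = hit_time_ms // interval_ms
    return (slots_elapsed + 1) * interval_ms

# A 16th-note grid at 120 BPM is 125 ms per slot (assumed tempo).
hits = [10, 130, 140, 260]
quantized = [quantize_hit(h, 125) for h in hits]
```

A real slicer would also queue colliding hits into successive slots and time-stretch each slice to the slot length, but the “next given interval” behaviour is just this integer arithmetic.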
Hi, I’ve taken a look at your post and it all looks okay to me. My set-up is not much different to yours, actually – perhaps mine is slightly simpler, as I haven’t got round to adding a bunch of the synth parameters yet and am only using a single oscillator at the moment. The reason I’ve not done much with the synth module in this project so far is that I’d end up spending more time tweaking parameters and listening to the cool sounds it produces than developing the actual music generation logic.
Okay, so the main differences between your synth setup and mine are as follows:
I’m triggering my synth module via timer-based events rather than an input event, and I’m passing a duration value to ‘note on’, so I never actually call ‘note off’. These choices obviously come down to project requirements, and the way you’re doing it is more fitting for your purpose. Other than that, I’m setting all my synth parameter values at run-time via the tick event. I’ll probably change this at some point to be set either by a custom event or a function call… either way, as long as I can change these at run-time I’m happy.
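The “note on with a duration, no explicit note off” approach can be sketched as a small scheduler that queues the note-offs itself (Python; the class and method names are illustrative, not a real synth API):

```python
import heapq

class DurationNoteScheduler:
    """Note-off is scheduled up front from the duration, so the caller
    only ever issues note-ons."""
    def __init__(self):
        self._offs = []                       # min-heap of (off_time, note)

    def note_on(self, note, start_ms, duration_ms):
        heapq.heappush(self._offs, (start_ms + duration_ms, note))

    def due_offs(self, now_ms):
        """Called from the tick/timer: pop every note whose time is up."""
        done = []
        while self._offs and self._offs[0][0] <= now_ms:
            done.append(heapq.heappop(self._offs)[1])
        return done
```

This suits timer-driven generative playback, where the generator knows each note’s length in advance; an input-event setup naturally uses explicit note-off instead, since the release time isn’t known until the key comes up.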
Regarding the music generation logic, it’s not too dissimilar to what you’re doing… I’m literally grabbing a random value from an array of note values and passing it to the synth. I’m not happy with it yet, though, and I’ve got a long way to go before I will be. I’m comfortable working with audio itself, but I’m a complete amateur when it comes to music theory and composition, so there’s much for me to learn before this sounds humanized.
here is a good site for learning about music theory with electronically produced sounds http://www.phy.mtu.edu/~suits/Physicsofmusic.html
it has stuff like the frequency of notes as well as musical scales etc
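The note-frequency relationship covered there comes down to twelve-tone equal temperament: each semitone up multiplies the frequency by 2^(1/12), anchored at A4 (MIDI note 69) = 440 Hz. A quick sketch (`midi_to_hz` is just an illustrative name):

```python
def midi_to_hz(note, a4_hz=440.0):
    """Twelve-tone equal temperament: each semitone is a factor of 2^(1/12),
    anchored at A4 (MIDI note 69) = 440 Hz by default."""
    return a4_hz * 2.0 ** ((note - 69) / 12.0)
```

So an octave (12 semitones) doubles the frequency, and middle C (MIDI 60) comes out at roughly 261.6 Hz.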
Cheers Tegleg, it’s a great site and I’ve visited it a few times in the past.
Tbh, I’m actually okay with all of that stuff – I’ve studied both music and audio tech for over 15 years and work professionally in the audio tech field. When I say amateur, I really mean in terms of music composition, especially when it comes to generative music and making the computer compose the audio. It’s not even so much the music logic that I struggle with, although implementing some of it in code is definitely mind-bending stuff. It’s more the psychology, or even philosophy, of music… what makes a composition good? etc…
I recently went to a talk by a scholar at Cambridge University who found, and is attempting to complete, an unfinished opera score by the Hungarian composer Franz Liszt. He explained how he went about doing that and it kind of blew my mind. It made me realize how little I know about composition, music theory and music psychology. He demonstrated on a piano the options he had for some of the incomplete sections and how he decided which option was the likeliest candidate. There were even pages of the score with only one or two notes written down, yet based on the previous and following sections he managed to complete them. It perhaps sounds easier than it is; surely the guy could just throw anything he wanted in there, and providing it was in the same key signature it would work, right? However, when he demonstrated this you could feel it wasn’t right, and I can’t even explain why… it goes beyond music theory as such and enters the realms of psychology.
The reason I mention this is that it’s essentially the ultimate goal of a generative music program: to be able to analyse all the available options and decide which is the most fitting for the composition. I can’t imagine how I’d even begin to do that just yet, but I’m definitely going to give it a try.