Update: 18/09/2016

A bit more experimentation with sampling and generation: the following samples are generated by sampling from the embedding space rather than from the output probabilities. They sound (at least to me) a bit better. Here are a few tracks:


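For the curious, here is a minimal sketch of what embedding-space sampling *could* look like. This is not the actual implementation; every name below (`predicted_embedding`, `embedding_matrix`, `temperature`) is made up for illustration. The idea is to compare the vector the model predicts for the next event against the learned event embeddings and sample a neighbour by similarity, rather than drawing directly from the softmax output.

```python
# A minimal sketch of embedding-space sampling, assuming a model that
# predicts the next event's embedding vector directly. All names here
# are hypothetical.
import numpy as np

def sample_from_embedding_space(predicted_embedding, embedding_matrix,
                                temperature=1.0):
    """Sample the next event by similarity in embedding space instead of
    from the model's output probabilities.

    predicted_embedding: (d,) vector the model emits for the next step.
    embedding_matrix:    (vocab_size, d) learned event embeddings.
    """
    # Cosine similarity between the prediction and every event embedding.
    norms = (np.linalg.norm(embedding_matrix, axis=1)
             * np.linalg.norm(predicted_embedding))
    similarities = embedding_matrix @ predicted_embedding / (norms + 1e-8)

    # Turn similarities into a distribution and sample; a lower
    # temperature concentrates mass on the nearest neighbours.
    logits = similarities / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)
```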
So I’ve been playing with different models for algorithmic generative music for a while as a side project. Lately, the models I’ve been building are starting to sound better than garbage.

I will write at length later about the details of the models, but suffice it to say for now that they are close to Deep Dynamic Graphical Models (what else would they be ;-)) trained on a bunch of MIDI files. The algorithm is capable of modelling multiple polyphonic instruments as a whole (meaning multiple instruments playing multiple notes with different values/durations per instrument at the same time), and hence it models the band more or less in its entirety.
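To make the "whole band" idea concrete, here is a rough sketch of one possible input representation: stacking per-instrument piano rolls from a MIDI file into a single tensor, so that simultaneous notes across all instruments land in the same time step. The actual encoding used by the models isn't described here, so treat this purely as an illustration; it uses the `pretty_midi` library, and `fs` (time steps per second) is an arbitrary choice.

```python
# A sketch of representing "the whole band" as one training example:
# a binary tensor of shape (instruments, 128 pitches, time steps).
# This is an assumed representation, not the author's actual encoding.
import numpy as np
import pretty_midi

def midi_to_band_tensor(path, fs=16):
    """Stack per-instrument piano rolls so that notes played at the same
    moment by different instruments share a time step."""
    pm = pretty_midi.PrettyMIDI(path)
    rolls = [inst.get_piano_roll(fs=fs) > 0 for inst in pm.instruments]

    # Pad every roll to the length of the longest one, then stack along
    # a new instrument axis: the model sees all parts jointly per step.
    length = max(r.shape[1] for r in rolls)
    rolls = [np.pad(r, ((0, 0), (0, length - r.shape[1]))) for r in rolls]
    return np.stack(rolls).astype(np.float32)  # (n_instruments, 128, T)
```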

The following samples are generated from a heavy metal model with a drummer, a bass guitarist, and two electric guitarists (lead and rhythm).