Back in October, Aida and I released “LnH: The Band”, a Deep Learning-based Twitter music bot that composes new music on demand in a few genres when you simply tweet at it. It has so far composed more than 700 new songs.
I have been interested for many years in the intersection of art, creativity, and technology, and recent advances in Deep Learning are enabling rapid progress in bridging those disciplines.
The algorithms and models that power LnH’s music generation are now live on Twitter. Tweet at the bot in the following format and you will get a composition back:
@lnh_ai Dancing robots /compose /g heavy-metal /t 160 /r 0.2
Check out http://lnh-music.ymer.org/ and https://twitter.com/lnh_ai for more information.
A bit more experimentation with sampling and generation: the following samples are generated by sampling from the embedding space rather than from the output probabilities. They sound (at least to me) a bit better. Here are a few tracks:
sample track

So I’ve been playing with different models for algorithmic generative music for a while as a side project. Lately, the models I’ve been building are starting to sound better than garbage.
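To make the embedding-space idea concrete, here is a toy sketch (not the actual LnH model) of one way it can work: instead of sampling a token from the softmax distribution, perturb the predicted embedding with noise and emit the token whose embedding is nearest to it.

```python
# Toy illustration of sampling in embedding space rather than from
# output probabilities. All names and the noise model are my own
# assumptions, not the LnH implementation.
import numpy as np


def sample_from_embedding_space(pred_embedding, token_embeddings, noise=0.1, seed=None):
    """Return the index of the token nearest a noisy copy of the prediction."""
    rng = np.random.default_rng(seed)
    noisy = pred_embedding + noise * rng.standard_normal(pred_embedding.shape)
    dists = np.linalg.norm(token_embeddings - noisy, axis=1)
    return int(np.argmin(dists))


# Toy example: 4 tokens in a 3-d embedding space.
E = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
tok = sample_from_embedding_space(E[1], E, noise=0.01, seed=0)
print(tok)  # with tiny noise, the nearest token is the predicted one: 1
```

The `noise` parameter plays a role loosely analogous to temperature: larger noise makes it more likely that a neighboring token is emitted instead of the nearest one.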
I am learning Hadoop and needed to build the Hadoop plugin for Eclipse. Following this link, I managed to build one for Hadoop 1.2.1 and Eclipse Indigo on Kubuntu 12.04 LTS. Feel free to download it and let me know if it works for you.
This is exciting. In our computer vision group at London Studio, we have been working on an augmented reality gaming and storytelling PS3 peripheral called Wonderbook: an augmented reality book whose story comes alive on its pages. The first title, Book of Spells, is a collaboration with J.K. Rowling (author of the Harry Potter books).
It was announced at E3 2012, and here are a couple of early videos and trailers.