Application of Deep Neural Networks to Music Composition Based on MIDI Datasets and Graphical Representation

Published in International Conference on Artificial Intelligence and Soft Computing, 2019

Recommended citation: Dorobek M., Modrzejewski M., Rokita P. (2019) Application of Deep Neural Networks to Music Composition Based on MIDI Datasets and Graphical Representation. In: Rutkowski L., Scherer R., Korytkowski M., Pedrycz W., Tadeusiewicz R., Zurada J. (eds) Artificial Intelligence and Soft Computing. ICAISC 2019. Lecture Notes in Computer Science, vol 11508. Springer, Cham. https://link.springer.com/chapter/10.1007/978-3-030-20912-4_14

Abstract:

In this paper we present a method for composing and generating short musical phrases with a deep convolutional generative adversarial network (DCGAN). We train the network on a dataset of classical and jazz MIDI recordings. Our approach translates the MIDI data into graphical images in a piano-roll format suitable for the DCGAN, using the RGB channels as additional information carriers to improve performance. We show that the network learns to generate images that are indistinguishable from the input data and that, when translated back to MIDI and played back, contain several musically interesting rhythmic and harmonic structures. We describe and discuss the results of the conducted experiments, draw conclusions for further work, and give a short comparison with selected existing solutions.
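To make the graphical encoding more concrete, below is a minimal sketch of a MIDI-to-image conversion using the pretty_midi library. It is illustrative only, not the pipeline from the paper: the specific channel assignments (velocity brightness in green, note onsets in red) and the file names are assumptions chosen to show how extra RGB channels can carry information that a single-channel piano roll loses.

```python
# Illustrative sketch (not the authors' exact pipeline): render a MIDI file as a
# fixed-size RGB piano-roll image. Channel assignments below are assumptions.
import numpy as np
import pretty_midi
from PIL import Image

def midi_to_rgb_piano_roll(midi_path, fs=16, width=64):
    """Render the first `width` frames of a MIDI file as a 128 x `width` RGB image."""
    pm = pretty_midi.PrettyMIDI(midi_path)
    # (128 pitches x T frames) matrix of note velocities, sampled at fs frames/sec
    roll = pm.get_piano_roll(fs=fs)[:, :width]
    img = np.zeros((128, width, 3), dtype=np.uint8)
    t = roll.shape[1]
    # Green channel: sustained notes, brightness scaled from MIDI velocity (0-127)
    img[:, :t, 1] = (np.clip(roll, 0, 127) * 2).astype(np.uint8)
    # Red channel (assumed encoding): mark note onsets, so repeated notes stay
    # distinguishable from a single long held note
    active = roll > 0
    onsets = active & ~np.pad(active, ((0, 0), (1, 0)))[:, :-1]
    img[:, :t, 0][onsets] = 255
    return np.flipud(img)  # low pitches at the bottom, as on a piano roll

if __name__ == "__main__":
    image = midi_to_rgb_piano_roll("example.mid")  # hypothetical input file
    Image.fromarray(image).save("piano_roll.png")
```

The onset channel is one example of why the extra channels help: a plain binary piano roll cannot tell two consecutive eighth notes apart from one quarter note, whereas an image with a dedicated onset channel preserves that rhythmic information for the DCGAN.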

This is a pre-print of an article published in ICAISC 2019 - Lecture Notes in Computer Science. The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-20912-4_14