Abstract: The various kinds of media we encounter in daily life can shape how we understand music. In this paper, we present a demo, Mind Band, a cross-media artificial-intelligence composing platform that uses everyday life elements such as emojis, images, and humming. We base our system on the valence-arousal model: emotion analysis maps these life elements onto music pieces, which are generated by a Variational Autoencoder-Generative Adversarial Network (VAE-GAN) model. Users upload an emoji, image, or humming clip and retrieve emotionally related music pieces in return, giving an immersive experience. With this platform, everyone can be a composer.
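The abstract describes mapping analyzed emotions onto music via the valence-arousal model. A minimal sketch of that retrieval idea, assuming each music piece is tagged with a point in a 2-D valence-arousal space (the coordinates and catalogue names below are invented for illustration; the actual Mind Band pipeline derives both from learned models):

```python
import math

# Hypothetical catalogue: each piece tagged with (valence, arousal) in [-1, 1]^2.
music_catalogue = {
    "calm_piano": (0.6, -0.5),
    "upbeat_pop": (0.8, 0.7),
    "tense_strings": (-0.6, 0.6),
    "melancholy_guitar": (-0.5, -0.4),
}

def retrieve_music(valence: float, arousal: float) -> str:
    """Return the catalogue piece closest to the query emotion point."""
    return min(
        music_catalogue,
        key=lambda name: math.dist((valence, arousal), music_catalogue[name]),
    )

# A "happy" emoji might be analyzed as high valence, moderate arousal:
print(retrieve_music(0.9, 0.6))  # -> upbeat_pop
```

In the full system the query point would come from emotion analysis of the uploaded emoji, image, or humming, and the candidates would be VAE-GAN-generated pieces rather than a fixed catalogue.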